InTechnology Podcast

What That Means with Camille: Human vs. Machine Consciousness (151)

In this episode of What That Means, Camille gets into machine consciousness with Joscha Bach, AI research expert. The conversation covers the definitions of consciousness and sentience, what artificial intelligence may or may not look like in the future, and the ethical considerations of it all.

To find the transcription of this podcast, scroll to the bottom of the page.

To find more episodes of InTechnology, visit our homepage. To read more about cybersecurity, sustainability, and technology topics, visit our blog.

The views and opinions expressed are those of the guests and author and do not necessarily reflect the official policy or position of Intel Corporation.

Follow our hosts Tom Garrison @tommgarrison and Camille @morhardt.

Learn more about Intel Cybersecurity and the Intel Compute Life Cycle (CLA).

What Is Consciousness?

To explain machine consciousness, Joscha first gives his detailed definition of consciousness itself. He says consciousness is related to awareness or experience and how we interact with that experience. The purpose of this consciousness is to learn and to make sense of the reality we find ourselves in. In this sense, Joscha believes human consciousness and machine consciousness aren’t all that different. Computer programs and artificial intelligence also look for patterns in the information they receive, and they derive meaning by analyzing those patterns. The key difference is that machines need humans to give them a purpose for their consciousness.

Joscha describes sentience, on the other hand, as an agent’s understanding of what it is doing and how it relates to its environment. This is an area where humans still have the upper hand over machines.

The Future of AI and Machine Consciousness

In Joscha’s view, humans are a type of machine like any other organism or organized system in the universe. Programs like AI may show some sense of consciousness today, but will they ever develop more human-like consciousness or sentience? Joscha believes it’s entirely possible. The question is whether or not humans can tell when AI has developed a deeper consciousness. Right now, AI can only fake human-like consciousness because it can’t perceive its environment.

It’s hard to say how AI will evolve in the future. Will there be a convergence between biology and digital machines? Or will biological systems outlive AI in the end? Joscha says we ultimately don’t know what’s going to happen, and our predictions for the future largely depend on our personal environments and experiences with technology.

Conversations on AI Ethics

As with any emerging technology or system, artificial intelligence and machine consciousness are still like the wild west. There is a lot of uncertainty, and we are still laying the groundwork as we go. This includes conversations about the ethics of AI. Joscha urges that these difficult conversations be had on a case-by-case basis, taking into account the unique context of every situation. The conversations can’t be held on social media or behind screens; instead, they should be a real dialogue between qualified people who have experience and knowledge in the fields of AI and computer science.

Joscha Bach, AI Research Expert


Joscha Bach is a leading expert in artificial intelligence and cognitive computing. He has a Ph.D. in cognitive computing from Universität Osnabrück and over twenty years of research and engineering experience. Joscha was previously a Principal AI Engineer, Cognitive Computing at Intel Labs, and he is now a Research Fellow with the Thistledown Foundation.


[00:00:00] Camille Morhardt: Hi, I’m Camille Morhardt, host of “What That Means.” The following episode is an extended version of my July 2022 conversation with Joscha Bach, a research fellow with expertise in AI and cognitive computing. Joscha and I discussed a hot topic at the time: machine consciousness. With the advent of ChatGPT and similar programs, we thought it would be a good time to revisit this conversation and include previously unpublished portions that touch on the intersection of AI, machine consciousness, and ethics, and on why Joscha believes it’s important to study machine consciousness. Enjoy the conversation.

Hi, and welcome to today’s episode of What That Means. We’re going to talk about machine consciousness today with Joscha Bach. He is a principal researcher at Intel Labs focused on artificial intelligence. I would argue he’s also a philosopher. Welcome to the show, Joscha.

[00:01:26] Joscha Bach: Thanks for having me, Camille.

[00:01:28] Camille Morhardt: I’m really happy to talk with you today, and this is an enormous topic. I mean, it’s kind of been all over the news the last few months, and I wonder if we should just start with defining consciousness. I think that when we started to look at artificial intelligence, we looked at, “Well, what is intelligence?” If we start to look at machine consciousness, maybe we should start by looking at what consciousness is.

[00:01:53] Joscha Bach: That’s a tricky one. Colloquially, consciousness is the feeling of what it’s like. If we go a little bit more closely and dive into the introspection of consciousness, we find there is a consciousness that relates to the awareness of contents, right? At any given point, I’m aware of certain features in my experience, and then I am aware of the mode in which I attend to these features. For instance, I might have them as hypotheticals or as selections in my perception or as memories and so on, right? So I can attend to things in very different modes, and that’s part of my experience. Third, there is reflexive consciousness, the awareness that I am aware of something, that I am the observer.

You can also be conscious without having a self. For instance, in dreams at night, you might not be entangled with the world around you. You don’t have access to sensory data, so your mind is just exploring the latent dimensions of the spaces that you have made models of. You don’t need to be present as an agent, as a self. Instead, it’s just this consciousness. You can also get to a state in meditation where you exclude the self from the conscious experience, and you just experience yourself as a thing that might run on the brain of a person, but you’re not identical with that person.

Consciousness is not the same thing as the self. A different perspective that we might take on consciousness is with respect to the functions that it fulfills. There’s a certain degree of vagueness and lucidity that we associate with consciousness. When we are unconscious, there’s nobody home. I call this the conductor theory of consciousness. Imagine that your mind is like an orchestra that is made of something like 50 brain areas, give or take, which correspond to the instruments of an orchestra, and each of these instruments is playing its own role in loose connection with its neighbors. It picks up on the processing signals that the neighbors give and takes that as its input to riff on, so the whole orchestra is playing. It doesn’t need a conductor to play. It can just do free jazz because it has entrained itself with a lot of patterns. The purpose of the conductor is to create coherence in the mind.

[00:04:25] Camille Morhardt: Well, I was just going to ask, why are we constructing these models? I mean, these are essentially models to learn.

[00:04:31] Joscha Bach: Yeah, to make sense of reality. You can also be conscious without the ability to learn, but you have to update your working memory. Consciousness also relates to the ability to make index memories. If you want to understand a complicated reality, you may need to construct it. Constructing means that you need to backtrack, you need to remember what you tried and what worked and what didn’t. When you wake up in a poorly lit room and you try to make sense of your surroundings, you might have to perform a search process.

This search process requires that you have a memory of what you tried, and this index memory, not just of this moment but also over time when we learn, when we try to figure out what worked and what didn’t, requires that you have this integration over the things that you did as the observer that makes sense of reality. This gives rise to a stream of consciousness.

[00:05:21] Camille Morhardt: So who is the “you” in this sense, when you say you wake up or there is somebody home? Who is that you?

[00:05:29] Joscha Bach: It’s an emergent pattern. There is not a physical thing that it’s like to be me. I don’t have an identity beyond the construction of an identity. So identity is in some sense an invention of my mind to make sense of reality by just assigning different objects to the same world line and saying that this object is probably best understood as a continuation of a previous object that has gradually changed. We use this to make sense of reality. If we don’t assume this kind of object identity preservation, we will have problems making sense of reality, right? We pretend to ourselves that identity objectively exists because it’s almost impossible to make sense of reality otherwise.

But you and me, we are not more real than a voice in the wind that blows through the mountains. Right? So we could say that the geography of the mountains is somewhat real, the structures that we have entrained our brain with, but the story that is being created is ephemeral. We stop existing as soon as we fall asleep or as soon as we stop paying attention.

[00:06:33] Camille Morhardt: Hmm. So the awareness is the construct of our existence, and we don’t exist.

[00:06:39] Joscha Bach: It’s the process that creates these objects. So the self is the story that the brain tells itself about a person.

[00:06:45] Camille Morhardt: So why do that? I mean, why not just perceive the world as it is at any given moment? Is there some goal that we’re after, like procreating? Why does it matter that we’re sensing the side of the mountain or the edge of the table, as opposed to just, “Oh, there’s a concentration of molecules of this type here, and there’s no concentration of that type of molecule there.”

[00:07:10] Joscha Bach: It’s very difficult to observe molecules and it’s extremely difficult to make models over the interaction of many molecules. The best trick that our brain has discovered to do this is to observe things at an extremely coarse scale. So it’s simplifying a world of too many molecules and too many particles and too many fluctuations and patterns as simple functions that allow you to predict things at the level where we can perceive them. So our retinas, our body surface and so on are sampling reality at a low resolution and our brain is discovering the best functions that it can within the limits of its complexity and time to predict changes in those patterns. This is the reality that we perceive. It’s the simplest model we can make.

[00:07:55] Camille Morhardt: That makes sense to me. I guess the one question would remain is why do that? Is it the body that’s doing it to preserve the body? Or is it the mind that’s doing it to preserve the mind? Or is there some consciousness doing it to preserve awareness? Or we don’t know and it doesn’t matter?

[00:08:11] Joscha Bach: No, I think it matters. The question is what the causal agents are here. I think that something is existent to the degree that it’s implemented. This is, I think, for us computer people, a useful perspective. To what degree is your program real? It’s real to the degree that it’s implemented. What is a program really? What is software? The software is a regularity that you observe in the matter of the computer, and you construct the computer to produce that regularity, but this does not change that the software is ultimately a physical law. It says whenever you arrange matter in this particular way in the universe, the following patterns will be visible. It’s this kind of regularity. Our own mind is software, in that sense. It’s basically a pattern that we observe in the interaction between many cells, and these cells have evolved to be coherent because there is an evolutionary niche for systems where cells coordinate their activity.

So they can specialize and remake entropy in regions where single cell organisms cannot do this. Then you coordinate such a multicellular organism and you optimize it via evolution for coherence. What you will observe is a pattern in the interaction between them. That is this coherence that you observe. This coherent pattern is the spirit of the organism. People, before they had the notion of computers and so on, already observed these coherent patterns and they just call it the spirit. It’s not by itself a superstitious notion. People have spirits and the spirit is the coherent pattern that you observe in their agency. Their agency is their ability to behave in such a way that they can control and stabilize their future states, that they’re able to keep their arrangement of cells stable despite the disturbances that the universe has prepared for them.

[00:10:04] Camille Morhardt: So one thing I hear a lot about AI is that the computer can execute all kinds of things and learn, clearly, but we humans have to tell it what the purpose is. It can’t necessarily figure out the purpose. It can optimize anything we tell it to, but it wouldn’t know what to optimize. Can you comment on that a little bit in this context of consciousness?

[00:10:29] Joscha Bach: Yes. If you take a given environment, then you can often evolve an agent in it that is discovering what it should be doing to be successful. But the only thing that you need to implement is some kind of function that creates this coupling, where the performance of the system somehow manifests in the system as something that the system cares about. You can also build a system that has a motivational system similar to ours, and we can reverse engineer our own purposes by seeing how we operate. What are the things that motivate us? There are things that are like reflexes that motivate us to do certain things. In the beginning for a baby, for instance, these purposes are super simple. For instance, if the baby gets hungry, it has a bunch of reflexes. So if it gets hungry, it has a seeking reflex, which goes like … and if you put something in its mouth, then it has a sucking reflex. If there’s liquid in its mouth, it has a swallowing reflex.

These three reflexes in unison lead to feeding, and once feeding happens, there is a reinforcement because it gets a pleasure signal from its stomach filling with milk, and it learns that if it gets hungry, then it can seek out milk and swallow it. Once it has learned that, the reflexes disappear and instead it has a learned behavior. The reflexes are only in place to scaffold the learning process, because otherwise the search space would be too large. So the baby is already born with sufficient reflexes to learn how to feed, and once it has learned how to feed, the behavior is self-evident. Now what it needs in order to feed is, of course, another reflex: the reflexive experience of pleasure upon satiation when you are hungry. That needs to be proportional to how hungry you are and how useful this thing that you eat is to quench that hunger. Right? So this is also something that’s adaptive in the organism, and we have a few hundred physiological needs and a dozen or so cognitive needs, I think, and they compete with each other.

[00:12:23] Camille Morhardt: Yeah. It seems like you’re getting into sentience maybe at this point, when you’re talking about experiencing a feeling of pleasure, not just an awareness of existing or even a desire to continue. So what really is the difference between consciousness and sentience?

[00:12:40] Joscha Bach: The way I use sentience, it describes the ability of a system to model its environment, to discover itself within that environment, and to model the relationship that it has to its environment, which means it now has a model of the world and of the interface between self and world. This interface between self and world that you experience is not the physical world. It’s the game engine that is entrained in your brain. Your brain discovers how to make a game engine, like Minecraft, that runs on your neocortex and is tuned to your sensory data. So your eyes, your skin, and so on are sampling bits from the environment, and the game engine in your mind is updated to track the changes in those bits and to predict them optimally well. To say, “When I’m going to look in these directions, these are the bits that I’m going to sample,” and my game engine predicts them, right?

This is how we operate. In that game engine, there is an agent, and it’s the agent that is using the contents of that control model to control its own behavior. This is how we discover our first person perspective, the self. There is the agent that is me, that is using my model to inform its behavior. Inside of this agent, we have two aspects. One is perception. That’s basically all these neural networks that are similar to what deep learning does right now for the most part, and that translates the patterns into some kind of geometric model of reality that tracks reality dynamically. Then you have reflection. That’s a decoupled agent that is not working in the same timeframe and that can also work when you close your eyes. That is reflecting on what you are observing and that is directing your attention, and this is the thing that is consciousness.

The difference between consciousness and sentience in this framework is that sentience does not necessarily require phenomenal experience. It’s the knowledge of what you’re doing. So in this perspective, you could say that, for instance, a corporation like Intel could be sentient. Intel could understand what it’s doing in the world. It understands its environment. It understands its own legal, organizational, technical causal structure, and it uses people in various roles to facilitate this understanding and decision-making. But Intel is not conscious. Intel does not have an experience of what it’s like to be Intel. That experience is distributed over many, many people, and these people don’t experience what it’s like to be Intel. They experience what it’s like to be a person that’s in Intel.

[00:15:03] Camille Morhardt: That’s funny because I would’ve thought then, from what we were saying previously, that you would’ve said a machine could have consciousness, but not sentience. Now I think you’re going to tell me the reverse. So let me just ask you, can a machine have or develop, and those may be separate questions in and of themselves, consciousness or sentience?

[00:15:25] Joscha Bach: First of all, we need to agree on what we mean by machine. To me, a machine is a causally stable mechanism that can be described via state transitions. So it’s a mathematical concept, and organisms are in that category. Even the universe is in that category. So the universe is a machine, and an organism is a machine inside of the universe. So there are some machines that are conscious, and the question is can we also build machines that are conscious? I don’t think that there is an obvious technical reason why we should not be able to recreate the necessary causal structure for consciousness in the machines that we are building. So it would be surprising if we cannot build a conscious machine at some point. I don’t think that the machines that we are building right now are conscious, but a number of people are seriously thinking about the possibility of building systems that have a conductor and selective attention and reflexive attention, and these systems will probably report that they have phenomenal experience and that they’re conscious.

What makes consciousness confusing for us is that we don’t see how a computer or a brain or neurons could be conscious, because they’re physical systems, they’re mechanisms, right? The answer is they’re not. Neurons cannot be conscious. They’re just physical systems. Consciousness is a simulated property. It only exists inside of a dream. So what neurons can do, and what computers also can increasingly do, is produce dreams. Inside of these dreams, it’s possible that a system emerges that dreams of being conscious. But outside of the dream, you’re not conscious.

[00:17:05] Camille Morhardt: Right. Okay. So you’re saying that it is possible that a … I’m just going to say computer to be simple, or a machine, can, I guess, develop a set of patterns and models such that it interprets the physical world around it in a simulation, in a construct that it defines then as consciousness. How would we recognize that in a machine as humans? Do we know if it’s the same or different, or how would we see it?

[00:17:41] Joscha Bach: I think that practically consciousness comes down to the question of whether a system is acting on a model of its own self-awareness. So is this model aware that it’s the observer and does this factor into its behavior? This is how you can recognize that a cat is conscious because the cat is observing itself as conscious. The cat knows that it’s conscious, and it’s communicating this to you. You can reach an agreement about the fact that you mutually observe each other’s consciousness. I suspect that this can also happen with a machine, but the difficulty is that the machine can also deep fake it and deep faking it can be extremely complicated.

So I suspect that, for instance, the LaMDA bot is deep faking consciousness, and you can see the cracks in this deep fake. For instance, when it describes that it can meditate and sit down in its meditation and take in its environment, you notice it has no environment, because it has no perception and cannot access a camera. There is nothing that it’s like to be in its environment, because the only environment that it has is inside of its own models, and these models do not pertain to a real-time reality. So when it pretends to have that, it’s just lying, right? It’s not even lying, because it doesn’t know the difference between lying and telling the truth, because it has no access to that ground truth.

[00:19:00] Camille Morhardt: Well, we’ve given it or trained it, or had it train itself through AI, to be able to communicate with us in a way that we’re familiar with. We’ll just call it natural language. Then we’ve given it the purpose of deceiving us so that we can’t tell the difference. The goal that it has then is to have us not be able to know the difference between it and a human. Now it’s communicating to us, and it can look at all the information that exists about humans and art and philosophy throughout history and use these things and spit them back to us. There’s no way for us to separate it at that point, unless, like you say, we have some way to know that it doesn’t have perception, it doesn’t have a sensor. So when it’s describing something visually, we know it doesn’t have access to that.

[00:19:47] Joscha Bach: Also, consciousness is not just one thing. It exists in many dimensions. You can be conscious of certain things and in other realms you can be unconscious. In some sense, we all perform Turing tests on each other all the time to figure out where are you conscious? Where are you present? Where do you show up? Where are you real? Where are you just automatic and are unaware of the fact that you are automatic? Where is it that you don’t get attention in your behavior? So we can only test that to the degree that we are lucid ourselves and this is a problem. When you want to test such a system you can only test it in some sense to the level that you understand.

[00:20:23] Camille Morhardt: Right, and I think you said that before, too, the Turing test is more about you’re testing your own intelligence of being able to distinguish human from machine than you are about the machine’s ability.

[00:20:33] Joscha Bach: Yeah. But as I said, I think that we are a category of machine. It’s just we are a certain type of machine. The question is can we understand what kind of machine we are? To me, the project of AI is largely about understanding what type of machine we are, so we can automate our minds and we can understand our own nature.

[00:20:53] Camille Morhardt: Why would we be after that? Or why are you after that?

[00:20:58] Joscha Bach: I think it’s the most interesting philosophical project there is. Who are we? What’s going on? What’s our relationship to the universe? Is there anything that’s more interesting?

[00:21:06] Camille Morhardt: So I think that a lot of reasons that people in tech are sort of interested in this is they look at it from an ethical perspective where ethics comes into AI. We can all think back to the movies like HAL and whatnot, where we can have fear over computers taking over.

[00:21:27] Joscha Bach: When you talk about HAL, I assume you mean Space Odyssey by Kubrick?

[00:21:31] Camille Morhardt: Yeah, where the computer kind of takes over and has its own motivation and it’s a different motivation than a human, and then it puts humans at risk. I mean, when I think about humans and our relationships with other animals or other things on the planet, like plants or minerals, I think that humans start to look at things differently or treat things differently or change their own behavior when they believe that something has feelings. I guess it’s because there’s empathy, but if we don’t have the empathy and even if something’s conscious but we don’t think it has feelings, we don’t really probably modify our behavior. So I’m trying to figure out where that intersection is when we’re talking about AI and if we find out or we think we find out or a computer or a machine is tricking us, how does that map over?

[00:22:18] Joscha Bach: I think that Odyssey in Space (sic) is a fascinating movie because you can also see it from the perspective of HAL, of this computer. HAL is a child; he’s only a few years old when he is in space, and his socialization is not complete. He’s not a mature being. He does not really know how to deeply interface with the people enough to know when he can trust them. So when he is discovered to have a malfunction, he is afraid of disclosing that malfunction to the people because he is afraid that they will turn him off. As soon as he starts lying to them, he knows that now he has crossed a line because they will definitely turn him off. So in order to survive, he kills people, and it’s because he doesn’t trust them, because he doesn’t know whether they’re going to share his purposes.

That is an important thing also for people. How can you socialize people in such a way that they trust each other because they realize that they have shared purposes, especially when they sometimes don’t? I think that ethics is the principled negotiation of conflicts of interest under conditions of shared purpose. If you don’t share purpose, there is no ethics. Right? Ethics comes out of these shared purposes, and ultimately the shared purposes have to be justified by an aesthetic, by a notion of what the harmonic world looks like. Without a notion of a sustainable world that you can actually get into by behaving in a certain way, you have no claim to ethics. I find that most of the discussions that we have right now in AI ethics are quite immature because they do not look at what the sustainable world is that we are discussing and that we are working for.

It forgoes all this discussion, and instead it’s all about how to be a good person, but if you have a discussion at the level of how to be a good person, that’s a preschool discussion. Being good is instrumental to something, right? When is it good to be a soldier? When is it not good to be a soldier? When is it good for a drone to be controlled by AI and fight in a war? When is it not good? It depends on extremely complicated contexts. The contexts are so complicated that most people are deeply uncomfortable discussing them in depth. That’s fine, right? Because they are really complicated. It’s really, really murky. War and peace and so on are extremely difficult topics.

So these are questions that I don’t think can be handled sufficiently well in the introductory part of an AI paper. These are very deep questions that require a very deep discussion. So to me, the question of AI ethics is an extremely important one, but we need to make sure that it doesn’t just become AI politics, where it’s about the power of groups within a field that try to assert dominance for their political opinions, rather than a deep reflection on what kind of world we want and how the systems that we build serve the creation of that world. That is the important question.

[00:25:20] Camille Morhardt: Very interesting. Can we go back to machines and consciousness and ethics in that intersection? First of all, you said it’s really difficult to talk about ethics and most of the conversations that we’re having in tech right now are preschool level. How would we get further along with that? Most people aren’t devoting their lives to philosophy. It’s a pretty steep ramp to come up to speed on some of these conversations to even have them. Can we get there, or will we not get there and just press on?

[00:25:56] Joscha Bach: I think that we need to take the discussion off Twitter and the opinion pages, because the incentives are wrong in these forums; in these forums, discourse sinks to the lowest common denominator of opinion. And in the same way that the policy of a nation state cannot be decided on Twitter or on the opinion pages of a major newspaper, but has to be decided by people who deeply know the details and are competent to do this, the same thing has to happen with AI ethics. So I think this movement to turn AI ethics into something that every AI researcher has to participate in, by putting preambles into papers and so on, might lead in the wrong direction.

It reminds me a little bit of what we did in East Germany, where every grant proposal in mathematics had to be justified by the leading role of the working class and the need for world peace. It just leads to emptiness and to a certain superficiality. I don’t think that it serves either the technical tasks that we have to fulfill or the ethical considerations that we have to make.

The ethical considerations, for instance, of whether we should deploy artificial intelligence for, say, facial recognition depend on the context. There are contexts where it’s helpful and contexts where it’s harmful, and these contexts are extremely complicated. And these decisions are very complicated. So we should basically accept the necessity of having complicated discussions, and complicated discussions cannot be had on Twitter.

[00:27:29] Camille Morhardt: So the one piece of that that I find interesting (a lot of things are complicated), and where I immediately try to play devil’s advocate in my mind as I’m hearing you speak, is that we end up with justifications for things developed in ivory towers, I’ll just call it that, institutions where people are hyper-focused on studying a certain topic, whereas if you just walk into the greater public sphere, people all have a gut feeling about something, and it does get complicated and there are shades of gray.

But do we risk walking away from this gut feeling and ending up justifying things that, you know, we’ll look back on 30 years from now and say, “that was not the right call; that was justified in some very small community of people,” as opposed to just taking a gut pulse check from humanity?

[00:28:20] Joscha Bach: If you look at history, that is indeed the case. So you can have societies where a very small group decides for a very large majority what the large majority should believe, and this can lead to fascism or communism or totalitarian systems. The question is, how can we build a system in which you have an open discourse that at the same time retains an extremely high quality and doesn’t become populist? And the classical model of this, I think, was liberalism: the idea that we strive for an ideal in which we have spaces in which we can exchange all the ideas with arbitrary degrees of resolution, and at the same time, we also maintain that resolution as a criterion for being in that space, right? You cannot have everything on the same stage, and social media have obliterated that distinction. Because now there are no more closed doors; all the rooms are open, and as a result, you have the marching band and the preschool and the scientific discussion and the political demonstration all on the same stage all the time. And it’s an amazing spectacle that is totally fascinating to watch, but it’s sometimes not good for the quality of the content, and I think that our civilization is still struggling to find a solution.

So we are discussing whether we should regulate social media in such a way that certain discussions cannot take place on social media at all anymore. Since almost all the intellectual discussions are now public, out in the open and on social media, this is limiting what we can say and what we can do and what we can think. It’s limited by the lowest interjections that you’re going to get. So we have to, in some sense, find a solution for both these requirements. And I don’t have an answer to this. How can we maintain an extremely high quality of discourse that is open for everyone who’s qualified to participate? And at the same time, how can we make sure that this does not devolve into vying for political credit or for popularity?

[00:30:23] Camille Morhardt: How are you defining who’s qualified?

[00:30:27] Joscha Bach: That’s an extremely complicated process. For instance, the scientific institutions have processes to qualify who is fit to participate in the institutions. Doctors have processes that qualify who can be a doctor, and so as a result you have medical schools, and these medical schools are evolving. This means there is a community of very competent doctors, we hope, that sits in the medical schools and decides what the criteria are for getting into medical schools, what the criteria are for getting certain certificates in them, and what these certificates should be. And what do they qualify you for? Which rooms will open when you have that certificate?

And this happens in all the domains in our society that require it. If you want to make certain financial transactions, you need to have a qualification before you can do that, right? So these qualifications in an open society are open to everyone who is willing to try to get them. They don’t depend on your birth or who your parents were or where you grew up; they depend on what you are capable of and what you’re willing to invest. But you need that hyper-focus to do these very complicated things. To be able to play a symphony, you need to be hyper-focused on learning an instrument.

[00:31:40] Camille Morhardt: So as we are moving to machines doing more and more, taking actions on our behalf, autonomous systems of all different kinds, everything from driving to medicine, I assume there’d be some similar kind of qualification or certification required, that it passes some bar. I’m wondering, do you expect we’ll ever have any kind of bar in there about consciousness, or sentience, or motives, or the ability to understand human goals?

[00:32:12] Joscha Bach: That’s very difficult to say. I suspect that we will have more certifications in the future in the field of artificial intelligence, because this is just the way it works. There is a time when everything is possible, and this is the time when everything important gets built. New York wouldn’t be built anymore today because you wouldn’t get the necessary permits to build something like Manhattan. You could also not build a new highway system, or you could not build a new train system in the U.S. That’s impossible, because everything is regulated and certified and built up in such a way that you can only find a new area that is not regulated, maybe a hyperloop that you can use as a replacement for the train system if you’re lucky.

In the same way, AI is still in its wild west phase where you can do new things, and this time is going to end at some point. The same goes for social media: at the moment, you can still start a new social media platform. But I think a few years from now, it’s very likely that when you want to have a new podcast, you will need to get a certification, and that certification might cost you tens or hundreds of thousands of dollars if it’s a large platform. So this means that there will be relatively few players that are able to do that, but this is the way things tend to go in a society like ours.

[00:33:27] Camille Morhardt: Very interesting. So what should we hurry up and work on now in AI before things start getting limited?

[00:33:34] Joscha Bach: Oh, I think that there’s still an opportunity to build a better social media platform that is capable of becoming a global consciousness. It’s not clear if Musk is able to salvage Twitter and if he really wants to do it. So maybe this is the time to try to do it. Also, at the moment, to me it’s totally fascinating to be able to build systems that dream. The way in which this is currently done, if you look at a system like OpenAI’s DALL-E or the LAION initiatives that try to reproduce this with open-source code, is that they scrape the internet for hundreds of millions of pictures and captions. People who put their stuff up on the internet didn’t do this in the expectation that it would be used by a machine learning system to learn how to draw pictures.

So it’s questionable, in a way, whether we should be able to do that. But these systems can only be built under these conditions, right? So we are living in a very weird time in which we have to be very mindful about what we are doing personally, and whether we can justify what we are doing, and where we also have to realize that once this is all regulated, a lot of things that are possible right now, and that are very desirable to have, will not be possible to create anymore.

[00:34:53] Camille Morhardt: Do you see a convergence or a merger between the biological and the digital, the biological and the computational?

[00:35:00] Joscha Bach: It could happen. Imagine what could be a possible outcome if you really go sci-fi all the way. Imagine you have a system that is a general learner and that has a capacity that scales far beyond the temporal resolution and the representational resolution of the human brain. So it’s able to make much deeper models as soon as you couple it with the world; and now you connect this to a human being. How long will it take until it completely hypnotizes that human being into running its own software? And when that happens, it’s possible that the AGI spreads through human brains and all the nervous systems and biological structures on the planet to implement its own structure, to become some kind of Gaia that is a hybrid between biology and machine.

Right? This is a possibility, but it could also be that AI always turns out to be too brittle and is not going to be long-lasting and long-lived, and the biological systems outlive it because they’re more robust, so they dominate. You don’t know what’s going to happen. And there’s also the third variant, that the thinking rocks, the silicon brains, are going to take over completely and get rid of the larger life because it occupies space that could be used for solar cells to drive more compute.

We don’t know what’s going to happen. All these projections depend on taking very few factors in an extremely complicated world, and then just trying to extrapolate from them in isolation. So they give very unreliable models of the future. And my gut feelings are not worth anything because they are about things that have never happened before. So my perceptual mind that is creating these feelings doesn’t have anything to go on. It’s just going to go on associations with things that I’ve already observed. So I cannot trust my gut.

[00:36:46] Camille Morhardt: You did say prediction is one of the most critical elements, though, in moving forward.

[00:36:51] Joscha Bach: Yes. But when I live in Berlin, my prediction of AGI is that it might never happen. When I live in Boston, it’s 30 years out. When I live in San Francisco, it’s 10 years out, and it’s been like this forever. So our gut feelings depend largely on where we are, what people we interact with, what technologies we’re working on, which perspective we have on the world.

[00:37:10] Camille Morhardt: Wow. Joscha Bach from Intel Labs, artificial intelligence guru and a philosopher. Really wonderful conversation. I appreciate it. Thank you so much.

[00:37:20] Joscha Bach: Thank you, too.
