InTechnology Podcast

#105 – What That Means with Camille: Machine Consciousness

In this episode of Cyber Security Inside’s What That Means, Camille takes a philosophical journey through machine consciousness with Joscha Bach, Principal AI Engineer, Cognitive Computing at Intel. The conversation covers:

  • What machine consciousness is and how it is defined by the people working on it.
  • What it means to be conscious as a human, and what a machine really is.
  • The difference between consciousness and sentience, and how we can tell when a computer has reached either.
  • The ethics of artificial intelligence and machine consciousness, and what to consider as we continue to develop these technologies.

And more. Don’t miss it!

 

To find more episodes of Cyber Security Inside, visit our homepage at https://intechnology.intel.com. To read more about cybersecurity topics, visit our blog at https://intechnology.intel.com/blog/

The views and opinions expressed are those of the guests and author and do not necessarily reflect the official policy or position of Intel Corporation.

 

Here are some key takeaways:

  • What is machine consciousness? Well, it might be worthwhile to figure out exactly what “consciousness” is first. Joscha says that it relates to awareness or experience and how we interact with that experience. It is also what it feels like to be.
  • So what is the purpose of creating consciousness? To learn. To make sense of reality. To make index memories, and to construct new ideas to understand a complicated reality. Does this relate to identity? Joscha thinks that identity doesn’t really exist, except to help us make sense of reality. Essentially, as Joscha says, “The self is the story that the brain tells itself about a person.”
  • We have to perceive the world in a simpler form. There are far too many molecules for us to take in and process all of that information, so as humans we perceive it as simply as we can, at a low resolution.
  • But why do humans do this? Some say to preserve their future, others say to optimize for evolution, and others still have different ideas. Programs and AI do this as well. They look for patterns and develop meaning from those patterns.
  • Many say that the key difference between AI and humans is that machines need to be given a purpose for their consciousness. However, if you put an agent in an environment, it will discover what it needs to do to be successful in that environment; its reflexes motivate it to do certain things and scaffold that learning (see the sketch after this list).
  • What is the difference between consciousness and sentience? Joscha uses sentience to talk about self-discovery in an environment and how an agent relates to that environment. Sentience is the knowledge of what you are doing and how it relates to your environment.
  • The two elements inside an agent are perception and reflection. Perception is taking the patterns you are seeing and creating a model to track reality. Reflection doesn’t necessarily work in the same timeframe, and is about directing your attention and reflecting on what you are observing. That is the part that is consciousness.
  • What is a machine? Joscha says anything that is stable and can be described via state transitions, including organisms. The big question is, can we create a machine that is conscious? He thinks there is no reason we could not, at some point.
  • But how will we tell when it has consciousness? When it is aware that it is the observer and that awareness is a factor in how it behaves. Joscha claims that the LaMDA bot is faking consciousness, because it has no perception and its models don’t pertain to real-time reality.
  • For Joscha, the biggest artificial intelligence project is about understanding what kind of machine we are, as humans, so that we can automate ourselves and understand our nature. Why? He says, “I think it’s the most interesting philosophical project there is. Who are we?”
  • When talking about ethics in AI, it is important to have a shared purpose. There can’t be ethics without a shared purpose, since ethics comes from that very purpose. People are afraid of robots taking over the world because they don’t trust computers. But in a movie like 2001: A Space Odyssey, HAL kills because he doesn’t trust the people not to turn him off.
  • Whether something is good, bad, or ethical is highly situational and depends on extremely complicated contexts, so much so that people are uncomfortable discussing them in depth. These questions are important to AI, but they can’t be settled in the introduction to an AI paper. There needs to be a deep reflection on how we want the world to be and how we create that world before we can discuss AI ethics.
  • AI is still in a phase where open development, testing, and experimentation are possible. In the US, and in the world in general, fields eventually become regulated: after a period of time you need certifications to do things, and the field becomes more exclusive. Because AI is not there yet, there is still an opportunity to build better platforms and change things for the better before it becomes much more difficult to do so.
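
The reflex-scaffolding idea Joscha describes later in the episode (innate reflexes bootstrap a behavior such as feeding until a learned, reward-driven policy takes over and the reflexes fade) can be illustrated with a small sketch. The code below is not from the episode; the stimuli, actions, reward values, and learning rule are all invented for the example and only gesture at the idea.

```python
import random

# Illustrative sketch (not from the episode): innate reflexes scaffold behavior
# until a simple reward-driven policy has learned the same responses, at which
# point the reflexes fade out.

STIMULI = ["hungry", "object_in_mouth", "liquid_in_mouth"]
ACTIONS = ["seek", "suck", "swallow", "ignore"]
REFLEXES = {"hungry": "seek", "object_in_mouth": "suck", "liquid_in_mouth": "swallow"}

values = {s: {a: 0.0 for a in ACTIONS} for s in STIMULI}  # learned action preferences
reflex_strength = 1.0  # probability of deferring to the innate reflex
alpha = 0.1            # learning rate

def act(stimulus):
    """Choose an action, collect a pleasure signal, and update the learned policy."""
    global reflex_strength
    if random.random() < reflex_strength:
        action = REFLEXES[stimulus]                               # reflex drives early behavior
    else:
        action = max(values[stimulus], key=values[stimulus].get)  # learned policy takes over
    reward = 1.0 if action == REFLEXES[stimulus] else 0.0         # successful feeding -> pleasure
    values[stimulus][action] += alpha * (reward - values[stimulus][action])
    if values[stimulus][action] > 0.8:                            # once learning is reliable,
        reflex_strength = max(0.0, reflex_strength - 0.05)        # the scaffolding fades
    return action

for _ in range(300):
    act(random.choice(STIMULI))

print(reflex_strength)  # approaches 0.0 as learned behavior replaces the reflexes
```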

 

Some interesting quotes from today’s episode:

“We pretend to ourselves that identity objectively exists because it’s almost impossible to make sense of reality otherwise. But you and me, we are not more real than a voice in the wind that blows through the mountains, right? So we could say that the geography of the mountains is somewhat real, the structures that we have entrained our brain with, but the story that is being created is ephemeral. We stop existing as soon as we fall asleep or as soon as we stop paying attention.” – Joscha Bach

“Our retinas, our body surface, and so on, are sampling reality at a low resolution and our brain is discovering the best functions that it can within the limits of its complexity and time to predict changes in those patterns. This is the reality that we perceive. It’s the simplest model we can make.” – Joscha Bach

“Our own mind is a software, in a sense. It’s basically a pattern that we observe in the interaction between many cells. And these cells have evolved to be coherent because there is an evolutionary niche for systems where cells coordinate their activity.” – Joscha Bach

“People, before they had the notion of computers and so on, already observed these coherent patterns and they just call it the spirit. It’s not by itself a superstitious notion. People have spirits and the spirit is the coherent pattern that you observe in their agency. Their agency is their ability to behave in such a way that they can control and stabilize their future states, that they’re able to keep their arrangement of cells stable despite the disturbances that the universe has prepared for them.” – Joscha Bach

“To me, a machine is a system that is a causally stable mechanism that can be described via state transitions. So it’s a mathematical concept and organisms are in that category. Even the universe is in that category. So the universe is a machine and an organism is a machine inside of the universe. So there are some machines that are conscious and the question is can we also build machines that are conscious?” – Joscha Bach

“What’s confusing for us to understand consciousness is that we don’t see how a computer or a brain or neurons could be conscious because they’re physical systems, they’re mechanisms, right? The answer is they’re not. Neurons cannot be conscious. They’re just physical systems. Consciousness is a simulated property.” – Joscha Bach

“I think that practically consciousness comes down to the question of whether a system is acting on a model of its own self awareness. So is this model aware that it’s the observer and does this factor into its behavior? This is how you can recognize that a cat is conscious because the cat is observing itself as conscious.” – Joscha Bach on how we would know a machine has consciousness

“We all perform Turing tests on each other all the time to figure out where are you conscious? Where are you present? Where do you show up? Where are you real? Where are you just automatic and are unaware of the fact that you are automatic? Where is it that you don’t get attention in your behavior? So we can only test that to the degree that we are lucid ourselves and this is a problem.” – Joscha Bach

“So there is a very weird time in which we are living, where we have to be very mindful about what we are doing personally and whether we can justify it, and where we also have to realize that once this is all regulated, a lot of things that are possible right now, and that are very desirable to have, will not be possible to create anymore.” – Joscha Bach


[00:00:36] Camille Morhardt: Hi, and welcome to today’s episode of What That Means, part of the Cyber Security Inside Podcast. We’re going to talk about machine consciousness today with Joscha Bach. He is a principal researcher in Intel Labs focused on artificial intelligence. I would argue he’s also a philosopher. Welcome to the show, Joscha.

[00:00:56] Joscha Bach: Thanks for having me, Camille.

[00:00:58] Camille Morhardt: I’m really happy to talk with you today, and this is an enormous topic. I mean, it’s kind of been all over the news the last few months, and I wonder if we should just start with defining consciousness. I think that when we started to look at artificial intelligence, we looked at, “Well, what is intelligence?” If we start to look at machine consciousness, maybe we should start by looking at what is consciousness.

[00:01:23] Joscha Bach: That’s a tricky one. Colloquially, consciousness is the feeling of what it’s like. There is a certain kind of experience that we have that makes consciousness very specific and distinct and so we know it indexically by pointing at it. If we go a little bit more closely and dive into the introspection of consciousness, we find there is a consciousness that relates to the awareness of contents, right? At any given point, I’m aware of certain features in my experience, and then I am aware of the mode in which I attend to these features. For instance, I might have them as hypotheticals or as selections in my perception or as memories and so on, right? So I can attend to things in very different modes, and that’s part of my experience. Third, there is reflexive consciousness, the awareness that I am aware of something, that I am the observer.

You can also be conscious without having a self. For instance, in dreams at night, you might not be entangled to the world around you. You don’t have access to sensory data, so your mind is just exploring the latent dimensions of the spaces that you have made models of. You don’t need to be present as an agent, as a self. Instead, it’s just this consciousness. 

Consciousness is not the same thing as the self. A different perspective that we might take on consciousness is with respect to the functions that it fulfills. There’s a certain degree of vagueness and lucidity that we associate with consciousness. When we are unconscious, there’s nobody home. I call this the conductor theory of consciousness. Imagine that your mind is like an orchestra that is made of something like 50 brain areas, give or take, which correspond to the instruments of an orchestra, and each of these instruments is playing its own role in loose connections with its neighbors. It picks up on the processing signals that the neighbors give and it takes that as its input to riff on them, so the whole orchestra is playing. It doesn’t need a conductor to play. It can just do free jazz because it has entrained itself with a lot of patterns. The purpose of the conductor is to create coherence in the mind.

[00:03:41] Camille Morhardt: Well, I was just going to ask, why are we constructing these models? I mean, these are essentially models to learn.

[00:03:47] Joscha Bach: Yeah, to make sense of reality. You can also be conscious without the ability to learn, but you have to update your working memory. Consciousness relates also to the ability to make index memories. If you want to understand a complicated reality, you may need to construct. Constructing means that you need to backtrack, you need to remember what you tried and what worked and what didn’t. When you wake up in a poorly lit room and you try to make sense of your surroundings, you might have to go through a search process.

This search process requires that you have a memory of what you tried and this index memory, not just of this moment but also over time when we learn, when we tried to figure out what worked and what didn’t, requires that you have this integration over the things that you did as the observer that makes sense of reality. This gives rise to a stream of consciousness.

[00:04:37] Camille Morhardt: So who is the you in this sense, when you say you wake up or there is somebody home? Who is that you?

[00:04:45] Joscha Bach: It’s an emergent pattern. There is not a physical thing that it’s like to be me. I don’t have an identity beyond the construction of an identity. So identity is in some sense an invention of my mind to make sense of reality by just assigning different objects to the same world line and saying that this object is probably best understood as a continuation of a previous object that has gradually changed. We use this to make sense of reality. If we don’t assume this kind of information, object identity preservation, we will have problems making sense of reality, right? We pretend to ourselves that identity objectively exists because it’s almost impossible to make sense of reality otherwise.

But you and me, we are not more real than a voice in the wind that blows through the mountains. Right? So we could say that the geography of the mountains is somewhat real, the structures that we have entrained our brain with, but the story that is being created is ephemeral. We stop existing as soon as we fall asleep or as soon as we stop paying attention.

[00:05:49] Camille Morhardt: Hmm. So the awareness is the construct of our existence, and we don’t exist.

[00:05:54] Joscha Bach: It’s the process that creates these objects. So the self is the story that the brain tells itself about a person.

[00:06:02] Camille Morhardt: So why do that? I mean, why not just perceive the world as it is at any given moment? Is there some goal that we’re after, like procreating? Why does it matter that we’re sensing the side of the mountain or the edge of the table, as opposed to just, “Oh, there’s a concentration of molecules of this type here, and there’s no concentration of that type of molecule there.”

[00:06:26] Joscha Bach: It’s very difficult to observe molecules and it’s extremely difficult to make models over the interaction of many molecules. The best trick that our brain has discovered to do this is to observe things at an extremely coarse scale. So it’s simplifying a world of too many molecules and too many particles and too many fluctuations and patterns as simple functions that allow you to predict things at the level where we can perceive them. So our retinas, our body surface and so on are sampling reality at a low resolution and our brain is discovering the best functions that it can within the limits of its complexity and time to predict changes in those patterns. This is the reality that we perceive. It’s the simplest model we can make.

[00:07:11] Camille Morhardt: That makes sense to me. I guess the one question that remains is why do that? Is it the body that’s doing it to preserve the body? Or is it the mind that’s doing it to preserve the mind? Or is there some consciousness doing it to preserve awareness? Or we don’t know and it doesn’t matter?

[00:07:27] Joscha Bach: No, I think it matters. The question is what are the causal agents here. I think that something is existent to the degree that it’s implemented. This is, I think, for us computer people, a useful perspective. To which degree is your program real? It’s real to the degree that it’s implemented. What is a program really? What is a software? The software is a regularity that you observe in the matter of the computer and you construct the computer to produce that regularity, but this does not change that the software is ultimately a physical law. It says whenever you arrange matter in this particular way in the universe, the following patterns will be visible. It’s this kind of regularity. Our own mind is a software, in a sense. It’s basically a pattern that we observe in the interaction between many cells, and these cells have evolved to be coherent because there is an evolutionary niche for systems where cells coordinate their activity.

So they can specialize and remake entropy in regions where single cell organisms cannot do this. Then you coordinate such a multicellular organism and you optimize it via evolution for coherence. What you will observe is a pattern in the interaction between them. That is this coherence that you observe. This coherent pattern is the spirit of the organism. People, before they had the notion of computers and so on, already observed these coherent patterns and they just call it the spirit. It’s not by itself a superstitious notion. People have spirits and the spirit is the coherent pattern that you observe in their agency. Their agency is their ability to behave in such a way that they can control and stabilize their future states, that they’re able to keep their arrangement of cells stable despite the disturbances that the universe has prepared for them.

[00:09:19] Camille Morhardt: So one thing I hear a lot about AI is that the computer can execute all kinds of things and learn, clearly, but we humans have to tell it what the purpose is. It can’t necessarily figure out the purpose. It can optimize anything we tell it to, but it wouldn’t know what to optimize. Can you comment on that a little bit in this context of consciousness?

[00:09:44] Joscha Bach: Yes. If you take a given environment, then you can often evolve an agent in it that is discovering what it should be doing to be successful. But the only thing that you need to implement is some kind of function that creates this coupling, where the performance of the system somehow manifests in the system as something that the system cares about. You can also build a system that has a motivational system similar to ours, and we can reverse engineer our own purposes by seeing how we operate. What are the things that motivate us? There are things that are like reflexes that motivate us to do certain things. In the beginning for a baby, for instance, these purposes are super simple. For instance, if the baby gets hungry, it has a bunch of reflexes. So if it gets hungry, it has a seeking reflex, which goes like … and if you put something in its mouth, then it has a sucking reflex. If there’s liquid in its mouth, it has a swallowing reflex.

These three reflexes in unison lead to feeding, and once feeding happens, there is a reinforcement because it gets a pleasure signal from its stomach filling with milk and it learns that if it gets hungry, then it can seek out milk and swallow it. Once it has learned that, the reflexes disappear and instead it has a learned behavior. The reflexes are only in place to scaffold the learning process because otherwise the search space would be too large. So the baby is already born with sufficient reflexes to learn how to feed and once it has learned how to feed, the behavior is self-evident. Now what it needs to feed is, of course, another reflex that is the reflexive experience of pleasure upon satiation when you are hungry. That needs to be proportional to how hungry you are and how useful this thing that you eat is to quench that hunger. Right? So this is also something that’s adaptive in the organism, and we have a few hundred physiological needs and a dozen cognitive needs, I think, and they compete with each other.

[00:11:39] Camille Morhardt: Yeah. It seems like you’re getting into sentience maybe at this point, when you’re talking about experiencing a feeling of pleasure, not just an awareness of existing or even a desire to continue. So what really is the difference between consciousness and sentience?

[00:11:56] Joscha Bach: The way I use sentience is that it describes the ability of a system to model its environment and it discovers itself and its environment and the relationship that it has to its environment, which means it now has a model of the world and the interface between self and world. This experience of this interface between self and the world that you experience is not the physical world. It’s the game engine that is entrained in your brain. Your brain discovers how to make a game engine like Minecraft, and that runs on your neocortex and it’s tuned to your sensory data. So your eyes and your skin and so on are sampling bits from the environment and the game engine in your mind is updated to track the changes in those bits and to predict them optimally well. To say, “When I’m going to look in these directions, these are the bits that I’m going to sample,” and my game engine predicts them, right?

This is how we operate it. In that game engine, there is an agent and it’s the agent that is using the contents of that control model to control its own behavior. This is how we discover our first person perspective, the self. There is the agent that is me, that is using my model to inform its behavior. Inside of this agent, we have two aspects. One is perception. That’s basically all these neural networks that are similar to what deep learning does right now for the most part, and that translates the patterns into some kind of geometric model of reality that tracks reality dynamically. Then you have reflection. That’s a decoupled agent that is not working in the same timeframe and that can also work when you close your eyes. That is reflecting on what you are observing and that is directing your attention, and this is this thing that is consciousness.

The difference between consciousness and sentience in this framework is that sentience does not necessarily require phenomenal experience. It’s the knowledge of what you’re doing. So in this perspective, you could say that, for instance, a corporation like Intel could be sentient. Intel could understand what it’s doing in the world. It understands its environment. It understands its own legal, organizational, technical causal structure, and it uses people in various roles to facilitate this understanding and decision-making. But Intel is not conscious. Intel does not have an experience of what it’s like to be Intel. That experience is distributed over many, many people and these people don’t experience what it’s like to be Intel. They experience what it’s like to be a person that’s in Intel.

[00:14:19] Camille Morhardt: That’s funny because I would’ve thought then, from what we were saying previously, that you would’ve said a machine could have consciousness, but not sentience. Now I think you’re going to tell me the reverse. So let me just ask you, can a machine have or develop, and those may be separate questions in and of themselves, consciousness or sentience?

[00:14:41] Joscha Bach: First of all, we need to agree on what we mean by machine. To me, a machine is a system that is a causally stable mechanism that can be described via state transitions. So it’s a mathematical concept and organisms are in that category. Even the universe is in that category. So the universe is a machine and an organism is a machine inside of the universe. So there are some machines that are conscious and the question is can we also build machines that are conscious? I don’t think that there is an obvious technical reason why we should not be able to recreate the necessary causal structure for consciousness in the machines that we are building. So it would be surprising if we cannot build a conscious machine at some point. I don’t think that the machines that we are building right now are conscious, but a number of people are seriously thinking about the possibility of building systems that have a quality conductor and selective attention and reflexive attention, and these systems will probably report that they have phenomenal experience and that they’re conscious.

What’s confusing for us to understand consciousness is that we don’t see how a computer or a brain or neurons could be conscious because they’re physical systems, they’re mechanisms, right? The answer is they’re not. Neurons cannot be conscious. They’re just physical systems. Consciousness is a simulated property. It only exists inside of a dream. So what neurons can do and what computers also can increasingly do is that they can produce dreams. Inside of these dreams, it’s possible that a system emerges that dreams of being conscious. But outside of the dream, you’re not conscious.

[00:16:21] Camille Morhardt: Right. Okay. So you’re saying that it is possible that a … I’m just going to say computer to be simple, or a machine, can, I guess, develop a set of patterns and models such that it interprets the physical world around it in a simulation, in a construct that it defines then as consciousness. How would we recognize that in a machine as humans? Do we know if it’s the same or different, or how would we see it?

[00:16:56] Joscha Bach: I think that practically consciousness comes down to the question of whether a system is acting on a model of its own self awareness. So is this model aware that it’s the observer and does this factor into its behavior? This is how you can recognize that a cat is conscious because the cat is observing itself as conscious. The cat knows that it’s conscious, and it’s communicating this to you. You can reach an agreement about the fact that you mutually observe each other’s consciousness. I suspect that this can also happen with a machine, but the difficulty is that the machine can also deep fake it and deep faking it can be extremely complicated.

So I suspect that, for instance, the LaMDA bot is deep faking consciousness and you can see the cracks in this deep fake. For instance, when it describes that it can meditate and sit down in its meditation and take in its environment, and you notice it has no environment because it has no perception, cannot access the camera. There is nothing what it’s like to be in its environment because the only environment that it has is inside of its own models and these models do not pertain to a real time reality. So when it pretends to have that, it’s just lying, right? It’s not even lying because it doesn’t know the difference between lying and saying the truth, because it has no access to that ground truth.

[00:18:16] Camille Morhardt: Well, we’ve given it or trained it or had it trained itself through AI to be able to communicate with us in a way that we’re familiar with. We’ll just call it natural language. Then we’ve given it the purpose of deceiving us so that we can’t tell the difference. The goal that it has then is to have us not be able to know the difference between it and a human. Now it’s communicating to us and then it can look at all the amount of information that exists about humans and art and philosophy all throughout the history of time and use these things and spit them back to us. There’s no way for us to separate it then at that point, unless you say … like you say, we have some way to know that. It doesn’t have perception, it doesn’t have a sensor. So when it’s describing something visually we know it doesn’t have access to that.

[00:19:02] Joscha Bach: Also, consciousness is not just one thing. It exists in many dimensions. You can be conscious of certain things and in other realms you can be unconscious. In some sense, we all perform Turing tests on each other all the time to figure out where are you conscious? Where are you present? Where do you show up? Where are you real? Where are you just automatic and are unaware of the fact that you are automatic? Where is it that you don’t get attention in your behavior? So we can only test that to the degree that we are lucid ourselves and this is a problem. When you want to test such a system you can only test it in some sense to the level that you understand.

[00:19:39] Camille Morhardt: Right, and I think you said that before, too, the Turing test is more about you’re testing your own intelligence of being able to distinguish human from machine than you are about the machine’s ability.

[00:19:48] Joscha Bach: Yeah. But as I said, I think that we are a category of machine. It’s just we are a certain type of machine. The question is can we understand what kind of machine we are? To me, the project of AI is largely about understanding what type of machine we are, so we can automate our minds and we can understand our own nature.

[00:20:09] Camille Morhardt: Why would we be after that? Or why are you after that?

[00:20:14] Joscha Bach: I think it’s the most interesting philosophical project there is. Who are we? What’s going on? What’s our relationship to the universe? Is there anything that’s more interesting?

[00:20:22] Camille Morhardt: So I think that a lot of reasons that people in tech are sort of interested in this is they look at it from an ethical perspective where ethics comes into AI. We can all think back to the movies like HAL and whatnot, where we can have fear over computers taking over.

[00:20:43] Joscha Bach: When you talk about HAL, I assume you mean Space Odyssey by Kubrick?

[00:20:47] Camille Morhardt: Yeah, where the computer kind of takes over and has its own motivation and it’s a different motivation than a human, and then it puts humans at risk. I mean, when I think about humans and our relationships with other animals or other things on the planet, like plants or minerals, I think that humans start to look at things differently or treat things differently or change their own behavior when they believe that something has feelings. I guess it’s because there’s empathy, but if we don’t have the empathy and even if something’s conscious but we don’t think it has feelings, we don’t really probably modify our behavior. So I’m trying to figure out where that intersection is when we’re talking about AI and if we find out or we think we find out or a computer or a machine is tricking us, how does that map over?

[00:21:34] Joscha Bach: I think that Odyssey in Space (sic) is a fascinating movie because you can also see it from the perspective of HAL, of this computer. HAL is a child, it’s only a few years old when he is in space, and his socialization is not complete. He’s not a mature being. He does not really know how to deeply interface with the people enough to know when he can trust them. So when he is discovered to have a malfunction, he is afraid of disclosing that malfunction to the people because he is afraid that they will turn him off. As soon as he starts lying to them, he knows that now he has crossed a line because they will definitely turn him off. So in order to survive, he kills people and it’s because he doesn’t trust them, because he doesn’t know whether they’re going to share his purposes.

That is an important thing also for people. How can you socialize people in such a way that they trust each other because they realize that there are shared purposes, especially when they sometimes don’t? I think that ethics is the principled negotiation of conflicts of interest under conditions of shared purpose. If you don’t share purpose, there is no ethics. Right? Ethics comes out of these shared purposes and ultimately the shared purposes have to be justified by an aesthetic, by a notion of what the harmonic world looks like. Without a notion of a sustainable world that you can actually get into by behaving in a certain way, you have no claim to ethics. I find that most of the discussions that we have right now in AI ethics are quite immature because they do not look at what is the sustainable world that we are discussing and that we are working for.

Instead, it forgoes all this discussion and it’s all about how to be a good person, but if you have a discussion at the level of how to be a good person, that’s the preschool discussion. Being good is instrumental to something, right? When is it good to be a soldier? When is it not good to be a soldier? When is it good for a drone to be controlled by AI and fight in the war? When is it not good? It depends on extremely complicated contexts. The contexts are so complicated that most people are deeply uncomfortable discussing them at depth. That’s fine, right? Because they are really complicated. It’s really, really murky. War and peace and so on are extremely difficult topics.

So these are questions that I don’t think can be handled in the introductory part of an AI paper sufficiently well. These are very deep questions that require a very deep discussion. So to me, the question of AI ethics is an extremely important one, but we need to make sure that it doesn’t just become AI politics, where it’s about the power of groups within a field that try to assert dominance for their political opinions rather than a deep reflection on what kind of world we want and how the systems that we build serve the creation of that world that we want. That is the important question.

[00:24:37] Camille Morhardt: Very interesting. So as we’re moving to machines doing more and more, taking actions on our behalves, autonomous systems, all different kinds of them, everything from driving to medicine, I assume there’d be some similar kind of a qualification or certification required, that it passes some bar. I’m wondering, do you expect we’ll have any kind of bar in there that’s something about consciousness ever, or sentience or motives or ability to understand human goals?

[00:25:12] Joscha Bach: That’s very difficult to say. I suspect that we will have more certifications in the future in the field of artificial intelligence, because this is just the way it works. There is a time when everything is possible, and this is the time when everything important gets built. New York wouldn’t be built anymore today because you wouldn’t get the necessary permits to build something like Manhattan. You could also not build a new highway system, or you could not build a new train system in the U.S. That’s impossible, because everything is regulated and certified and built up in such a way that you can only find a new area that is not regulated, maybe a hyperloop that you can use as a replacement for the train system if you’re lucky.

In the same way, AI is still in its wild west phase where you can do new things and this time is going to end at some point. At the moment, also in social media, you can still start a new social media platform. But I think in a few years from now, it’s very likely that when you want to have a new podcast, you will need to get a certification and that certification might cost you tens or hundreds of thousands of dollars if it’s a large platform. So this means that there will be relatively few players that are able to do that, but this is the way things tend to go in a society like ours.

[00:26:27] Camille Morhardt: Very interesting. So what should we hurry up and work on now in AI before things start getting limited?

[00:26:34] Joscha Bach: Oh, I think that there’s still an opportunity to build a better social media platform that is capable of becoming a global consciousness. It’s not clear if Musk is able to salvage Twitter and if he really wants to do it. So maybe this is the time to try to do it. Also at the moment, to me it’s totally fascinating to be able to build systems that dream. The way in which this is currently done, if you look at a system like OpenAI’s DALL-E or the LAION initiatives that tried to replicate this with open-source code, they scrape the internet for hundreds of millions of pictures and captions. People who put their stuff up on the internet didn’t do this in the expectation that this would be used by a machine learning system to learn how to draw pictures.

So it’s questionable, in a way, whether we should be able to do that. But these systems can only be built under these conditions, right? So there is a very weird time in which we are living, where we have to be very mindful about what we are doing personally and whether we can justify it, and where we also have to realize that once this is all regulated, a lot of things that are possible right now, and that are very desirable to have, will not be possible to create anymore.

[00:27:55] Camille Morhardt: Wow. Joscha Bach from Intel Labs, artificial intelligence guru and a philosopher. Really wonderful conversation. I appreciate it. Thank you so much.

[00:28:05] Joscha Bach: Thank you, too.
