InTechnology Podcast

What That Means with Camille: How Robots Learn (117)

 

In this episode of Cyber Security Inside What That Means, Camille sits down with Yulia Sandamirskaya. The conversation covers how robots learn and where we might see robots in the future – including in our homes! They talk about some of the concerns, things to look forward to, and what we can expect with robotics.

To find the transcription of this podcast, scroll to the bottom of the page.

To find more episodes of Cyber Security Inside, visit our homepage. To read more about cybersecurity topics, visit our blog.

 

The views and opinions expressed are those of the guests and author and do not necessarily reflect the official policy or position of Intel Corporation.

Follow our hosts Tom Garrison @tommgarrison and Camille @morhardt.

Learn more about Intel Cybersecurity and the Intel Compute Life Cycle (CLA).

 

How Humanlike Should a Robot Be?

There are so many different forms of robots, all the way from vacuum cleaners that can move themselves around to robots in Japan that look strikingly like humans. An interesting question is how close we want them to be to humans, since humans aren’t always the best or most efficient at doing tasks.

Yulia thinks a robot should be humanlike enough to be efficient in the environment we have created for humans, but that it should be better designed for the tasks it is accomplishing. Perhaps it needs wheels, or flight ability, or even just an arm that rotates differently than a human’s arm. 

 

Should We Be Afraid of How Robots Learn?

Robots are already in our homes, and more will be added. So should we be afraid that they continue to learn in their environments? It may actually be something to look forward to. Continual learning is tough, because if a robot is always learning, it is difficult to make sure that it will continue to do the correct thing.

This is why those who work on these algorithms make sure that the system is safe and controlled. In a fixed environment this is easy, since you can control everything about the environment. In a situation with continual learning, perception is key. The robot needs to be able to react appropriately to its environment, and that is difficult. This is why we use machine learning to train the systems.

These are complex sets of algorithms, and much of this capability is not here yet. So as we begin to plan for robots in our homes, it is important to take these things into account.

 

Privacy, Security, and Robot Learning

How do we stay secure when robots are in our homes doing tasks? A lot of this depends on what we keep local, on the machine, and what we send out for processing or to other networks.

If we think about Google Home or Alexa, we already have devices in our homes that send information out. We have already accepted some level of information and data sharing. It will be interesting to see how humans progress and what level of security is necessary and wanted.

 

Yulia Sandamirskaya, An Expert on How Robots Learn

Yulia Sandamirskaya is an Applications Research Lead in the Neuromorphic Computing Lab at Intel. She is an expert on all things robotics and neuromorphic computing. Make sure to check out Yulia’s paper that won the Best Paper Award at the International Conference on Neuromorphic Systems (ICONS) 2022!

See Yulia’s website

Visit Yulia’s LinkedIn page


[00:00:36] Camille Morhardt: Hi, and welcome to today’s episode of Cyber Security Inside, What That Means: how robots learn. We are going to talk with Yulia Sandamirskaya about neuromorphic computing and how robots continuously learn once they’re outside of a contained environment and have to learn new things all the time. At the very end of the episode, I am going to ask her specifically about some of the work that she’s done on mobile robotics applications on the Intel Loihi chip. So if you’re interested in hearing a little bit more detail about that and spiking neural networks, it’s a little more technical, it’s at the very end of the episode. In the meantime, let’s talk about robots in our homes and what this means for us. Welcome, Yulia.

[00:01:23] Yulia Sandamirskaya: Thank you. Great to be here.

[00:01:24] Camille Morhardt: So we’re excited to do… or I’m very excited to do this show today because Tesla Optimus is coming out, so I really wanted to get some insight from you.

[00:01:34] Yulia Sandamirskaya: So when we talk about robots, it’s important to distinguish what exactly we mean by a robot. For instance, in Japan, for many years we have seen these amazing robots, and they have this amazing, humanlike mimicry. And we have dishwashers, washing machines, and vacuum cleaner robots that look very different from humans doing these tasks. I’m really curious what we will see with Optimus, because the first thing that we see is, of course, the hardware, just the mechanics of the robot, which can be done in a really nice and ingenious way.

Now, we want these robots to move somewhat autonomously, and this is a different level. So, now, we are talking about software and algorithms that control the robots, and these algorithms answer the question of how to move a complex mechanical system around, and this can be done today really nicely. So the movements of a robot can be very smooth, can look very humanlike, but then the next level comes in: can the robot decide, know where to move to? Can it move towards a particular object, and grab some object, and give it to me? This brings such things as vision into play, which is another set of algorithms with a certain level of complexity. Today, there are not too many robots that can flexibly and easily use vision algorithms in their daily life.

[00:02:58] Camille Morhardt: I always wondered why all these robots look like humans, because I don’t know that we’re the most efficient at doing a lot of the tasks that we work on. Everything in our home is already designed so that we can do it. So putting something in the same shape, and then having it use the same tools that we already use, I don’t know.

[00:03:16] Yulia Sandamirskaya: Yeah. I think there should be some balance. We should find some great level of abstraction. So it should be humanlike enough to be efficient in this environment. So maybe something like a six- or seven-degree-of-freedom arm that is built like a human arm is a useful thing. We want it to move around; whether we want it to walk on legs or to roll on wheels, it might be more efficient if it’s just on wheels. So we might make some compromises, and it’s similar to the comparison between bird flight and airplane flight. People have taken some great features that really help, like the shape of the wing.

[00:03:44] Camille Morhardt: Right.

[00:03:45] Yulia Sandamirskaya: So we can copy from this structure, the human body, whatever is really useful for the task and practical.

[00:03:55] Camille Morhardt: What about this crossover? Like you say, our bodies… I mean, it seems to me we’re definitely headed in the direction of biology crossing over with the mechanical. So do you think that will somehow combine? I mean, I know the term “transhuman” is out there a lot, but I don’t necessarily mean that. Just this merging of the biological and the digital, or the computer, the biological and the mechanical.

[00:04:21] Yulia Sandamirskaya: In the long term, certainly, right? So I cannot imagine myself without my phone or my computer, right? A lot of my memory is offloaded to these devices, so they’re already part of my system, and we make things part of our system very quickly. There are all these experiments: if you give someone a tool and you work with this tool, neurons in your brain extend the representation of your body to this tool within minutes. So, certainly. If I had a robot arm that extends on the table in front of me and follows my commands, then very quickly I would just see it as part of something that I can control. If you look at prosthetics today, for instance, I think it’s quite amazing what has been achieved with prosthetic devices.

[00:05:04] Camille Morhardt: Wirelessly, and I guess I’ve seen demonstrations of prosthetic arms that are wireless. So even if the arm is not attached to the body, a person is moving it, and the arm can be moving somewhere else. So, I mean, that’s already out there, but…

[00:05:18] Yulia Sandamirskaya: Right. So, to help people who need assistance like that. But also augmenting people with a third arm, I can imagine that in some construction work or something like that, and that’s normal. We do it all the time, right? We extend our representation of our body to all the tools that we use.

[00:05:34] Camille Morhardt: Oh, okay. That’s fair, but what about the brain? Because neuromorphic computing is definitely attempting to be structured very similarly to the human brain. So why mimic that in compute? Why not do that completely differently?

[00:05:50] Yulia Sandamirskaya: So it’s quite fascinating what brains can achieve, and not only the human brain, but also the brains of animals and insects. A bee with a million neurons can build some representation of the environment, can then go find food, come back, communicate that to its fellow bees, can navigate and land efficiently, with a very compact, very energy-efficient computing system. I find it just fascinating and very inspiring, and I think if we could build computing systems like that, that could be very advanced, useful, and efficient technology. And this is the one and only example we know of that works: a system that can flexibly, adaptively learn and act in our natural environment.

[00:06:34] Camille Morhardt: Our brains, you mean?

[00:06:35] Yulia Sandamirskaya: Our brains or the neural systems? We don’t have another example yet.

[00:06:41] Camille Morhardt: When we have robots in our homes, robots continue to learn. Should we be afraid of that?

[00:06:48] Yulia Sandamirskaya: I think we should look forward to that, and hopefully, because of that, we’ll get robotic assistants that can actually be helpful. Now, when learning comes into play, that’s another level of complexity. Here, we have to distinguish offline learning, where, because of the complexity of the task, we train part of the algorithm that controls the robot offline, with a lot of data and with examples of the task being solved, and then we let this algorithm control the robot. What we try to achieve and explore in our work is continual learning, and this is difficult because, of course, if the system can learn continually, how can we guarantee that it does the correct thing? Usually, when people work on continual learning algorithms, they make sure that the system stays safe and controlled.

[00:07:33] Camille Morhardt: I’m trying to understand the difference between continual learning and learning in a fixed environment. I think I understand to some degree; obviously, when you send a robot out of a fixed location or a manufacturing setting, now it’s walking around town with a human or in a home. But help us understand the difference a little bit better.

[00:07:54] Yulia Sandamirskaya: In a fixed environment of a factory, we can control the objects that we want the robot to work with: where they’re located, how they look. We know exactly which object is where, and we don’t really need much flexibility. It’s not even about learning, but just how flexible we want the robot to be. We want it to go through a sequence of movements precisely; we want it to be productive and run without stopping. This is why in the classical factory, robots are usually put into a cage, so that you make sure that no human is in the way of the robot, because those robots very often have minimal perception, if any. They just execute a sequence of movements, like a sequence of program steps. The moment we want to bring robots into unstructured environments, shared with many humans who move around and can appear in the robot’s workspace unexpectedly, we need to make sure that the robot can react to its environment. So we need good perception, and this is one step away from the good old robotics that we know from the factory floor.

Now, perception happens to be really complicated. It’s really amazing how we are able to visually perceive our world and understand what is where. The complexity mounts: there are so many different objects, there are so many different lighting conditions, and we have this 3D perspective, so the same object can look very different in the real world. So when people tried to just write down a program and algorithm that would allow a robot to recognize things in an environment, it didn’t really scale well. It was difficult to scale it to all possible objects that a robot can encounter in natural environments, so machine learning came to help. With machine learning, you can give the system a lot of examples of all the objects, and you don’t have to think, “Which features shall I use to recognize and distinguish one object from the other one?” I just give many examples with labels, “This is this object. This is that,” and then I train the system, and this works fairly well.
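
To make this concrete, here is a minimal sketch of the supervised approach described above: show the system many labeled examples and let it learn to distinguish objects itself. The dataset, feature dimensions, and object names below are purely illustrative; a real robot would use camera images and a deep network rather than synthetic feature vectors and logistic regression.

```python
# A toy version of "give many examples with labels, then train the system."
# The data is synthetic: pretend each image is already a 64-dim feature vector.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
classes = ["cup", "plate", "vacuum"]                  # hypothetical household objects
labels = rng.integers(0, len(classes), size=600)
features = rng.normal(size=(600, 64))
# Give each class its own offset so there is actually something to learn.
features += np.eye(len(classes))[labels] @ rng.normal(size=(len(classes), 64))

X_train, X_test, y_train, y_test = train_test_split(features, labels, test_size=0.2)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))  # learned purely from labeled examples
```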

Today, in the state of the art, this is what people use: an offline-trained model of the world. But now this model might need to be changed. New objects come into play. I might not have thought about all possible situations that the robot will encounter. Or simply, I want my system to be compact and efficient, and maybe even run on the robot, so that I don’t have to send the data, like the video data, to the cloud and back, which means my neural network, my machine learning system, needs to be small and compact. If it’s small and compact, it cannot represent every possible situation in this world, and then I might want to be able to teach this robot a couple of objects that it needs to know from my household. So that’s how learning comes into play.
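
One simple way to sketch this kind of on-device teaching is a prototype classifier: a compact, fixed embedding model plus one stored mean embedding per object, so a couple of new household objects can be added from a handful of examples without retraining anything large. The class, method names, and dimensions below are hypothetical.

```python
# Sketch of teaching a compact on-robot model a few new objects, assuming
# embeddings come from some fixed, pre-trained feature extractor.
import numpy as np

class PrototypeClassifier:
    def __init__(self):
        self.prototypes = {}                     # object name -> mean embedding

    def teach(self, name, example_embeddings):
        """Add or update an object from just a few example embeddings."""
        self.prototypes[name] = np.mean(example_embeddings, axis=0)

    def recognize(self, embedding):
        """Return the known object whose prototype is closest."""
        return min(self.prototypes,
                   key=lambda n: np.linalg.norm(embedding - self.prototypes[n]))

rng = np.random.default_rng(1)
clf = PrototypeClassifier()
clf.teach("my_mug", rng.normal(loc=1.0, size=(5, 32)))       # five examples are enough
clf.teach("tv_remote", rng.normal(loc=-1.0, size=(5, 32)))
print(clf.recognize(rng.normal(loc=1.0, size=32)))           # -> "my_mug"
```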

[00:10:30] Camille Morhardt: Okay. I have so many questions. So let’s say you want the robot to vacuum. So you show it the vacuum cleaner; you show it the vacuum cleaner in all different lights, from all different angles. It’s using computer vision to perceive what this thing is so that it can understand that’s the object. Then, it has to learn to plug it into the wall. It has to learn the length of the cord. It has to learn that the edge of the carpet gets sucked up in the vacuum; that’s not good, we have to do it differently. How is it learning all that as it encounters different things that we forget to teach it? Right? You forgot to mention the carpet gets sucked into it.

[00:11:07] Yulia Sandamirskaya: Yeah.

[00:11:08] Camille Morhardt: How does it adapt to that kind of thing? Also, what is it really seeing or perceiving? Like if it’s vacuuming the floor, does it see the dust? Does it see anything else in its environment? Does it know you have Windex and soap piled up next to the vacuum, or is it only seeing the vacuum and only seeing the floor and not the dust? Do we know what it’s seeing?

[00:11:31] Yulia Sandamirskaya: So, first, one thing to note is that we’re talking about the future, so we don’t have such robots today. We’ll see. Maybe we’ll have one next week. But when we think about these robots and really think about learning here, then we probably have to distinguish different types of learning. This robot will need to learn different objects. It will need to be able to recognize and localize them in the environment, and this is one type of learning. It’s like for us: object learning is different from skill learning, for instance. Skills are different behaviors. For instance, if it needs to learn how to plug a plug into the wall, that’s a skill that is learned with different methods.

Now, the skills can be continuous, like the behavior when I plug the plug into the wall; it’s a continuous behavior. Or I might have a sequence of discrete behaviors, like if I have to clean up the table: there is a certain sequence to how I take glasses and plates, put them in the dishwasher, close it, let it start. Here, the sequence of behaviors could also be learned with reinforcement learning. Usually, that would take too long, so you don’t want to do it that way. It could be learned by imitation, so that the human shows the robot once, “This is how you can do it,” and then the robot just parses the sequence of actions, takes it as the basis, and then maybe does a little reinforcement learning on top of that to make sure that it got it right, or that each individual behavior in the robot’s execution matches the demonstration.

Then, on top of that comes, for instance, reasoning. So if the robot can recognize and localize objects in the environment, it might be able to build some model and reason on it. For instance, there’s a cup that is on the table; that would be spatial reasoning. And then close to this cup, there are some other objects. So if the human asks me, “Where is my key in the room?” and I see maybe two keys, I can ask, “Do you want the one that’s on the table or the one that’s on the shelf?” Then, language capabilities come on top of that; those also could maybe be learned. So the whole system is quite complex. I don’t think that there’s one learning algorithm that will allow the robot to learn it all. Yeah, that’s the vision. In practice, I’m pretty sure we will face many challenges when we start building systems like that.
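
As a rough illustration of the “show it once” idea for discrete sequences mentioned above, a demonstration can be recorded as an ordered list of steps and replayed as a plan, with learned low-level skills filling in each step. The action names, event format, and executor below are made up for illustration.

```python
# Toy sketch of parsing a single human demonstration into a discrete plan.
from dataclasses import dataclass

@dataclass
class Step:
    action: str
    target: str

def parse_demonstration(events):
    """Turn a stream of observed (action, object) events into an ordered plan."""
    return [Step(action, target) for action, target in events]

def execute(plan):
    for step in plan:
        # A real system would dispatch each step to a learned low-level skill
        # (grasp, place, press) and could refine it with reinforcement learning.
        print(f"executing: {step.action} -> {step.target}")

demo = [("pick_up", "glass"), ("place_in", "dishwasher"),
        ("pick_up", "plate"), ("place_in", "dishwasher"),
        ("close", "dishwasher"), ("press", "start_button")]
execute(parse_demonstration(demo))
```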

[00:13:36] Camille Morhardt: As we have robots out continuously learning among us, not in cages in factories, as you said, how do we set them up to protect humans?

[00:13:50] Yulia Sandamirskaya: So I would see each robot as a particular tool. I think in the beginning… So it’s great to see Optimus, and we’ll see how versatile it will be. But my vision is that the robots will be there for a particular task, and they will start with very simple tasks. It will be very clear that this is what this arm is doing: it goes to object A or position A and brings it to place B, and it won’t touch you on the way. If you are in its way, it’ll try to plan its path so it doesn’t touch you, and that’s it. That’s all it does. We can make sure that if a child happens to put its finger somewhere between the joints of the robot, the robot stops in time, and no injuries happen.

[00:14:32] Camille Morhardt: It seems like the amount of perception that it would have to do is so great. I know we’re doing it in cars to a degree today through computer vision and even predicting the way that a person is moving maybe 30 feet to the side of the road might imply they’re going to be in front of the car in some amount of time. I know that some of that can be pretty sophisticated, but when you’re talking about a robot in the house, how can you make sure that it’s safe other than just it can’t go too fast and it’s soft on the outside? If it’s got a task, isn’t it just going to do that task?

[00:15:06] Yulia Sandamirskaya: The fact that it will be in the home is actually good because you have this closed environment. Many things in this environment are stationary, so you make a good three-dimensional map of the environment once, of where everything is, and then you only keep track of changes, and you just need sensors that tell you when something is moving. When something gets closer to the robot, you need slightly better sensing to make sure you notice that. In the worst case, if the robot touches something, you can have sensing in the joints themselves. So if it knows it touched something that it hadn’t expected to touch, it will stop. Today, robots are certified to work around humans if they can do that reliably, with a couple of redundant mechanisms that just make the robot stop if it touches something unexpectedly.
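
Here is a toy sketch of that stop-on-unexpected-contact idea: compare measured joint torques against what the planned motion should produce, and halt if the deviation exceeds a threshold. The numbers, thresholds, and interfaces below are invented for illustration; real robots certify this behavior with redundant mechanisms in hardware.

```python
# Toy contact-detection loop: stop the arm if joint torques deviate from
# what the current motion is expected to produce.
EXPECTED_TORQUE = [1.2, 0.8, 0.5]        # Nm per joint for the planned motion (made up)
CONTACT_THRESHOLD = 0.3                  # Nm of allowed deviation (made up)

def unexpected_contact(measured):
    return any(abs(m - e) > CONTACT_THRESHOLD
               for m, e in zip(measured, EXPECTED_TORQUE))

def control_step(measured_torque, send_velocity):
    if unexpected_contact(measured_torque):
        send_velocity([0.0, 0.0, 0.0])   # command all joints to stop immediately
        return "stopped"
    send_velocity([0.1, 0.05, 0.0])      # otherwise continue the planned motion
    return "moving"

print(control_step([1.25, 0.82, 0.48], lambda v: None))  # -> moving
print(control_step([1.25, 1.40, 0.48], lambda v: None))  # -> stopped
```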

[00:15:52] Camille Morhardt: What about privacy? How can we feel like we have our privacy when there’s a machine that can actually learn living among us?

[00:16:00] Yulia Sandamirskaya: Yeah. So I think one answer and ambition would be to try to do all of this on board, to do the day-to-day processing on board the robot, not sending it to the cloud. I can imagine many people would be uncomfortable if images from the camera were sent to the cloud. The sound is maybe less critical, but the images are really critical. So I think all this processing that is about moving around safely should be done on board the robot, and this is where, again, neuromorphic computing or some other efficient computing comes into play, because we don’t want to send all that information to the cloud, where you can have a large model.

However, we also might want our overall robotic software to learn from all those examples of what the robot experiences in a particular home, and there are concepts like federated learning, where some learning happens on board the robot and only the result of this learning, the updated model, is sent for central processing and merging, and then the result of the merging is sent back to the robot instead of the raw data. So the raw data stays local. People are thinking about these issues; they are serious issues for the acceptability of these systems. On the other hand, we might get used to some things being sent out, like with Alexa and all these other devices. We accepted some level of information sharing, data sharing.
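
A minimal sketch of that federated pattern, with placeholder shapes and a stand-in for local training: each robot sends only updated weights, the server averages them, and the raw data never leaves the home.

```python
# Federated averaging in miniature: raw data stays on each robot; only model
# weights travel. The "training" here is a placeholder nudge toward local data.
import numpy as np

def local_update(weights, local_data):
    # Stand-in for on-robot training on data that never leaves the home.
    return weights + 0.1 * (local_data.mean(axis=0) - weights)

def federated_average(weight_updates):
    # The central server only ever sees model weights, never raw sensor data.
    return np.mean(weight_updates, axis=0)

rng = np.random.default_rng(2)
global_model = np.zeros(8)
for _round in range(3):
    updates = [local_update(global_model, rng.normal(size=(100, 8)))
               for _robot in range(5)]            # five homes train locally
    global_model = federated_average(updates)     # merged model is sent back
print(global_model)
```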

[00:17:26] Camille Morhardt: I have to ask because I know that you also get together with Joscha Bach, and I had him on a podcast a little bit ago talking about machine consciousness. Do you believe that there’s a possibility for machines or robots to have consciousness over time or now?

[00:17:43] Yulia Sandamirskaya: So I wouldn’t be very comfortable claiming much here, because the definition of consciousness itself is a little bit controversial, right? We don’t have a crystal clear definition that everyone agrees upon. One thing to maybe keep in mind is that our brains are also not something that just emerges from interaction with the environment, where there’s only learning and nothing else. There’s a lot of developmentally defined structure, evolutionarily defined structure in our brains. It’s the same in robots. They will be as smart as we program them to be, and learning will be part of that smartness, but we will also define the learning algorithm. We will define the cost functions, and it will all be task-related. So all those algorithms are part of an artificially designed machine for a particular task, for a particular goal. So I don’t see any place for anything like consciousness there. As long as we don’t really understand where exactly this phenomenon comes from in biology, we won’t be able to replicate it in a machine. I’m personally not worried about too much consciousness in robots.

[00:18:59] Camille Morhardt: Do you worry about anything? I know you’re very enthusiastic about them.

[00:19:03] Yulia Sandamirskaya: I’m worried a little bit about dual use. All these systems that can recognize things and recognize humans could be used for military purposes, and this is where it gets a little bit scary and uncomfortable. It can also be difficult for this technology to move forward, because public opinion might swing one way or another if it’s not 100% reliable. Not a single accident is forgiven. Humans have hundreds and thousands of accidents per day, but for the robot, it’s like one accident per year and you kill the business for the next year.

[00:19:34] Camille Morhardt: Mm-hmm.

[00:19:35] Yulia Sandamirskaya: There are also ethical issues: if some people are able to afford these robot helpers and others are not, it might lead to even more inequality. It could be problematic with jobs. If easy jobs are replaced and automated without making sure that the people who rely on these jobs are taken care of, or get some additional education and can do something else, it might also be problematic.

[00:19:58] Camille Morhardt: What do you think the future of robotics is? I mean, I think about looking back over the last 15 to 20 years and how much has changed. If you could think forward 15 or 20 years, do you have any sense of how things might change?

[00:20:15] Yulia Sandamirskaya: I would hope that in 10 to 15 years, we will have at least the first autonomous mechanical devices that can assist humans. Maybe not in every home, but maybe in hospital environments, in elderly care environments, maybe in manufacturing, or farming, or construction sites, taking over some repetitive, or boring, or dangerous jobs from humans and assisting them. A more general-purpose assistant robot, like a butler at home, is probably still further away, but on the other hand, these changes sometimes happen very quickly without our expecting it. Think of the iPhone, right, and where it is today from 2006. So, we’ll see.

[00:20:59] Camille Morhardt: I didn’t want to leave the podcast today, Yulia, without asking you about an award that you won for best paper at the recent ICONS 2022 conference, which is a neuromorphic computing conference. Can you elaborate on the difference between neuromorphic computing and the spiking neural networks that you used on the Intel Loihi chip?

[00:21:21] Yulia Sandamirskaya: Mm-hmm. So if we think about computing architectures, the computing architectures we usually deal with are fairly boring, right? It’s the same basic principle that goes all the way back to the first computers. You have a CPU, you have memory, maybe with different levels of memory, and then any computation requires us to go back and forth between the memory and the computing device. But if we have some massively parallel data that we need to process, and the massively parallel data could just be images that we get from the camera, these large matrices where all pixels come at the same time, we want to process them at the same time.

Today, we do this with neural networks, so we add even more parallelism. Now, we have millions and millions of neurons. They all have to act at the same time, but on a conventional processor, they cannot all be acted upon at the same time. They have to be processed sequentially, and we can do that, but potentially it consumes a lot of energy and can also take a lot of time. If we look at graphics processing units, they alleviate this problem a little bit, because they were built to do computer graphics, to create the images on our screens, and they are built for processing these parallel arrays of data, images. So they can do it much faster than a CPU-based architecture.

Now, neuromorphic computing is another type of computing architecture where we also have a massively parallel system. With Loihi, for instance, we have 128 cores on a single chip, and each core has local memory. Meaning, when I now want to update the variables of my large parallel system, I can do it very efficiently because the variables are stored close to the processor. So, now, I can update the state of these variables, my neurons. I can also update the connections between them, which are also stored locally, so I can update them efficiently too. This allows me to learn on the fly.

Now, the spiking aspect is connected to event-based processing. These neurons don’t work in a clock-like fashion. In conventional computing, in particular image processing, you get a new image from the camera every 30 milliseconds, and then when the new image is there, you do the processing with the clock of the processor, step by step. On a neuromorphic chip, typically, you wouldn’t have a clock. It’s asynchronous. Every neuron receives input when the input comes. It has some internal dynamics, so it integrates this input, keeps a state that is driven by the input, and then when the state reaches some threshold, it communicates with other neurons with discrete, typically binary, events. So it sends an event to a downstream neuron saying, “My variable reached the threshold. Now, you can act upon that.”

So this makes communication between neurons much sparser in time. You don’t have to transmit information at every time step. You only transmit information once in a while, ideally sparsely, so seldom, and you save a lot of energy by doing that. So with spiking neural networks, you’re in the realm of event-based asynchronous computing, which is faster and more energy-efficient for many tasks. This is what neuromorphic chips typically exploit, in particular Intel’s research chip, Loihi.
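
The dynamic described here, integrating input into an internal state and emitting a discrete event only when a threshold is crossed, is often written down as a leaky integrate-and-fire neuron. Here is a toy version in plain Python with arbitrary constants; real neuromorphic hardware such as Loihi implements this dynamic in silicon rather than in software.

```python
# Toy leaky integrate-and-fire neuron: input is integrated into a state, and
# the neuron only communicates (spikes) when the state crosses a threshold.
def lif_neuron(inputs, threshold=1.0, leak=0.9):
    state, spike_times = 0.0, []
    for t, x in enumerate(inputs):
        state = leak * state + x          # leaky integration of incoming input
        if state >= threshold:
            spike_times.append(t)         # emit a discrete event downstream
            state = 0.0                   # reset after firing
    return spike_times

# A mostly quiet input stream produces only occasional output events,
# which is where the sparse-in-time energy savings come from.
inputs = [0.0, 0.2, 0.0, 0.9, 0.3, 0.0, 0.0, 0.8, 0.5, 0.0]
print(lif_neuron(inputs))                 # -> [3, 7]
```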

[00:24:22] Camille Morhardt: I can see why that could be useful in robotics.

[00:24:25] Yulia Sandamirskaya: I believe that for robotic tasks, it’s a really nice match between the hardware and the tasks, because, yeah, we work with neural-network-based algorithms, but we have a huge algorithmic space of types of neural networks that we can efficiently run on this hardware. These are not only feed-forward networks that are good for image processing, or convolutional neural networks. It can be all kinds of topologies. We can have some topology that generates oscillations that could control the robot. We can have recurrent networks that also implement controllers. We can have graph-based search algorithms implemented in this hardware efficiently, and optimization algorithms implemented efficiently.

For a robotic task, we need all these different types of algorithms. It’s not just one feed-forward neural network that can solve all these different tasks, and they can run in real time and energy-efficiently, which is important for robots. So I think it’s a really nice match between neuromorphic computing and robotic tasks, which is no wonder, because neuromorphic computing mimics the way biological neural systems process information pretty faithfully, better than other computing systems we have so far. And those biological neural systems evolved to control movement in real-world environments. So it’s at least one solution that works nicely. There might be others, but we know that this one works.

[00:25:51] Camille Morhardt: Well, Yulia Sandamirskaya, thank you so much for joining us all the way from Zurich, an algorithm researcher within the Neuromorphic Computing Lab at Intel, which is located in Munich. Thank you very much.

[00:26:06] Yulia Sandamirskaya: Thank you so much, Camille. It was fun.
