InTechnology Podcast

#110 – What That Means with Camille: Autonomous AI Systems

In this episode of Cyber Security Inside What That Means, Camille chats with Mykel Kochenderfer, Professor of Aeronautics and Astronautics and Computer Science at Stanford University and director of the Stanford Intelligent Systems Laboratory, part of the Institute for Human-Centered AI, about autonomous systems. The conversation covers:

  • What autonomous systems are, and some examples of them.
  • What it is that goes into making an autonomous system.
  • Why it is so difficult to develop an autonomous system and the factors we have to take into account when doing so.
  • What will help with the safe deployment of autonomous systems and why they are important.

And more. Don’t miss it!


To find more episodes of Cyber Security Inside, visit our homepage at https://intechnology.intel.com. To read more about cybersecurity topics, visit our blog at https://intechnology.intel.com/blog/

The views and opinions expressed are those of the guests and author and do not necessarily reflect the official policy or position of Intel Corporation.


Learn more about Intel Cybersecurity:

https://www.intel.com/content/www/us/en/security/overview.html 

Intel Compute Life Cycle (CLA):

https://www.intel.com/content/www/us/en/security/compute-lifecycle-assurance.html

Here are some key takeaways:

  • What is an autonomous system? It is a system that takes in inputs from the world, makes a decision, and communicates an instruction, recommendation, or action.
  • There are several important pieces to building an autonomous system. For a car, the input systems are going to be different than what is required for another system. You might need LIDAR, cameras, radar, and more. What needs to be inferred and taken into account for a car might also be different from another system.
  • The decisions and actions an autonomous system takes affect the scenario it is in, which then has to be sensed and perceived again, creating a control loop. How frequently the system takes in new data depends entirely on the application (a minimal code sketch of this loop appears after this list).
  • What makes building these systems difficult is uncertainty. When designing a collision-avoidance system for a car, the system must account for the fact that another car may not be autonomous and may behave unpredictably, which makes the decision-making process much harder.
  • Autonomous systems that do not fully penetrate their environment are among the hardest to design. There will be human drivers on the road for a very long time, so autonomous cars have to be designed with that in mind. If every car were autonomous, the system would be much easier to design and roll out.
  • The parameters for what to take into account when designing an autonomous system are decided by the engineers and the other stakeholders for the given situation: often the people who will be using or selling the system, as well as those who will be regulating it.
  • Who needs to be involved in the discussion? The regulators are very important, because they need to build up an intuition about how you are going to validate the system. The end users are also important, as you want to make sure that you are aligning your system with what they are expecting. For example, with aircraft, you would want to talk with both the pilots and the passengers for certain systems.
  • A consideration that may not be obvious is the alert system. If alerts go off constantly, the user stops paying attention to them, so the balance of what triggers an alert is critical.
  • What the user or operator finds acceptable and comfortable may change over time. For example, in an autonomous car, what counts as a normal deceleration rate might change over time. In aviation, the airspace and the types of aircraft in it will change over time. So the systems also need to be adaptable.
  • Why has the rollout of things like autonomous driving been so slow? It is simply hard to design a robust system that behaves as expected when it encounters low-probability events, especially when deployment is broad and spans a long period of time.
  • Human experts are very important for designing and testing these systems. They should also help specify the objective of the system and identify the trade-offs.
  • A less familiar area where autonomous systems are being applied is wildfire fighting. There is a lot of uncertainty in firefighting; drones can help monitor the fire, and a decision system can help allocate resources to fight it.
  • Another interesting application is an autonomous cane that helps steer someone who is blind. It includes a LIDAR sensor and a camera to help the user move around obstacles and get to a destination.
  • Mykel notes two major things that will help with the safe deployment of autonomous systems. The first is trustworthy modeling and simulation, since you simply can’t test everything in the real world. The second is taking baby steps in development and deployment, which builds up data and an understanding of failure modes that make the systems better.
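
The sense-decide-act loop described in the takeaways (and in the conversation below) can be summarized in a short sketch. This is an illustrative example only, not code from the episode or from Mykel's lab; the sensing, perception, decision, and actuation callables are hypothetical placeholders, and the loop rate is a parameter in the spirit of the decision frequencies mentioned in the interview (about 1 Hz for aircraft collision avoidance, much faster for cars or rocket landing).

```python
import time

def control_loop(sense, perceive, decide, actuate, rate_hz=1.0):
    """Minimal sense-decide-act loop (illustrative sketch only).

    sense:    returns raw measurements (e.g. LIDAR, camera, radar readings)
    perceive: turns raw measurements into an estimate of the current situation
    decide:   maps that estimate to an action or a recommendation
    actuate:  applies the action (brakes, ailerons, or an alert to an operator)
    rate_hz:  decision frequency -- roughly 1 Hz for aircraft collision
              avoidance, far higher for an autonomous car or a rocket landing
    """
    period = 1.0 / rate_hz
    while True:
        observations = sense()              # take in inputs from the world
        situation = perceive(observations)  # infer what is going on
        action = decide(situation)          # choose what to do under uncertainty
        actuate(action)                     # acting changes the world,
        time.sleep(period)                  # which the next pass senses again
```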


Some interesting quotes from today’s episode:

“An autonomous system is just a system that takes inputs from the real world as perceived through some sensor systems and it makes decisions and it tells the actuation system what to do. So it takes as input observations of the world and outputs actions or decisions.” – Mykel Kochenderfer

“Inside the decision-making system, it needs to handle all of the different kinds of scenarios that it might encounter. Like maybe a pedestrian walking into the road, or another aircraft crossing into your flight path. And then those decisions then get translated into some kind of actuation.” – Mykel Kochenderfer

“The decisions that you make are going to affect the world. And then that’s going to affect what you’re going to be perceiving at the next time step. And so this is known as the control loop.” – Mykel Kochenderfer

“On the application for aircraft collision avoidance systems, this is typically done at 1 Hertz or one decision per second. In other situations like for autonomous cars, it may be 10 milliseconds that it needs to make a new decision. If you’re trying to land a rocket, like a SpaceX rocket, that may have to be even faster.” – Mykel Kochenderfer

“You can’t really account for absolutely anything that can happen. So, for an example, we wouldn’t be able to drive on a two-lane road because it’s possible that the oncoming car might just swerve into us at the last moment. And there is absolutely nothing we can do about it. So we have to decide what’s within scope for our particular system and what’s outside of scope.” – Mykel Kochenderfer


[00:00:36] Camille Morhardt: Hi, and welcome to today’s episode of What That Means with Camille. We are going to talk about safety of autonomous systems. So this is all about artificial intelligence and how do we keep autonomous driving and aviation safe. 

I have with me today Mykel Kochenderfer, who is professor at Stanford University. He’s professor of aeronautics and astronautics and computer science, all three. And he’s director of the Stanford Intelligent Systems Laboratory, which is part of their Institute for Human-Centered AI, or artificial intelligence. He is also co-director of Stanford’s Center for AI Safety. So welcome to the show, Mykel.

[00:01:21] Mykel Kochenderfer: Thanks for the invitation.

[00:01:23] Camille Morhardt: Yeah, it’s really exciting to have you here. I’m interested… Well, I want to start by, can you just give everybody kind of a quick definition of what is an autonomous system?

[00:01:34] Mykel Kochenderfer: Yeah. An autonomous system is just a system that takes inputs from the real world as perceived through some sensor systems and it makes decisions and it tells the actuation system what to do. So it takes as input observations of the world and outputs actions or decisions.

[00:01:57] Camille Morhardt: Or recommendations?

[00:01:58] Mykel Kochenderfer: Or recommendations. Right. There’s a whole spectrum of autonomous systems–some that have to be fully autonomous. So many cyber security agents will have to be fully autonomous because they have to make decisions faster than a human can. On the other hand, the AI may be used as a decision support system that provides recommendations to a human to actually execute.

[00:02:27] Camille Morhardt: So tell us how you go about building one of these autonomous systems.

[00:02:32] Mykel Kochenderfer: Right. So to build an autonomous system, you need to choose what kinds of sensor systems to use. So for an autonomous car, you may need to make decisions about do you use LIDAR and how do you configure your LIDAR sensors. Do you use camera sensor systems? Do you use radar? You need to understand what your sensing modalities are, as well as their error characteristics. Then you need to develop a perceptual system that will process those sensory inputs to arrive at a good understanding of what’s going on in the world.

So you need to be able to infer where might there be other vehicles, where might there be pedestrians and so forth. Then inside the decision-making system, it needs to handle all of the different kinds of scenarios that it might encounter, like maybe a pedestrian walking into the road or another aircraft crossing into your flight path. And then those decisions then get translated into some kind of actuation if it’s a physical system. So it may be control signals that go to the ailerons of an aircraft. Or for a car, it may be to speed up or apply the brakes. So those are the major components.

[00:03:57] Camille Morhardt: Okay. So basically you’re going to sense what’s happening. You’re going to perceive based on the sensors the same way essentially humans take in data.

[00:04:06] Mykel Kochenderfer: Yep.

[00:04:07] Camille Morhardt: I see something. Now, what do I think that means based on what I’m seeing? And then I’m going to take some action even if it’s just a recommendation to a pilot or a driver. Or I’m going to actually just apply the brakes immediately because I can apply them faster than a human can and otherwise we’re going to hit somebody.

[00:04:23] Mykel Kochenderfer: That’s right. So the decisions that you make are going to affect the world. And then that’s going to affect what you’re going to be perceiving at the next time step. And so this is known as the control loop.

[00:04:35] Camille Morhardt: Oh, that’s really interesting. So how frequently… You’re going to tell me it depends. How frequently are you taking in new sensor, new signals? Is that like a constant thing?

[00:04:49] Mykel Kochenderfer: Yeah, so it depends, of course, on the application. For aircraft collision avoidance systems, this is typically done at 1 Hertz, or one decision per second. In other situations, like for autonomous cars, it may be every 10 milliseconds that it needs to make a new decision. If you’re trying to land a rocket, like a SpaceX rocket, that may have to be even faster.

[00:05:18] Camille Morhardt: Oh, okay. That’s interesting. So stay high level for a minute here, too. What makes building these systems difficult?

[00:05:28] Mykel Kochenderfer: Yeah. So it’s uncertainty. Making decisions under uncertainty is extremely difficult. I’ll give you some examples. One application that we’ve been working on is wildfire fighting. That requires an understanding of the current state of the fire, right? Typically, fire chiefs only have an imperfect understanding. They can gain more understanding by using more sensors, doing overflights with helicopters or drones or whatever. In autonomous driving, you may have uncertainty about where there might be pedestrians. There could be noise in the LIDAR sensors, or there could be occlusions, like another vehicle in front of us blocking our view of a pedestrian. We also have uncertainty in how the environment will evolve, right? We don’t know exactly whether the pilot will continue straight or turn left or turn right. We don’t know if the fire is going to propagate to the east at a particular rate. But we might just have a probability distribution over what might happen. And it’s important to take into account the full spectrum of possibilities here to produce robust decisions.
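
One common way to take that full spectrum of possibilities into account is to score each candidate action against many sampled outcomes and choose the one with the lowest expected cost. The sketch below is a generic illustration of that idea, not the specific algorithms used in Mykel's lab; the transition model and cost function are hypothetical placeholders supplied by the caller.

```python
def choose_robust_action(state, actions, sample_next_state, cost, n_samples=1000):
    """Pick the action with the lowest expected cost over sampled outcomes.

    sample_next_state(state, action) draws one possible next state from a
    probabilistic model (e.g. the other driver might swerve, brake, or continue).
    cost(next_state) scores how bad that outcome is (collision, harsh braking, ...).
    """
    best_action, best_expected_cost = None, float("inf")
    for action in actions:
        total = 0.0
        for _ in range(n_samples):
            total += cost(sample_next_state(state, action))
        expected_cost = total / n_samples
        if expected_cost < best_expected_cost:
            best_action, best_expected_cost = action, expected_cost
    return best_action
```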

[00:06:49] Camille Morhardt: Does it change if you know that both systems are using the same autonomous… So I’m trying to ask a very simple question here. If your car that has an autonomous system knows that the car that’s coming at it and is now going to have a head on is not using a system, is that a factor in how it acts or does that just escalate the risk and uncertainty but it still behaves the same way?

[00:07:17] Mykel Kochenderfer: Right. So generally, you can do better if you know the behavior of the other agents. So if both vehicles are equipped with the same system, you can make better predictions about what might happen. Whereas if you have an autonomous vehicle encountering another human driven vehicle, the human might be drunk or distracted or whatever and so there might be a lot more uncertainty about where the vehicle will be over the next few seconds. They might suddenly swerve or whatever. Whereas if you knew that the other system followed the same rules that you are, you can potentially make much better decisions.

[00:07:59] Camille Morhardt: I mean, that seems logical to me. If you had told me the opposite, I would’ve been surprised.

[00:08:04] Mykel Kochenderfer: Yeah.

[00:08:05] Camille Morhardt: So now I’m just wondering what is that implication then for the rollout of these kinds of systems?

[00:08:11] Mykel Kochenderfer: It really depends. In some kinds of applications, you just won’t have perfect 100% penetration with your particular technology. So for autonomous driving, I think we just have to build our cars to be robust to other humans for the foreseeable future. There will be human drivers on the roads for a very, very long time.

[00:08:38] Camille Morhardt: How are you setting your parameters for corner cases? Like, “Okay, I really don’t think the car is going to be struck by lightning during this season where there’s no lightning in this geography or whatever, but we’re going to worry about it anyway just in case, the one in 15 billion chance or something.” Or the way that people behave on the road or in the air. And why wouldn’t you account for absolutely everything? Is that just like performance of the system?

[00:09:06] Mykel Kochenderfer: Yeah. You can’t really account for absolutely anything that can happen. So for an example, we wouldn’t be able to drive on a two-lane road because it’s possible that the oncoming car might just swerve into us at the last moment. And there is absolutely nothing that we can do about it. And so we have to decide what’s within scope for our particular system and what’s outside of the scope.

[00:09:36] Camille Morhardt: And how do you decide that?

[00:09:39] Mykel Kochenderfer: The way we decide this is it requires discussions between the engineers and the various stakeholders. So the folks who will be using or selling the system as well as the regulators.

[00:09:56] Camille Morhardt:  So, yeah, and that just kind of brings to mind another question, which is, who really needs to be involved when you’re designing these kinds of autonomous systems or systems that are providing recommendations?

[00:10:08] Mykel Kochenderfer: I think having as broad an array of stakeholders together in the same room as possible, that’s what you would want. You want to engage the regulators as early as possible so that they can build up an intuition about how you’re going about validating the system. You want to engage the end user as much as possible to ensure that there’s an alignment between what they’re expecting from the system and what the engineers have designed the system to optimize for. For an aircraft system, you’d want to engage actually both the pilots and maybe also the passengers, depending on what the system is. So, for an example, you’d want to understand what is the comfort level for the passengers. You don’t want to create a system that pulls a half G or a full G on the passengers. So getting that balance right is very, very important and ties into some pretty key engineering trade offs. By engaging with passengers, you would get a sense of what kinds of maneuvers are appropriate and within scope for your system. That’s just one limited example.

[00:11:33] Camille Morhardt: Right. Okay. So maybe, and I’m just making these up, but the system is concerned with preserving fuel where possible. I mean, maybe the primary consideration is safety, but then all things okay with on that front, it’s going to conserve fuel. But then that might mean like kind of a sharp nosedive. So then you have the passenger saying, “No, please, let’s use a little bit more fuel so that I can be comfortable.”

[00:12:01] Mykel Kochenderfer: Yeah. And also the alert rate is also pretty key. If you’re building a recommendation system or a hazard alerting system and it’s alerting all the time, then the operator will not pay attention to it. Also another example of engaging the end user on what’s acceptable, this comes up quite a bit in autonomous driving, right? You want to understand what’s a comfortable level of deceleration. Maybe that will depend upon whether it’s just like a normal maneuver or whether it’s safety critical. You may have different thresholds as to what deceleration rate is acceptable for the passengers of the vehicle.

What operators and end users find acceptable may change over time, and so we want to be able to build systems that we can adjust. So for our work with aircraft collision avoidance systems, we know that the airspace will continue to evolve over time. The mixture of different types of aircraft and air traffic control procedures, that’s going to change over time, and we want to be able to update and maintain the system to ensure that it’s still safe and operationally acceptable. The same goes for autonomous driving and many other domains that rely upon autonomy.
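
The alert-rate balance described above is often handled with a trigger threshold plus hysteresis: the alert fires only when an estimated risk crosses an upper level, and it clears only once the risk falls back below a lower level, so it does not chatter on and off and train the operator to ignore it. The sketch below is a generic illustration with made-up threshold values, not a real avionics or automotive alerting specification.

```python
class HazardAlert:
    """Threshold-with-hysteresis alerting (illustrative values only).

    The alert turns on when the risk estimate exceeds `on_threshold` and only
    turns off again once the risk drops below the lower `off_threshold`,
    which cuts down on nuisance alerts the operator would learn to ignore.
    """

    def __init__(self, on_threshold=0.8, off_threshold=0.5):
        self.on_threshold = on_threshold
        self.off_threshold = off_threshold
        self.active = False

    def update(self, risk):
        """Feed in the latest risk estimate; returns True while the alert is active."""
        if not self.active and risk >= self.on_threshold:
            self.active = True
        elif self.active and risk <= self.off_threshold:
            self.active = False
        return self.active
```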

[00:13:32] Camille Morhardt:  How come it’s so slow? I mean, I think we all have heard predictions of, especially autonomous driving would’ve already happened by now. It’s like we’re all sort of waiting. We think it’s here right around the corner.

[00:13:47] Mykel Kochenderfer: Great question. So there are a number of different reasons. A lot of people underestimated the difficulty of both building a very robust system, as well as validating that it is robust and it will behave as expected when deployed in the real world. And so the reason for that is it’s just very difficult to anticipate all of the different edge cases that you’re going to experience in the world. So some of the early crashes involving Tesla autopilot and other systems, they encountered situations that would’ve been very, very difficult for a human designer to anticipate. Sometimes it’s referred to as a very long tail. If you think about the distribution over possible situations, there are a lot of low probability events that you’re still going to encounter if you have a broad deployment over an extended period of time.

[00:14:51] Camille Morhardt: Can you talk about incorporating, I’ll say, subject matter expert, a human, early on in the process of training AI or AI self-learning. I’ve been hearing lately that there’s a lot of benefit to incorporating human knowledge as opposed to just providing data and letting the model run. Can you talk about how you use that or incorporate that?

[00:15:13] Mykel Kochenderfer: Yeah. Humans are extremely important in a number of different aspects, but two that come immediately to mind, sanity checking our models, right? So for the development of this aircraft collision avoidance system, we needed to build a model of the airspace that captured the trajectories of aircraft as they come within close proximity to each other. To validate that, very early on we generated many, many synthetic encounters from our model and then compared it to real data and tried to have a human expert guess which one was synthetic and which was real. That was a major milestone when we were able to convince human experts that the model of the environment was at least in the right ballpark. We used a whole bunch of other quantitative metrics for assessing how representative the model is of data. Humans are also very important in specifying the objective of the system, right? So what is it that we want to achieve? What are the appropriate trade offs?
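
The blind comparison Mykel describes, asking experts to tell synthetic encounters from real ones, pairs naturally with simple quantitative checks on whether the model's outputs match recorded data. Below is a minimal sketch of that kind of check, assuming you already have arrays of some encounter statistic (say, miss distance) from the model and from real recordings; the function and variable names are hypothetical.

```python
import numpy as np

def compare_model_to_data(synthetic, real):
    """Compare summary statistics of synthetic vs. recorded encounter data.

    `synthetic` and `real` are 1-D arrays of the same quantity (for example,
    horizontal miss distance per encounter). Large gaps between the summary
    statistics suggest the airspace model is not yet representative.
    """
    synthetic, real = np.asarray(synthetic, float), np.asarray(real, float)
    return {
        "mean_gap": abs(synthetic.mean() - real.mean()),
        "std_gap": abs(synthetic.std() - real.std()),
        "p95_gap": abs(np.percentile(synthetic, 95) - np.percentile(real, 95)),
    }
```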

[00:17:00] Camille Morhardt: Right.

[00:17:01] Mykel Kochenderfer: So sometimes it feels a little bit strange to talk about a trade off between safety and operational performance, but you have to make that trade off in order to have a system that actually works and is acceptable when deployed in the real world, right? You wouldn’t want to build an autonomous car that went to a full stop as soon as it encountered another car, right? So getting that balance right is something that a panel of humans can help inform.

[00:17:07] Camille Morhardt: Okay. Or it refuses to leave the driveway.

[00:17:14] Mykel Kochenderfer: Yep.

[00:17:11] Camille Morhardt: It’s like, “Nope, you prioritize safety, so I’m not going to go at all.”

[00:17:14] Mykel Kochenderfer: Yeah. We also have to really understand that for these safety critical systems, the goal is not to drive the probability of failure to zero, right? If someone tells you that they did that, then they’re lying to you or their models don’t really capture the full spectrum of what might actually happen. The reason for that just goes back to the fact that sensors are imperfect. With some probability, those sensors will fail. And also when you have other agents in the environment with you, it’s often impossible to perfectly predict what they will be doing.

[00:17:58] Camille Morhardt: Tell me about some of the spectrum of research that your lab is looking at.

[00:18:04] Mykel Kochenderfer: It’s pretty broad. It turns out that decision making under uncertainty, which is what our lab does, it connects with many, many different applications. Of course, we sit in the aerospace department. And so we have looked at both aviation applications involving air traffic control and air-to-air collision avoidance and drones. We’ve looked at space technologies on how do you produce robust plans for satellite sensing. We work on autonomous cars. We’ve had great collaborations with Toyota, Ford, Bosch, Honda, and many others over the years. We’re also interested in wildfire fighting. That’s another area that I mentioned has a lot of uncertainty. For an example, we looked at how would you intelligently use drones to monitor the evolution of a wildfire? And then on top of that, how do you appropriately allocate resources to fight that fire?

We’ve also applied our methods to scientific discovery. Right next to Stanford is SLAC, the linear accelerator center. One of my PhD students has been collaborating with them on using our techniques to control an x-ray machine for examining a specimen. You have control over how you move the x-ray beam, what aperture to use and so forth over time. You want to make these decisions to maximize scientific value. We’ve also developed an autonomous cane, a cane that has a LIDAR sensor and a camera on board that can help steer someone who is blind around obstacles to get them efficiently and safely to their destination.

[00:20:14] Camille Morhardt: The one other thing that you had mentioned is you’re looking at carbon sequestration. Tell me about that.

[00:20:21] Mykel Kochenderfer: There’s a tremendous interest, of course, in sustainability and climate change. If we want to have a net zero emission of carbon, carbon sequestration has to be part of the equation. And so we’ve been working with others at Stanford with expertise in the earth sciences on how do you safely sequester carbon. So safely sequestering carbon requires kind of making inferences about what’s happening in the subsurface. You want to sequester the carbon in a way that it will stay there for a hundred or more years. If the carbon comes up, then it goes back into the atmosphere. But also since it’s carbon dioxide, it can lead to suffocation. So this is something that we need to be able to do extremely reliably.

[00:21:29] Camille Morhardt:  What’s kind of one of the biggest arguments that’s out there right now in among people who are designing autonomous systems? What are they disagreeing about?

[00:21:38] Mykel Kochenderfer: I think there are disagreements along every part of the chain, starting with the sensor systems. So what sensors should be used on autonomous vehicles? And of course, there are lots of engineering discussions about the trade offs between cost and error characteristics and so forth. There’s a lot of discussion and disagreement about how much of a role neural networks should play in safety critical systems. For some kinds of processing, like image processing and natural language processing, it has to be neural networks. There are no other known technologies that can do what neural networks can for things like object recognition or speech recognition.

But there’s also a temptation to use neural networks for making control decisions. So after the image processing and so forth, when you take that situational awareness and turn it into a decision, should that process use neural networks? We as humans have our biological neural networks, and we’ve built up confidence that our biological neural networks are adequate for us to fly aircraft and drive, at least to some extent, but maybe more interpretable methods would be better for the decision-making systems.

[00:23:12] Camille Morhardt: So the downside of the neural network isn’t the decision-making quality of it. It’s that it’s not as explainable or interpretable by people?

[00:23:19] Mykel Kochenderfer: Yeah. Explainability and interpretability are a major challenge when using representations like neural networks, though a lot of research labs are very interested in figuring out how to make neural networks a bit more interpretable. Sometimes what they do is they produce what’s called a surrogate model. So they model the decisions that the neural network makes, but in a representation that might be a little bit easier for humans to understand, like a decision tree or something like that.

[00:23:54] Camille Morhardt: Right.

[00:23:55] Mykel Kochenderfer: And so they can look at the decision tree to get kind of a rough idea as to what the neural network is doing. It’s not a perfect representation. That’s why it’s a surrogate model.

[00:24:05] Camille Morhardt: Right.

[00:00:00] Mykel Kochenderfer: But at least it can give enough of an intuition that we may be able to have a warm feeling in our hearts that the system we deploy will behave sensibly.
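
A surrogate model of the kind described here can be built by querying the trained network on many inputs and fitting an interpretable model, such as a small decision tree, to its answers. The sketch below uses scikit-learn as one illustrative way to do that; it is not the workflow of any particular lab or product, and `policy_network` is a hypothetical stand-in for the black-box decision maker.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

def fit_surrogate(policy_network, sample_inputs, max_depth=4):
    """Fit a small decision tree that mimics a black-box policy.

    policy_network(x) returns the network's discrete decision for input x
    (e.g. 0 = maintain, 1 = brake, 2 = swerve). The tree is only an
    approximation -- which is why it is called a surrogate -- but its
    if/then rules can be printed and inspected by humans.
    """
    X = np.asarray(sample_inputs, dtype=float)
    y = np.array([policy_network(x) for x in X])
    tree = DecisionTreeClassifier(max_depth=max_depth).fit(X, y)
    print(export_text(tree))  # human-readable rule listing
    return tree
```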

[00:24:06] Camille Morhardt:  Is there anything that I haven’t asked you that I should ask you that people should know about autonomous systems?

[00:24:22] Mykel Kochenderfer: Two major things will contribute to the safe deployment of these autonomous systems. One is modeling and simulation. That’s going to be key. You can’t flight test, you can’t drive test everything. Flight tests and drive tests are useful for validating the implementation and collecting data on things like sensor error characteristics and so forth. But you don’t want to be testing your safety critical system in the real world and have that be part of your design process. As much as possible, you want to do the design of these safety critical systems in simulation. And in order to do that well, your models have to be trustworthy and you need to validate those models.
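
In practice, trustworthy modeling and simulation often comes down to running very large numbers of randomized scenarios and estimating how often the system fails. Below is a minimal Monte Carlo sketch of that idea; `simulate_encounter` is a hypothetical stand-in for a real high-fidelity simulator, and the estimate is only as good as the validated models inside it.

```python
import random

def estimate_failure_rate(simulate_encounter, n_trials=1_000_000, seed=0):
    """Estimate the probability of failure by repeated simulation.

    simulate_encounter(rng) runs one randomized scenario (sensor noise,
    other agents' behavior, weather, ...) and returns True if the system
    failed in that scenario.
    """
    rng = random.Random(seed)
    failures = sum(simulate_encounter(rng) for _ in range(n_trials))
    return failures / n_trials
```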

The second point that I want to make is we always want to gradually deploy our systems and develop our confidence in the system as it goes along, right? So if we’re a delivery drone company, we’ll want to do small, restricted tests in less populated areas. We do that for a long period of time and build up an understanding of the failure modes before deploying in San Francisco or New York City. Waymo has taken a similar kind of approach before deploying in Boston, where it’s snowing. They have collected a huge amount of data in California and Arizona where the weather is more predictable. You still have a lot of complexity in California and Arizona with human drivers with varying levels of expertise and competence, but you always want to take baby steps when developing these systems.

[00:26:18] Camille Morhardt: Right. Introduce ice next week.

[00:26:20] Mykel Kochenderfer: Yeah. You want to do that very, very gradually.

[00:26:24] Camille Morhardt: Okay. Well, Mykel Kochenderfer, professor at Stanford University and a head of the Institute for Human Centered Artificial Intelligence, thank you so much for speaking with me today. I appreciate it.

[00:00:00] Mykel Kochenderfer: Thanks so much, Camille.
