InTechnology Podcast

What That Means with Camille: Security Trends in AI and Confidential Computing (122)

In this episode of What That Means, Camille gets into the latest trends in security with Ron Perez, Fellow and Chief Security Architect at Intel. The conversation covers how AI is being used for security, how security is being developed for AI, how resiliency and confidential computing go together, and more.

To find the transcription of this podcast, scroll to the bottom of the page.

To find more episodes of Cyber Security Inside, visit our homepage. To read more about cyber security topics, visit our blog.

The views and opinions expressed are those of the guests and author and do not necessarily reflect the official policy or position of Intel Corporation.

Follow our hosts Tom Garrison @tommgarrison and Camille @morhardt.

Learn more about Intel Cybersecurity and the Intel Compute Life Cycle (CLA).

How AI Is Being Used for Cyber Security–and How Security Is Being Developed for AI

Artificial intelligence is nothing new, but how we approach AI is always changing, particularly when it comes to security. Today, there are two aspects to consider when it comes to AI and security: how we use AI to enhance security and how we improve security for AI. Constant software and hardware updates require constant improvements to AI security.

How to Improve Resiliency from Cyber Attacks, Especially with Confidential Computing

How can organizations bounce back from a cyber attack? And how do they prevent attacks against both outsider and insider threats in the first place?

Ron and Camille discuss how resiliency can take on different forms, how organizations must expect cyber attacks to occur, and how to assess the ability to recover before an attack. Confidential computing comes in here to prevent the wrong people from gaining access to sensitive data, particularly in the age of edge computing.

Ron Perez, Intel Fellow and Chief Security Architect

With over twenty years of experience in the tech world, ranging from cloud computing to semiconductors, Ron Perez brings a wealth of knowledge to the ongoing discussions about security and technology. He holds more than thirty U.S. patents and has published widely on the latest and most pressing topics in security. Today, he is a global leader in security as an Intel Fellow and Chief Security Architect.


[00:00:36] Camille Morhardt: Hi, and welcome to Cyber Security Inside.   Today we’re gonna do a What That Means podcast on Trends in Security.  And we’re going to have the conversation with Ron Perez who’s a fellow at Intel and also Intel’s Chief Security Architect. Enjoy the conversation.

So AI, everybody’s talking about it. So here’s one question about it. I know that at the same time, AI is being used to enhance security. I’m going to ask you how. I know it can crawl through hardware and software and look for vulnerabilities, but I hope you can say more about it. It’s also being used to look for vulnerabilities from the other side, right, to get in. Can you tell us what it’s predominantly doing now, both good and bad, specifically around cyber security? Where do you think it will head?

[00:01:25] Ron Perez: Yeah. So there are two aspects to that, and you touched on these, right? There’s AI for security. How can we use AI to enhance, to find vulnerabilities or to determine malicious behavior in some application? Then there’s security for AI.

[00:01:41] Camille Morhardt: Yeah, yeah.

[00:01:43] Ron Perez: That one is more focused on how I can trust that the AI, the machine learning, is doing what it should be doing. Has the model been poisoned by somebody? Which is very easy to do, and which will give you the wrong results. Can the AI be explained? Is it deterministic, do we get the same results every time? So there’s that aspect too. How do we protect the model to make sure that it is the correct model? Here’s where confidential computing technologies can come in again to provide those levels of protection, and to protect the inference capability itself or the training capability.

Going back to the first part of your question, there’s the AI for security. We see a lot of use cases there. You talked about the threat detection technology on a previous podcast with Ram Chary, where they’re using AI to determine whether there are malicious workloads running by querying the various performance counters inside Intel’s platforms and characterizing what a bad workload looks like, what a row hammer attack looks like from a performance counter perspective, so that they can determine if malicious software is running on that platform.
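
For readers who want to see the shape of that idea in code, here is a minimal, illustrative sketch: sample hardware performance counters for a process with Linux `perf stat` and flag an unusually high cache-miss rate. The event names, threshold, and decision rule are placeholders chosen for illustration; this is not Intel’s Threat Detection Technology.

```python
# Illustrative sketch: flag a process whose cache-miss rate looks anomalous.
# Events, threshold, and the decision rule are placeholders, not Intel TDT.
import re
import subprocess
import sys

EVENTS = "cache-misses,LLC-load-misses"   # hypothetical counter selection
MISS_RATE_THRESHOLD = 5_000_000           # misses/sec; tune against a real baseline

def sample_counters(pid: int, seconds: int = 1) -> dict[str, int]:
    """Sample hardware counters for `pid` using `perf stat` (Linux, needs permissions)."""
    cmd = ["perf", "stat", "-e", EVENTS, "-p", str(pid), "--", "sleep", str(seconds)]
    out = subprocess.run(cmd, capture_output=True, text=True).stderr  # perf writes to stderr
    counts = {}
    for line in out.splitlines():
        m = re.match(r"\s*([\d,]+)\s+(\S+)", line)
        if m:
            counts[m.group(2)] = int(m.group(1).replace(",", ""))
    return counts

def looks_suspicious(counts: dict[str, int], seconds: int = 1) -> bool:
    """Naive rule: an unusually high last-level-cache miss rate warrants a closer look."""
    rate = counts.get("LLC-load-misses", 0) / seconds
    return rate > MISS_RATE_THRESHOLD

if __name__ == "__main__":
    pid = int(sys.argv[1])
    counts = sample_counters(pid)
    print(counts)
    print("suspicious" if looks_suspicious(counts) else "looks normal")
```

In practice a real detector would learn a statistical profile of benign workloads rather than rely on a single hand-set threshold.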

[00:02:53] Camille Morhardt: Yeah, I used to think of all the dynamic ways of looking for threats as kind of software-based, maybe anti-virus software or something, where you’re constantly updating with what you’re learning all the time. Now that’s actually true for hardware too. We have to constantly be looking at new kinds of threats. Every time you find a way to discover something that might be doing something, I’ll just say bad, close on its heels are ways to disguise what’s being done so that you can’t find it. So it’s kind of this constant updating process.

[00:03:28] Ron Perez: Absolutely right. Because, as we talked about, at the global scale of computing a lot hinges on efficiency, you’re seeing a lot more of the hardware being instrumented with telemetry so that we can determine how to configure the system to be the most efficient. So this telemetry is fantastic from one standpoint, in that it gives us a lot more data that will help us determine whether the system is being used inappropriately.

On the other side of it all, it potentially creates a lot of side channels that could leak information about what the system is doing. Maybe that’s good if you’re looking for malware, for example. Maybe that’s not good if you’re doing some sensitive computation where you may leak the contents of a cryptographic key, for example. So we’re constantly balancing the two halves of these technologies, technology used for good and sometimes used for bad, too.

[00:04:20] Camille Morhardt: What is your impression of the word “resiliency” when people are using that in a compute space? I’ve heard this come up more and more recently, where it sort of used to be … To me, it was like protection, detection and then resolution. Now, the word is resiliency. It’s like everybody says, “Look, you will be attacked. You will be breached. It will happen to some degree. It could even be as low as one employee clicking a bad link, and then it’s done. But something’s going to happen.” So how do you become resilient? What does that mean and how should people think about that when they start to map out resiliency?

[00:04:58] Ron Perez: Yeah, I tend to think of it more as the ability to recover, but it’s all the capabilities you need to recover quickly. It may be, do you have a golden copy of your firmware or of your operating system or whatever software you care about, do you have that some place that you can reload quickly if the copy that’s running gets corrupted or something? How do you detect corruption? Do you have that capability so that you can, again, recover quickly from that corruption? That’s what, to me, resiliency is focused on.

Now, doing that at scale on a global basis, that’s the challenge. I mean, it’s a challenge enough on one system, but having the ability to do all those things, being able to stage the software and keep copies that you can easily bring back into the system and reload, being able to actually just reboot the system in a timely manner is part of resiliency too, being able to ride through different attacks or outages.
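
As a toy illustration of the “golden copy” pattern Ron describes above, the sketch below hashes a running artifact, compares it against a staged known-good copy, and restores it if they diverge. The file paths are hypothetical placeholders; real firmware recovery happens in hardware and boot ROM, not in a script like this.

```python
# Toy illustration of "golden copy" resiliency: detect corruption of a running
# artifact by hash comparison and restore it from a known-good staged copy.
# Paths are placeholders for illustration only.
import hashlib
import shutil
from pathlib import Path

GOLDEN = Path("/backup/firmware.img")     # staged, known-good copy
RUNNING = Path("/active/firmware.img")    # copy currently in use

def digest(path: Path) -> str:
    """Measurement of a file: its SHA-256 digest."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def check_and_recover() -> bool:
    """Return True if the running copy had to be restored from the golden copy."""
    if digest(RUNNING) == digest(GOLDEN):
        return False                      # running copy still matches the golden copy
    shutil.copy2(GOLDEN, RUNNING)         # corruption detected: reload the golden copy
    return True

if __name__ == "__main__":
    restored = check_and_recover()
    print("restored from golden copy" if restored else "running copy verified")
```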

[00:05:56] Camille Morhardt: That’s another question: is resiliency always about coming back to your 100% functionality level, or is a lot of it now about staging that out and saying, “What’s my minimum viable survival level and how long can I do that?”, and then getting to the next level, whether it’s you have to purchase entirely new systems or something. How do you go about mapping that?

[00:06:20] Ron Perez: That’s an interesting discussion. I’m not sure I know the answer, but I know that … In my earlier days, I spent some time working on fault-tolerant systems. That’s a whole different area. In the fault-tolerance world, they try to ride through errors by basically having redundant systems everywhere in the system where it’s possible; for any failure, you want some redundancy. That was great, but I think they also realized after a while that in some cases it’s better to fail fast and just restart than it is to constantly try to ride through, because the more you try to ride through different failures, you kind of build up this history of, not side channels so much as side effects, I guess, that linger and maybe surface as issues later on and lead to just an avalanche of failures at some point.

At some point and in some cases, you just want to actually stop and restart. There’s a mix. Knowing when you can ride through, when you should ride through and when you should really just restart, that’s kind of the trick. That’s getting harder and harder I think.

[00:07:27] Camille Morhardt: How do you know when you’re being attacked? Sometimes it’s obvious. You can’t even access your system and now you’ve got a ransomware request, but other times I expect it’s not so obvious. Maybe, in fact, there are diversion techniques happening where you’re focused on one problem while something else is actually being siphoned off somewhere else. So how do you keep aware?

[00:07:48] Ron Perez: That’s a good question too, and another one I’m not sure I have all the answers to. Bringing it back to confidential computing, I would say the beauty of those types of technologies is that they assume that you’re going to be attacked. Really the goal is to make sure that even in an environment of constant attacks, you can detect if your data or your code has been modified, which is really what you care about, that it’s no longer the thing you thought it was or the thing that it should be, and/or it will stop running if there’s an attack that breaks the properties confidential computing establishes, that confidentiality piece, where the system, the hardware, the thing that we trust at the bottom, basically says, “You’ve broken this boundary and I can’t allow this software to run anymore.”

That’s a good thing. It may be an availability problem for you because you need this workload to be running, but I think in many cases you would rather that your sensitive workload, and access to this data, cease rather than be exposed. I’d rather it just stop than be exposed.

[00:08:53] Camille Morhardt: Do you classify every kind of different device and application, et cetera, all within your network or company or organization individually, so that you know what you’re okay to just turn off and what you’re okay to lumber through and try to keep access to? Do you do it by human or by device? Obviously humans have multiple devices now, so how does an organization think about that?

[00:09:21] Ron Perez: I think companies have different approaches to this. A cloud service provider that’s really interested in this at-scale capability may do it very differently from some smaller enterprise. The enterprise can focus more on, “Where do I want redundancy? Where do I need high-availability components?” Again, in a cloud environment where they really focus on that scale, they assume that a number of components are going to fail at any second: somewhere in my data center, there are going to be failures. I’ve got to be able to overcome that, to ride through that. They address that, in large part, through software redundancy or software-based redundancy. Just have multiple copies of the same thing residing in different parts of the data center or across different data centers.

[00:10:07] Camille Morhardt: How do you go about protecting these legacy style systems and/or even physical environments where things like servers or personal information are held?

[00:10:17] Ron Perez: The last part of that question is probably the most interesting, to me anyway, because we’re seeing the cloud, which was pretty much established as a glasshouse environment. These are very secure data centers, or mega data centers, but they’re very secure. There are only a few of them in any particular geography. Now, in large part because of technologies like 5G, which is really addressing some of the latency issues to the end user, we’re seeing a lot more interest in moving workloads and the data farther out to the edge, edge computing.

That’s great for all of us. It means more redundancy, more duplication too, because you never know where you’re going to be or where the data’s going to be needed, any time of the day or any place in the world. The bigger issue is the last part you asked about, the physical access. As you move the workload and the data out to the edge, you start placing these servers, these computing environments, in areas where they could be exposed to more attacks and physical access from the bad guys. Whether it’s a server hanging on a telephone pole or in some base station someplace that’s just behind a locked door, whatever it is, they’re going to be more exposed to those physical attacks. So we’re seeing a lot more interest in those physical protections. How can we provide physical attack prevention and detection capabilities and still not impact the overall cost of these solutions?

[00:11:46] Camille Morhardt: You’re not just talking about physical in terms of I get a sledgehammer and I bash in the metal door and now I have physical access. You’re talking even about sitting down with a laptop within 20 feet or within a couple of feet of a server and somehow being able to access information.

[00:12:05] Ron Perez: Exactly. Exactly. Either through EMI leakage, power analysis, even just walking away with the memory DIMMs from a system.

[00:12:16] Camille Morhardt: What about insider threats? What do you think about those? Is there really a way to protect against them other than sort of monitoring people for very unusual behavior, like sending out a whole bunch of attachments to their personal account?

[00:12:30] Ron Perez: Yeah, I think of insiders more from an admin capability standpoint. Right?

[00:12:34] Camille Morhardt: Yeah, like a human who’s suddenly turned against you.

[00:12:40] Ron Perez: Yes, yes. Exactly. I think, again, that’s another area that’s driving this trend around confidential computing: you don’t want to have to worry about the admin for any particular system or data center. They should be focused on ensuring that the systems stay up, that they’re efficient, that they have the latest software, et cetera. The fact that they could have access to the data is a concern. Just by separation of privilege, they shouldn’t have that capability. Confidential computing essentially takes that privilege away from them. All they need to know is, this is how much memory I need, this is how many CPUs I need, or this is how much bandwidth I need for a particular workload. I don’t need to see what’s in that workload. I don’t need to have access to that.

[00:13:23] Camille Morhardt: That’s very interesting. So removing the human from the equation entirely when it comes to very sensitive information.

[00:13:30] Ron Perez: Exactly. Yeah.

[00:13:31] Camille Morhardt:  What other sorts of threats do you think are on the horizon that I haven’t thought of?

[00:13:37] Ron Perez: I think we have covered the main classes. Physical attacks are a big one; the scalability and complexity that come with that at this global scale are the basis for a whole set of concerns. The geopolitical aspects that we’re seeing more of these days as well. I think, because of what’s happening, we’ve become more globalized now and people are kind of rethinking that. Not that they’re going to back off from being globalized, because we kind of have to do that, but what happens if things change from a geopolitical standpoint? Do I have the ability to pull back my data from some geography? What happens if law enforcement in that particular country or geography decides to seize all the contents of a data warehouse or of a data center? Do I have a plan in place for that? So we’re seeing a lot more interest in those types of questions.

[00:14:28] Camille Morhardt: What would the plan be? Locking down the data or deleting the data yourself?

[00:14:33] Ron Perez: Yeah, I think you know you can’t stop somebody from coming in and just taking the servers, especially if they are representing law enforcement, but you want some assurances, just like with encrypted data at rest, that they can’t actually see the data anymore. It has those protections we’ve been talking about. They can kind of see that there’s something running there, but they can’t see what’s inside it.
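
For context on the “encrypted data at rest” assurance Ron mentions, here is a minimal sketch using authenticated encryption (AES-GCM) from the third-party `cryptography` package. Key handling is deliberately simplified for illustration; in practice the key would live in an HSM or a key management service, never alongside the data.

```python
# Minimal data-at-rest encryption sketch using AES-GCM (pip install cryptography).
# If the disk or server is seized, the ciphertext is unreadable without the key,
# which in a real deployment would be held in an HSM or key management service.
import os
from typing import Optional
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_record(key: bytes, plaintext: bytes, aad: Optional[bytes] = None) -> bytes:
    nonce = os.urandom(12)                      # unique nonce per encryption
    return nonce + AESGCM(key).encrypt(nonce, plaintext, aad)

def decrypt_record(key: bytes, blob: bytes, aad: Optional[bytes] = None) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]    # authentication fails if data was tampered with
    return AESGCM(key).decrypt(nonce, ciphertext, aad)

if __name__ == "__main__":
    key = AESGCM.generate_key(bit_length=256)   # longer keys also add margin against brute force
    blob = encrypt_record(key, b"customer record")
    print(decrypt_record(key, blob))            # only the key holder can read it back
```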

[00:14:57] Camille Morhardt: That’s very interesting. Okay, we didn’t talk about … Well, two things we didn’t talk about that are kind of buzzing around in my head: quantum compute and quantum cryptography. I don’t know that we need to talk specifically about how it works and the algorithms, but what about how compute in the world is going to change and everything is going to become exposed? Are people thinking ahead enough on this front?

[00:15:23] Ron Perez: “Enough,” I’m not sure about, but people are definitely thinking ahead. The National Institute of Standards and Technology has had a competition going for quite a while now, and the Europeans are doing the same thing, on post-quantum crypto algorithms. I think we’re coming to a head on some of those right now. That will address some of the concerns around quantum computing being put to the task of breaking modern crypto. So we’re going to have these post-quantum crypto algorithms and capabilities. We’ll start to see some of those. In fact, some of those are already being rolled out even before the standards are set, as various companies start playing with these technologies to see what the impact is.

For many other technologies, it’s just going to be an issue of extending the key lengths of the crypto systems that we use today to make it that much more difficult to brute force break those crypto algorithms.

[00:16:18] Camille Morhardt: At least an interim mitigation. Okay.

[00:16:20] Ron Perez: Yeah, yeah.

[00:16:22] Camille Morhardt:  What is your take on supply chain security? That’s another topic that kind of gets thrown around a lot. Do you have any pithy advice on that front? I know it’s a gigantic-

[00:16:34] Ron Perez: No, other than there is a lot more interest, a lot more work going on in that area. So that is, again, oddly related to confidential computing insofar as that attestation capability, the ability to attest the environment that you have. We’re seeing similar sorts of capabilities, some of which have been developed for quite a while now, where you can basically attest the different components of a system as it moves through that supply chain, so that at the end you get some assurance that these are all authentic components, they’re all the right pieces of the system, assembled in the right way, for this particular system that I care about, and of course, at scale, for all the systems in a data center. So I think that’s going to be an exciting area as well. Now, making sense of all the data that we’re going to get is going to be the next challenge. That’s where AI comes in.

[00:17:27] Camille Morhardt: I think sometimes when people say AI or machine learning, it’s sort of scary. It’s like, “Oh my god, we don’t do that. We don’t have that kind of thing running here.” Yet we do know we have manual processes that could definitely be improved. Is there an early step in automation that companies can take that’s not full-blown AI, “I’m going to train my own central model and deploy it within my company,” which is intimidating, I think, depending on how big you are and what kind of staff you have?

[00:18:00] Ron Perez: The problem that arises from the complexity is the number of what we call measurements. Just think of a measurement as a way to identify a particular piece of hardware or firmware, a version of the firmware, a version of the software; think about all the different software components that go into even an operating system. You end up having tons and tons of these measurements, or fingerprints, if you will, that are unique for that piece of software or hardware.

Now, the challenge is, how do we make sense of all those? How do we know which ones are the right ones? We may have a white list, if you will, like these are all the good measurements, but there are probably many versions that are good and a few that aren’t good. Maybe even the combination of a bunch of good measurements used in the same environment could lead to bad results. That’s where some very basic AI capabilities, if not just statistical asset-management-type capabilities, come in, keeping track of all these different measurements. In a cloud environment, you can maybe limit it by saying, “All right, we’re always going to be up to date, so there’s going to be a limited number of measurements that we have to focus on or worry about,” so that the problem doesn’t become too intractable.
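
As a rough illustration of the measurement bookkeeping Ron describes, the sketch below hashes a few components and checks them against an allowlist of known-good measurements. The component names and digest values are hypothetical placeholders, not a real attestation protocol.

```python
# Toy allowlist check over component "measurements" (hashes), in the spirit of
# the asset-management problem described above. Components and digests are
# illustrative placeholders only.
import hashlib
from pathlib import Path

# Allowlist of known-good SHA-256 measurements per component (hypothetical values).
ALLOWLIST: dict[str, set[str]] = {
    "bootloader": {"9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"},
    "kernel":     {"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"},
}

def measure(path: Path) -> str:
    """A 'measurement' here is simply the SHA-256 digest of the component image."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def verify(components: dict[str, Path]) -> list[str]:
    """Return the names of components whose measurement is not on the allowlist."""
    return [
        name for name, path in components.items()
        if measure(path) not in ALLOWLIST.get(name, set())
    ]

if __name__ == "__main__":
    flagged = verify({"bootloader": Path("/boot/loader.bin"), "kernel": Path("/boot/vmlinuz")})
    print("unrecognized measurements:", flagged or "none")
```

Keeping the allowlist small, as Ron suggests for an always-up-to-date cloud fleet, is what keeps this kind of check tractable at scale.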

[00:19:14] Camille Morhardt: Ron Perez, Intel Fellow and Chief Security Architect, still makes me laugh when I say that. That’s just-

[00:19:21] Ron Perez: Me too.

[00:19:23] Camille Morhardt: … such a giant title. I don’t know how you even get up and go to work. I think I’d be discouraged even with the sound of that title. Thank you so much for joining us.

[00:19:32] Ron Perez: Thank you. My pleasure.
