[00:00:36] Camille Morhardt: Hi, and welcome to today’s episode of What That Means. Today, we’re going to do Confidential computing with Amy Santoni. She’s Senior Principal Engineer at Intel in charge of Xeon Security. So welcome to the show, Amy. It’s nice to have you here.
[00:00:53] Amy Santoni: Thank you.
[00:00:54] Camille Morhardt: Before we get into the definition of confidential computing, I’m hoping you can just tell us what is Xeon for those people who may not be familiar with it.
[00:01:02] Amy Santoni: Sure. Xeon’s a line of processors that Intel produces that’s targeted for data center usages. The traditional data center usages are the multi-sockets that could be used either by enterprise or cloud. We also use the Xeon brand for new capabilities that are coming, like for 5G routers. We have a line of [inaudible 00:00:45] for 5G base stations. We also have a line that’s targeting networking. So we have different lines targeted for different data center usages, but all supporting that data center transformation that’s happening right now.
[00:01:35] Camille Morhardt: So we’re in the server space?
[00:01:36] Amy Santoni: Server space. Yep.
[00:01:39] Camille Morhardt: Okay. So will you also then just do us the favor of defining confidential computing in a couple of minutes, that is a buzzword that’s all over the place.
[00:01:47] Amy Santoni: So confidential computing’s really about protecting the data while it’s being processed. If you look at the journey we’ve had with data, we started encrypting data at rest, so on disk, the hard drives. We’ve been encrypting data as it transports: when I go from my laptop to a website, HTTPS is the secure transport layer. And so now the next generation is, “Hey, how do I protect the data while it’s being processed?” We have DRAM on the local computer. How do I make sure it’s protected while it’s in the DRAM and processing in the CPU?
[00:02:24] Camille Morhardt: Okay. So it’s been an evolution from beginning with it in storage, beginning with it while it’s being transmitted and now we’re looking at actually while it’s being used.
[00:02:35] Amy Santoni: Yeah. So confidential computing is really focused on while it’s being used. So the other one we’ve solved and then confidential computing the new part is, “Hey, while I’m computing on this data, let’s make sure it’s confidential and it’s protected.”
[00:02:46] Camille Morhardt: Why is that the third one that we’re looking at?
[00:02:49] Amy Santoni: I think it followed the attack vectors. If you think about how malware started, it started corrupting things on your disk. And then people started putting sniffers or using things at the network side to intercept things between point A and point B. And so this is where the attacks are going and where we need to start protecting. So it was following the attacks as they got to increasing levels of difficulty.
[00:03:13] Camille Morhardt: So what sort of use cases do you think it’s going to be enabling? Or what are you seeing it enabling already?
[00:03:21] Amy Santoni: There’s lots of different use cases, but the ones I’m most excited about are the ones that are enabling new capabilities, new data sharing among different entities while preserving privacy. The buzzword for that is privacy-preserving analytics, but really it’s saying, “Hey, I’ve got hospital A and hospital B, and they both have a lot of data on patients.” One of the examples that’s recent is COVID x-rays. So I have all these x-rays, and I can put them into an AI model and train that model to look at these different x-rays, improve the accuracy of reading the x-ray, and automate it. And I can get data from all the hospitals to train these models while still preserving the privacy of the patients behind the x-rays, because I’m using the data about the x-rays to train a model that can then be shared by all these hospitals, but I’m still preserving the privacy of all the individual patients whose data was used to train that model.
[00:04:21] Camille Morhardt: So a couple of the terms that come up, if you look at confidential computing are secure enclave and trusted execution environment. Can you explain what those are in the context of confidential computing?
[00:04:33] Amy Santoni: Confidential computing involves, let’s call it, three main vectors that we focus on. One is protecting the data while it’s in the DRAM: we have encryption to protect the data, so if anyone steals your DRAM and tries to dump it, they’re not going to see the plain text data. So there’s an encryption part of protecting the confidentiality of the data while it’s sitting in DRAM. Once it comes from the DRAM into the CPU, it’s decrypted. And so then we need to create a hardware-based environment to protect that code and data running on that CPU from other code and data running on that CPU. And that’s the trusted execution environment: a new environment and new hardware protections to protect the code and data within it. A secure enclave is a particular trusted execution environment.
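The memory-encryption idea Amy describes can be sketched as a toy model: DRAM only ever holds ciphertext, and the key stays inside the CPU's memory controller. This is purely illustrative; real hardware (for example, Intel's Total Memory Encryption) uses AES engines in silicon, not the SHA-256 keystream improvised here, and the class and function names are invented for this sketch.

```python
import hashlib
import secrets

def keystream(key: bytes, addr: int, length: int) -> bytes:
    """Derive a per-address keystream so the same plaintext encrypts
    differently at different addresses (illustrative construction only)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(
            key + addr.to_bytes(8, "big") + counter.to_bytes(8, "big")
        ).digest()
        counter += 1
    return out[:length]

class EncryptedDRAM:
    """Toy DRAM that only ever stores ciphertext; the key never leaves
    the 'CPU' side of the model."""
    def __init__(self):
        self._key = secrets.token_bytes(32)  # held inside the memory controller
        self._cells = {}                     # address -> ciphertext

    def write(self, addr: int, plaintext: bytes) -> None:
        ks = keystream(self._key, addr, len(plaintext))
        self._cells[addr] = bytes(p ^ k for p, k in zip(plaintext, ks))

    def read(self, addr: int) -> bytes:
        ct = self._cells[addr]
        ks = keystream(self._key, addr, len(ct))
        return bytes(c ^ k for c, k in zip(ct, ks))

dram = EncryptedDRAM()
dram.write(0x1000, b"secret recipe")
assert dram._cells[0x1000] != b"secret recipe"  # a DRAM dump shows ciphertext
assert dram.read(0x1000) == b"secret recipe"    # the CPU decrypts on access
```

The point of the sketch is the trust boundary: anyone who "steals the DRAM" sees only `_cells`, while decryption happens only on the CPU side where the key lives.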
And so it protects that code and data while it’s being processed within the CPU. And then once you’re done processing, it may go back to DRAM and may go back to disk, but it’s the construct of hardware protection within the CPU. The third vector is, let’s call it, the person writing the software that wants to run in this trusted execution environment: how do they know it’s running on genuine, good hardware? Because with all these virtualization techniques and emulators, we want to make sure that no one can trick the software into believing, “Hey, you’re running on Xeon.” So we have some cryptographic credentials. Think of them as certificates that say, “We are Intel, and we’re genuine Intel.” And it talks directly to the hardware, so even if the OS or VMM were trying to spoof it, we have protections in place to make sure it’s non-spoofable. So you’ve got the “am I running in a good environment,” the hardware hooks to create the environment, and then protecting the data while it sits in the DRAM.
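The attestation flow Amy outlines can be modeled in a few lines: the hardware measures the loaded code, signs that measurement with a credential only the genuine hardware holds, and a remote verifier checks both. This is a deliberately simplified model; real remote attestation (such as Intel SGX's) uses asymmetric signatures rooted in Intel-issued certificates rather than the shared HMAC key stood up here, and `HARDWARE_KEY` and both function names are hypothetical.

```python
import hashlib
import hmac

# Stand-in for a credential fused into the CPU at manufacture (hypothetical;
# real attestation uses asymmetric keys and a certificate chain).
HARDWARE_KEY = b"fused-into-the-cpu-at-manufacture"

def enclave_quote(code: bytes) -> tuple:
    """'Hardware' measures the loaded code and signs the measurement."""
    measurement = hashlib.sha256(code).digest()
    signature = hmac.new(HARDWARE_KEY, measurement, hashlib.sha256).digest()
    return measurement, signature

def verify_quote(measurement: bytes, signature: bytes,
                 expected_code: bytes) -> bool:
    """Remote verifier checks both the hardware credential and that the
    measured code is the code it expected to be running."""
    good_sig = hmac.compare_digest(
        signature, hmac.new(HARDWARE_KEY, measurement, hashlib.sha256).digest())
    good_code = hmac.compare_digest(
        measurement, hashlib.sha256(expected_code).digest())
    return good_sig and good_code

code = b"my confidential workload"
m, s = enclave_quote(code)
assert verify_quote(m, s, code)                      # genuine hardware, expected code
assert not verify_quote(m, s, b"tampered workload")  # a swapped workload is rejected
```

An OS or VMM trying to spoof the environment would have to forge the signature without the hardware-held key, which is exactly what the real scheme is designed to prevent.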
[00:06:24] Camille Morhardt: Okay. That’s like a whole lot of stuff. That’s like a three part series of protection layers. So why wouldn’t we put everything in a trusted execution environment, or why wouldn’t we put absolutely everything we’re doing in a computer? Why is there anything that’s not part of it?
[00:06:49] Amy Santoni: Great question. And you’ll probably get different answers depending on who you talk to, so I’ll just fully acknowledge that. There’s a couple of considerations. One is nothing’s for free. When I add encryption or put things into a new construct in the CPU, there’s some performance cost to it. So there is a performance loss; we try to minimize it, we try to keep it low, but it’s not free. There’s also software enabling that has to be done. The software has to understand this new hardware construct, this new enclave or this new trusted execution environment. We try to make things easy, but whether people do that software work depends on how important it is to them. So those are two considerations that people have. And the third is just how confidential the data is. Maybe the data’s not confidential enough, or they’re not worried about that data, and so why take the extra work or performance cost to do it? So it varies based on those considerations.
[00:07:48] Camille Morhardt: Obviously very confidential or personal data you would want to put in there, based on what you just said. But are you putting it in at the detailed data level? Are you choosing an application that you’re putting in? Are you picking an OS? Because of course you’re talking about a server, so you could even have multiple OSs on a single server. So what level are you making your decision at?
[00:08:12] Amy Santoni: So different trusted execution environments have different levels. That’s one. At Intel we have Software Guard Extensions, and that’s targeted for application writers. It runs at the application privilege level within the CPU, and you can put your whole app in it, or you can split your app into, let’s call it, trusted and untrusted parts, depending on how much software development you want to do. There are other trusted execution environments that work more at an OS layer, so they include the operating system and all the applications that run on top of that operating system. That’s another way you can draw the boundary. And so those are the two that I’m aware of; they tend to be at those levels. With Software Guard Extensions, applications tend to be broken up into chunks of DRAM at four-kilobyte granularity, so you get to choose the granularity within your app, how much you want to put in trusted and untrusted.
[00:09:05] Camille Morhardt: So the App developer is the person who’s deciding what portions of the App, not the end user?
[00:09:12] Amy Santoni: It can be, or they could put in the whole app. There’s an open-source project called Gramine that tries to make it easy to take your whole application and put it in a container, and so, let’s call it, the amount of enabling using something like Gramine goes way down. If you’re writing a security-focused application and understand all of the constructs, you can split your app into these trusted and untrusted parts. Again, the level of detail and the level of software enabling is greater in that second case, but it reduces the attack surface to the smallest possible one, because you’re cutting out a part of your app and saying, “This is the most critical part that I want to protect,” and all the rest of the app is untrusted; it can’t get to that data. Let’s call it a vault.
And so you can create a little vault within your app. There are trade-offs between how much software development you want to do to protect, let’s call it, a portion of your app, versus just taking your app today and wrapping it in some container that’s in a trusted execution environment, which still gives you some extra protection that you wouldn’t have had if you didn’t wrap it. And it could go more granular. So it depends on the trade-off of what you’re trying to protect.
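The trusted/untrusted split Amy describes is, at heart, a design pattern: isolate the secret behind a narrow interface so the bulk of the app can be buggy without leaking it. The sketch below only illustrates that pattern in plain Python; in real SGX the boundary is enforced by hardware (via ECALL/OCALL transitions), not by language features, and the `EnclaveVault` class and its key are invented for this example.

```python
import hashlib
import hmac

class EnclaveVault:
    """The small trusted part: holds the secret and exposes only a narrow
    API. Models the 'vault' carved out of an app; in real SGX this boundary
    is hardware-enforced, not a Python convention."""
    def __init__(self, signing_key: bytes):
        self.__key = signing_key  # the secret; never handed back to callers

    def sign(self, message: bytes) -> bytes:
        # The only operation the untrusted side may request.
        return hmac.new(self.__key, message, hashlib.sha256).digest()

# Everything below is the untrusted rest of the app: it can ask for
# signatures, but a bug out here has no path to read the key itself.
vault = EnclaveVault(b"most-critical-secret")
tag = vault.sign(b"order #42")
assert len(tag) == 32  # a SHA-256 HMAC tag, computed inside the 'vault'
```

The smaller the vault's interface, the smaller the attack surface, which is the trade-off against the extra development work of partitioning versus wrapping the whole app with something like Gramine.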
[00:10:28] Camille Morhardt: So a question for you. There are these two different trends, I would say, that are both happening simultaneously. There’s this decentralization or distribution of data that we see with blockchain, and that we see with some emerging use cases within artificial intelligence and machine learning, like federated learning. Then on the other side, we’ve got this major push toward putting a lot of data in cloud service providers, which seems more of a centralized data approach. So how does confidential computing play into each one of those?
[00:11:04] Amy Santoni: So I’m most familiar with, let’s call it, the centralized model, the cloud computing one. In some sense, for people who move to the cloud, let’s say they had a private set of servers. And again, I’ll talk servers, because that’s what I know best. They had a private set of servers that they owned and maintained, and that cost money. You’ve got to have the people to service it. You’ve got to have people keep the software up-to-date and things like that. One of the benefits of moving to cloud is I don’t have to have my own on-premise computing; I can go to cloud. One of the promises of confidential computing, or what cloud service providers are telling us, is, “Hey, by offering confidential computing, we’re taking some of those customers who were reluctant to come to cloud before, because they were worried about their data being shared and on the same system as other people’s data.”
The example a lot of our marketing people use is Pepsi and Coke: the secret recipe for Coke and the secret recipe for Pepsi, and they both offload to the cloud, and you don’t want the software for Coke to accidentally get the recipe for Pepsi and vice versa. One of the things that confidential compute does is it hardens the virtual machines that Coke and Pepsi would rent, and it protects the data in that, let’s call it, container (whether it’s an enclave or some other trusted execution environment; there are different ones out there, as I said). It makes sure that, “Hey, the Coke stuff is encrypted differently or has access control that’s different from the Pepsi one.” And so the data is not centralized, meaning the customer, the company renting from the cloud, still owns the data.
But what confidential computing’s bringing is extra confidence: I can take these things that maybe I wasn’t comfortable taking to cloud before and move them to cloud, and I have this extra hardware layer of protection to keep my data private from other people running on the same machine, but also from the cloud service provider, from that virtual machine monitor that happens to be running, that’s owned by, let’s call it, the Googles or the Microsofts. Even if their software had a bug in it, it can’t leak the data of these confidential compute owners. That’s the promise the cloud service providers want to offer, and they believe it’ll help move some people who were reluctant to come to cloud, because there’s this new construct and new hardware and new capabilities around it.
[00:13:35] Camille Morhardt: Was there a kind of catalyst to making confidential computing a reality? I mean, you had mentioned that the technology followed a variety of vulnerabilities as it went to protect data at rest and data in transit, and now finally data as it’s being processed. But I’m thinking of course of COVID, because so much stuff had to move to the cloud quickly. Was that a true catalyst, and have there been others?
[00:14:03] Amy Santoni: So COVID, I think, helped make people realize, “Hey, I need this agility.” I need agility to change how I do my computing, because all of a sudden computing needs went way up during COVID with all the remote work. But I think there was a push for this even before COVID, the move to protect the data while it’s being computed; I think people have recognized that for a while. I don’t know that I have a good example of a catalyst other than the one I’m familiar with: like I said, we’ve called it the Snowden effect, when people realized that the government could get to some data they didn’t think it could get to. To me that’s what raised awareness, and then people started saying, “I need some protections in place.” At least that’s the catalyst I’ve seen. I don’t know if it’s the catalyst, but it’s the one I’ve seen in my experience.
[00:14:52] Camille Morhardt: Okay. So you’re saying not only is this data protected and secured down at the hardware layer against leaks to some other software, but even the cloud service provider on whose hardware this is running cannot access it.
[00:15:15] Amy Santoni: Right.
[00:15:16] Camille Morhardt: Or any other third party?
[00:15:17] Amy Santoni: I don’t think I can say it’s impossible, but it would take advanced hardware-level techniques to be able to break some of these things, because nothing’s a hundred percent unbreakable, let me put it that way. But the bar went way up. Prior to confidential compute, the OS or the VMM had access to the application’s data; they did it to make the software do what it was able to do. And what we’re doing is hardening, let’s call it, a layer around the application, or even around the virtual machine with its applications in it, to prevent the virtual machine monitor software from accessing the data within there. We’re also looking at, let’s call it, management software that may run on that; it’s protected from that, and it’s protected from cloud administrators being able to see the data. So that’s why the cloud service providers believe providing these extra layers of protection will grow their business: some people who wouldn’t have moved their data into an environment where they didn’t feel they had enough control over the protection of their data may now move their data there.
[00:16:25] Camille Morhardt: I see what you’re saying. Yeah, it’s an added layer. So when you say management software, are you talking about basically provisioning the user or provisioning the operating system?
[00:16:35] Amy Santoni: Yeah. Someone with admin privileges to the server still can’t get to this data, because the hardware constructs in place prevent it.
[00:16:46] Camille Morhardt: We talked before about how we don’t put everything in this environment. Is there a trajectory over time where we would ultimately have everything encrypted and protected while it’s in use, once there’s no longer a performance constraint?
[00:17:01] Amy Santoni: That’s possible. I mean, we’ve heard Microsoft say they think the majority of their cloud, let’s call it, infrastructure-as-a-service will be running in some trusted execution environment, let’s call it, this decade. The growth projections for confidential compute vary from like 5x to 20x. It’s not a science, but it’s definitely growing. As for that crossover point between more things not encrypted and more things encrypted, I’m not sure where that is, but the predictions I’ve seen say 2025, 2026. Not that far away, but they’re predictions, so how believable they are you never know.
[00:17:46] Camille Morhardt: More generally, if you’re in charge of security for Xeon, what is the spectrum of things that you’re looking at in your role, of the things you’re allowed to talk about publicly?
[00:18:00] Amy Santoni: I tend to look at how to harden the foundation of the boot cycle: how do I know that I’m booting what I expected to boot, and that all of the firmware and data that I load is authentic? So how to securely bring up the processor and the platform is where I spend some time. And there have been new shifts in the industry, where people are adding these external roots of trust that want to gather information all the time, almost like a heartbeat: “Hey, what are you running now? Hey, what are you running now? Did I authorize that? Did something change? And if it changed, was I aware of that?”
So there are some new industry trends there around how we standardize that communication from these diverse sets of roots of trust, which may be on different platforms from client all the way to server, to the different components they want to ask information from. So I spend some time on that. I spend time on confidential computing. And then the third vector that we look at is memory safety: how can we help software be safer? Software’s complex; software’s got many lines of code. Are there hooks we can put in hardware to help the software writers make their software more secure, less vulnerable to known software attacks? So those are the three vectors that I tend to spend time on.
[00:19:23] Camille Morhardt: And what about newer trends like machine learning or artificial intelligence, any of those emerging models or styles or mechanisms for compute? I should probably even throw in Internet of Things. Did those things fundamentally change how the industry is looking at server security, or do they just fall into line with what’s already being looked at?
[00:19:53] Amy Santoni: You talked about buzzwords at the beginning, and cloud-to-edge is a big buzzword that you hear. How do I protect the data, whether it’s being computed in the cloud or being computed in a more geographically dispersed environment, and what’s the software to tie those two together? That continues to be an evolving thing. And so what we’re trying to do is make sure that all of those processing places along the path, again from a security-centric point of view, have a trusted execution environment. They don’t all have to be the same necessarily, but they have some protection. So whether I’m processing here or processing there, I have some protection for my data. I’d say the other thing, at least specific to the server, that’s growing in importance is physical attack protection.
With big data centers, there are some extra layers of protection that can exist. But as I move computing closer to the edge or to the end devices, that could be in a shopping mall, or, to improve people’s experience with their phones, a Facebook or a Verizon may co-locate some of their servers in geographically different environments. And so they can’t necessarily know the physical protection around those servers if they’re putting one in the mall or a football stadium and whatnot. So physical protection has grown in importance, because you see computing continuing to move to lots of different places, with diverse levels of protection in those environments. Base stations are on a pole somewhere. And so physical attack protection is something that I’ve spent more time on and have ramped up on, because you could call that IoT, or you could call it the 5G rollout, or you could call it any of these different things, but it’s grown in importance and awareness over the last few years I’ve been working in this area.
[00:21:50] Camille Morhardt: So when you talk about physical protection, are you talking about protecting servers from somebody with a baseball bat, or are you talking about protecting them from somebody sitting near them with a laptop who’s hacking in because they have physical access to a wireless signal or something?
[00:22:06] Amy Santoni: I’m talking about someone taking a probe, for example, and sniffing the data as it goes between, let’s call it, the CPU and a GPU, a discrete graphics card. That link is usually connected by a protocol called PCIe, and the data would travel in plain text, so if someone was able to sniff or read that data as it traveled, they could get it. Recently in the PCIe consortium, there’s a new capability to protect that link, so data traveling from one computing element to the other can be protected from people sniffing it. That was a new industry-level thing. It’s things like that: protecting these links on the platform, not so much from a wireless signal, but from physically accessing and trying to get the data.
[00:22:54] Camille Morhardt: Okay. Because in the classic sense of on-prem, like you were talking about before, the servers that were housing your data were behind a barbed-wire fence, with locked-door access only, background checks, all the rest of it. And possibly even more secure at cloud service providers, where this is of paramount importance. And you’re saying, well, as servers make their way closer and closer to the edge, in addition to those other locations, we have a new attack threat, and we have less concept or guarantee of the level of protection of every single one of those servers, depending on who’s in charge of its physical security and where it sits in the world.
[00:23:38] Amy Santoni: That’s right. Exactly
[00:23:39] Camille Morhardt: Cool. Well, Amy, thank you so much for your time today. I really appreciate the conversation and getting to the bottom of some of these words.
[00:23:47] Amy Santoni: Thank you for inviting me. It’s been a pleasure.
[00:23:49] Camille Morhardt: Again, Amy Santoni with us today talking about confidential computing. She is Senior Principal Engineer and in charge of Security for Xeon, which is Intel’s Data Center Product.