InTechnology Podcast

#95 – Project Amber: Intel’s Next Innovation in Confidential Computing

 

In this episode of Cyber Security Inside Live from The Green Room, Camille talks with Raghu Yeluri, Intel Senior Principal Engineer and Lead Security Architect from Intel’s Vision Conference in Texas. The conversation covers:

  • A high-level definition of Project Amber and an overview of what confidential computing is.
  • At its core, confidential computing means that as data and IP get processed, they need to be protected and isolated from the platform and from infrastructure administrators.
  • Why customers are worried about security as they move their workloads to the Cloud and how confidential computing can help address these concerns.
  • The three stages of data protection: data at rest, data in transit, and data in use. Most customers want an independent entity to verify that the trusted execution environment is trustworthy; that trust authority is what Intel calls Project Amber.

And more. Don’t miss it!

The views and opinions expressed are those of the guests and author and do not necessarily reflect the official policy or position of Intel Corporation.

Here are some key takeaways:

  • Project Amber is a trust authority that verifies the trusted execution environment for data protection in use.
  • Confidential computing is a new technology that helps provide data protection, especially as more people move to the cloud.
  • The industry is starting to converge on two approaches to building confidential compute: trusted execution environments and homomorphic encryption.
  • Trusted execution environments are a way to enable confidential computing.

Some interesting quotes from today’s episode:

“But the workflow required to verify this in a trustworthy way is a complex operation. So the question people ask us is, how do you assure to me that a service like Amber is doing what it is supposed to do, the verification of other trusted execution environments, in an integrity-protected, trustworthy way? How do I trust that you are doing your job correctly? We call that faithful verification.” – Raghu Yeluri

“Most enterprise customers don’t like to run in one cloud provider; they want to run their workloads in multiple clouds. Some would like to run in Azure, IBM, and Google Cloud, for example. You don’t want to have a separate attestation service for each.” – Raghu Yeluri

“If I have a client device that is trying to access a service in the cloud, I need to verify my trustworthiness to the cloud service before I get access to that service. I could be a bad actor trying to access a good service that’s running in a trusted execution environment, and I can exfiltrate or infiltrate data from there.” – Raghu Yeluri

“Confidential compute, it’s the new technology focus for the industry right now, especially as more and more people are moving to cloud computing. Some people say it’s the biggest transition in computer security since the 1970s.” – Raghu Yeluri

Read more about Advancing Confidential Computing with Intel’s Project Amber from Nikhil Deshpande and Raghu Yeluri here: https://buff.ly/3w0Bi8X


[00:00:36] Camille Morhardt: Hi and welcome to today’s podcast. This episode is about Project Amber (aka: Trust as a Service). I have with me today Raghu Yeluri, who is Senior Principal Engineer within the office of the CTO at Intel. And he’s also the Chief Architect for Project Amber.  Welcome Raghu. 

So you’re going to have to start us off with just some kind of a high-level definition of Project Amber, and I’m guessing that’ll probably take us into a little bit of an overview of what confidential computing is, since they go hand in hand.

[00:01:11] Raghu Yeluri: Yes. Uh, Camille, so, confidential compute is the new technology focus for the industry right now, especially as more and more people are moving to cloud computing. Some people say it’s the biggest transition in computer security since the 1970s. And the core of confidential compute is that as data and IP get processed, they need to be protected and isolated from the platform and the infrastructure administrators. That’s the core basis of confidential computing.

As more customers are moving their workloads to the cloud, they are worried that their data and their IP can get compromised in an infrastructure that’s not owned by them; they would like that to be isolated and protected from the platform, from the infrastructure itself. And what does confidential computing give customers? They can move more and more sensitive workloads to the cloud with the assurance that their data and their IP are not compromised.

Typically there are two ways the industry is starting to converge on building confidential compute. One is through the use of what are called trusted execution environments—TEEs. The other one is something called homomorphic encryption, which is about encrypting everything, all the time, even during execution. Intel is focused on both of those, but homomorphic encryption is very computationally intensive, and it’s still early stages in the industry. So the industry and Intel are heavily focusing and investing in trusted execution environments as a way of enabling confidential computing.

So where does Project Amber come in here? When you are doing confidential compute and when you’re running things in a trusted execution environment, the ground truth of security for this environment is provided through a process called attestation. Attestation tells you that the trusted execution environment is genuine and it is running the code that you expect it to run inside the trusted execution environment, independent of the infrastructure provider. And that attestation has to be verified by somebody.
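
To make that flow concrete, here is a minimal, self-contained sketch of the attestation loop described above: the TEE emits signed evidence of what it is running, and an independent verifier checks it and returns a signed verdict. All of the names, fields, and the HMAC stand-ins for hardware and authority keys are illustrative assumptions, not the actual TEE or Project Amber interfaces.

```python
# Illustrative sketch only: fields, names, and the HMAC "signatures" are
# stand-ins, not the real TEE quoting or Project Amber APIs.
import hashlib
import hmac
import json
from dataclasses import dataclass

HARDWARE_KEY = b"stand-in for a key rooted in the TEE hardware"
AUTHORITY_KEY = b"stand-in for the independent trust authority's key"

@dataclass
class Evidence:
    measurement: str   # hash of the code loaded into the TEE
    signature: str     # produced inside the TEE, rooted in hardware

def quote_from_tee(code: bytes) -> Evidence:
    """What the TEE would emit: a measurement of its contents, signed."""
    measurement = hashlib.sha256(code).hexdigest()
    sig = hmac.new(HARDWARE_KEY, measurement.encode(), hashlib.sha256).hexdigest()
    return Evidence(measurement, sig)

def verify_evidence(ev: Evidence, expected_measurement: str) -> dict:
    """What an independent verifier (the trust authority) would do: check the
    hardware-rooted signature and the expected code identity, then return a
    signed attestation token that a workload owner can rely on."""
    sig_ok = hmac.compare_digest(
        ev.signature,
        hmac.new(HARDWARE_KEY, ev.measurement.encode(), hashlib.sha256).hexdigest(),
    )
    token = {
        "trusted": sig_ok and ev.measurement == expected_measurement,
        "measurement": ev.measurement,
    }
    token["authority_signature"] = hmac.new(
        AUTHORITY_KEY, json.dumps(token, sort_keys=True).encode(), hashlib.sha256
    ).hexdigest()
    return token

code = b"the workload you expect to run inside the enclave"
print(verify_evidence(quote_from_tee(code), hashlib.sha256(code).hexdigest()))
```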

[00:03:40] Camille Morhardt: Thank you for the explanation. And I want to dig down into a lot of different aspects of what we’re talking about here. First of all, when you say trusted execution environment, you’re talking about, okay, I’m going to take a portion of code or a portion of an application, or maybe even an entire application–if that’s how it’s been written–and I’m going to run it in an extremely secure environment. It’s going to be secure at the hardware level; it’s going to be its own little secure enclave, or trusted execution environment. Traditionally, when code is actually being processed, it’s being processed in the clear, unencrypted–until we go down the line, maybe in the future, when we get homomorphic encryption and things are always encrypted. But until then, we’ve had this scenario in the industry where, when you actually process something, that is happening in the clear.

And now you’re saying, “okay, no, we’re going to actually put the most sensitive information or application inside a trusted execution environment.” By doing that, nobody has access to that code while it’s running–including, when you say an “administrator,” you’re talking about, say, an administrator of a public cloud, for example. So even the owner of the infrastructure on which it’s running cannot possibly see that code if it’s sitting in a trusted execution environment.

[00:05:06] Raghu Yeluri: That’s absolutely correct. You know, the other way to look at it is, when you think about data protection, there are three stages to that: there is data protection at rest, data protection in transit, and then the third one is data protection in use. The industry knows how to do the first two very well. For decades we have done this; we have standards, we have broad adoption of those two.

But when all the data, when all the code comes to a compute server, or an accelerator card, for example, it is sitting in the memory on that device in the clear. In a public cloud environment, where you as an end customer don’t own the infrastructure, you are now prone to compromise and attacks by rogue system administrators at the service provider, or by another tenant who tries to compromise the infrastructure so that they can get to the data and the code that’s running in memory that belongs to you.

[00:06:13] Camille Morhardt: There must be a solution in place today. I mean, what are cloud service providers doing now in order to help?

[00:06:22] Raghu Yeluri: Most cloud providers don’t do any protection today for the code and the data that’s in use. If an end customer wants to protect it, they have to encrypt it themselves and then decrypt it right before that code gets executed or that data gets processed in memory. But even if you do that, the key to decrypt it is still in memory, and somebody can compromise that as well.

When you have a trusted execution environment, what it guarantees is that the code and the data you put in that trusted execution environment, while sitting in memory, are access controlled and encrypted, so that even if you have a rogue administrator, even if you have a compromised tenant, even if you have a compromised hypervisor, you still can’t access and decrypt that code that’s sitting in memory. The only time the code and data are decrypted and accessed is when they move into the CPU, where they execute.
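
As a concrete contrast with the pre-TEE approach described a moment ago, here is a small Python illustration (using the third-party cryptography package) of application-level encryption: the data is protected at rest, but both the plaintext and the decryption key sit in ordinary process memory while the data is being worked on, which is exactly the exposure a trusted execution environment is meant to remove.

```python
# Application-level encryption, NOT confidential computing: shown only to
# illustrate the "data in use" gap. Requires: pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()                             # the key lives in process memory
ciphertext = Fernet(key).encrypt(b"sensitive record")   # protected "at rest"

# To process the data, it must be decrypted into ordinary memory, where a rogue
# administrator, compromised hypervisor, or co-tenant could read both the
# plaintext and the key. Inside a TEE, that working memory stays encrypted and
# access-controlled, decrypted only within the CPU.
plaintext = Fernet(key).decrypt(ciphertext)
result = plaintext.upper()                              # stand-in for real processing
```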

[00:07:30] Camille Morhardt: OK, so then, if it’s in the trusted execution environment, how do I know that the trusted execution environment is what it says it is? How do I know that something else is not posing as a trusted execution environment and I open up all my code and run it? 

[00:07:45] Raghu Yeluri: That is a fantastic question. That’s exactly where attestation comes in. The ground truth of the trustworthiness of a trusted execution environment is provided through this process called attestation. Most service providers who provide a trusted execution environment also provide the verification of that attestation.

So it’s like, I am providing you the infrastructure and also telling you that the trusted execution environment I gave you is a trustworthy one. For some customers this is okay. But for most customers who are dealing with sensitive workloads, regulated workloads, and regions where there are strict regulations for data protection, that assurance from the service provider is not sufficient. They want an independent entity to verify that the trusted execution environment Service Provider X gave you is trustworthy, and to provide the proof that it is trustworthy. And that independent entity, what we call a “trust authority,” is what Project Amber is.

[00:09:03] Camille Morhardt: So does that have to be the manufacturer of the server or the processor? In this case, we’re saying Intel has Project Amber.

[00:09:14] Raghu Yeluri: It doesn’t have to be, at all. Okay. The way the hardware and the trusted execution environments work, anyone can build a trust authority to provide that independent verification of a trusted execution environment; but building, maintaining, and managing a trust authority like that at scale is a very complex endeavor. You need to have intimate knowledge of the trusted execution environments, and you need to have access to all the platform certificates, the identity certificates of the trusted execution environment. You need to have access to all the things that the trusted execution environment depends on for its own implementation. So all of that is available; somebody can build it, but it’s a very complex endeavor to build something that works at the scale required for broader adoption of confidential computing.

Because you are in the data path now, for example. When somebody wants to talk to a trusted execution environment, they need this verification, sometimes in milliseconds, before they can go interact with it; so you need that scale, you need that concurrency, you need that high availability. And for most people it is not their core competency, nor is it their core business. That is where somebody like Intel, somebody who is not in the operational path of the trusted execution environment, should be hosting a service like this.

[00:10:56] Camille Morhardt: Interesting. Is there any specific, I’m trying to think of the right word here, but because the processor itself is manufactured, in one case let’s say by Intel, does that give Intel any other advantage in terms of verifying that that’s…

[00:11:17] Raghu Yeluri: No, it does not. All the elements for verification are available in a very, very standardized way from Intel and from other trusted execution environment developers as well. But the workflow required to verify this in a trustworthy way is a complex operation.

So the question people ask us, Camille, is: how do you assure to me that a service like Amber is doing what it is supposed to do, the verification of other trusted execution environments, in an integrity-protected, trustworthy way? How do I trust that you are doing your job correctly? We call that “faithful verification.”

And every step of the process itself runs inside trusted execution environments in the Amber service itself. And every step is auditable by the customer if they choose. So they can ask Amber: when you attested, when you verified my trusted execution environment, what did you capture as evidence? What did you verify? What services in Amber were used to verify this? What is the integrity assurance of those services? And we can give them a complete signed audit report that they can give to their auditors and say, “when Amber gave me this verification, this is what was done.” Everything is traceable to the exact line of code that was used to verify.
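
As a sketch of what the customer or auditor side of such a signed audit report could look like, here is a hypothetical verification step. The report fields, the key type (Ed25519 via the cryptography package), and the in-process key generation are all assumptions for illustration; the episode does not describe Amber’s actual report format.

```python
# Hypothetical audit-report check; fields and key handling are illustrative.
# Requires: pip install cryptography
import json
from cryptography.hazmat.primitives.asymmetric import ed25519

# Stand-in: in practice the trust authority holds the private key and publishes
# only the verification (public) key to customers and auditors.
authority_private = ed25519.Ed25519PrivateKey.generate()
authority_public = authority_private.public_key()

report = {
    "evidence_captured": "sha256:...",                        # hypothetical digest
    "checks_performed": ["signature", "measurement", "policy"],
    "verifier_services": {"quote-verifier": "build 1234"},    # hypothetical
}
payload = json.dumps(report, sort_keys=True).encode()
signature = authority_private.sign(payload)                   # authority side

# Customer / auditor side: confirm the report really came from the authority
# and was not altered. Raises InvalidSignature if anything was tampered with.
authority_public.verify(signature, payload)
print("audit report signature verified")
```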

[00:13:05] Camille Morhardt: Hmm. Right. So if you’re an enterprise, for example, and you’ve been going with on-prem for certain aspects of your business–maybe you’ve migrated quite a bit over to the cloud, but there are some particularly sensitive items that you’ve kept on-prem–you’re not going to go set up your own attestation service. So this would potentially be an option for you.

[00:13:28] Raghu Yeluri: Exactly. Yep. And then the second aspect of this is that most enterprise customers don’t like to run in one cloud provider. They want to run their workloads in multiple clouds; some would like to run in Azure, IBM, and Google Cloud, for example. You don’t want to have a separate attestation service for each one of these clouds in which you are running trusted execution environments. You want one service, and it doesn’t matter which cloud you’re running your workloads and trusted execution environments in: you want a single, uniform way of getting them attested and verified, so that having your operational workloads talk to that service becomes very straightforward.

So today the state of the art is that Microsoft gives its own attestation for its customers, Google will eventually have its own, and IBM has its own. Now, if I’m a customer running in all of these, I have to know, understand, and figure out how to manage everything that comes from these three different attestation systems–on top of the first problem, which I already mentioned: the infrastructure provider itself is giving the attestation, which is not acceptable for many customers because of the separation of duties that they require. But with an independent third-party service, you have one uniform way that all the clouds can interact with, and with that you have your attestation.

[00:15:03] Camille Morhardt: So you said earlier that the industry is already well-versed in protecting data while it’s in transit and while it’s at rest, and this kind of last frontier of confidential computing is protecting that data while it’s being processed. And as we talked about, this is being processed within a trusted execution environment. And now Project Amber is verifying or attesting that trusted execution environment to be what it says it is. What’s next? I mean, are we done at that point? (laughs)

[00:15:34] Raghu Yeluri: No. I always tell people we are in the first or the second inning of confidential compute. Everything we are doing today is for trusted execution environments that run on CPUs. You can’t truly say something is confidential compute if that’s all you protect.

Let’s say you are offloading a little bit of processing, AI model processing, to a GPU or to some accelerators or an FPGA. How do you ensure that the trusted execution environments on those are exactly what they claim to be and what you want them to be?

So you can now see, we need to extend the Amber functionality to devices–the GPUs, the IPUs, the accelerators–and potentially clients, you know? If I am a client device that is trying to access a service in the cloud–a cloud service running in a trusted execution environment–I need to verify my trustworthiness to the cloud service before I get access to that service. I could be a bad actor trying to access a good service that’s running in a trusted execution environment, and I could exfiltrate or infiltrate data from there.
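
For the client-to-cloud direction described here, the service side would gate access on the client’s attestation token. Continuing the hypothetical stand-in signing scheme from the earlier sketch, a minimal version of that gate might look like the following; the token fields and the check are assumptions, not a real Amber interface.

```python
# Hypothetical service-side gate: admit a client only if its attestation token
# was signed by the independent trust authority and marks the client trusted.
import hashlib
import hmac
import json

AUTHORITY_KEY = b"stand-in for the independent trust authority's key"

def token_is_genuine(token: dict) -> bool:
    """Recompute the stand-in signature over the token body and compare."""
    body = {k: v for k, v in token.items() if k != "authority_signature"}
    expected = hmac.new(
        AUTHORITY_KEY, json.dumps(body, sort_keys=True).encode(), hashlib.sha256
    ).hexdigest()
    return hmac.compare_digest(expected, token.get("authority_signature", ""))

def admit_client(token: dict) -> bool:
    """Serve the client only if its token is genuine and says it is trustworthy."""
    return token_is_genuine(token) and bool(token.get("trusted"))
```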

[00:16:51] Camille Morhardt: And so when you say client, you’re talking about everything from the computer, the laptop that I’m using at home, to, in the future, IoT devices that may be sitting somewhere remote.

[00:17:00] Raghu Yeluri: Yep. You got it. So the roadmap for Amber, or for any attestation service, should be: start with the TEEs used today on CPUs, because that’s where most of the industry focuses; but then extend that to GPUs, IPUs, devices, and eventually to the supply chain. You know, one of the big interesting dynamics is, how do I know that the platform components are all trustworthy and that they are from a compliant vendor that I trust, before I interact with that platform?

So there is something around supply chain attestation taking shape very fast, and there is a lot of industry attention on it, both from the standards side and from the Department of Homeland Security side as well. So a natural evolution for a service like this is to provide attestation for the supply chain.

[00:17:59] Camille Morhardt: Very interesting. Raghu Yeluri, thank you very much. Senior Principal Engineer, Office of the CTO within Intel and Chief Architect of Project Amber. 

[00:18:09] Raghu Yeluri: All right, Camille. Nice talking to you. Take care.
