InTechnology Podcast

Machine Identities: How Machines Authenticate Each Other with Generative AI (157)

In this episode of InTechnology, Camille and Tom get into machine identity with Kevin Bocek, Vice President of Security Strategy & Threat Intelligence at Venafi. The conversation covers where machine identities are found, how they fit into coding with the help of generative AI, and the many growing regulations around machine identities.

To find the transcription of this podcast, scroll to the bottom of the page.

To find more episodes of InTechnology, visit our homepage. To read more about cybersecurity, sustainability, and technology topics, visit our blog.

The views and opinions expressed are those of the guests and author and do not necessarily reflect the official policy or position of Intel Corporation.

Follow our hosts Tom Garrison @tommgarrison and Camille @morhardt.

Learn more about Intel Cybersecurity and Intel Compute Lifecycle Assurance (CLA).

The Many Different Types of Machine Identities

Kevin explains how machine identities are everywhere and at the core of everything in computing, and that they come in a wide variety. Every device, operating system, piece of software, and application has its own type of identification to ensure it can communicate properly with other machines. He compares machine identities to human forms of identification like passports and driver’s licenses, which are issued by different authorities and have set expiration dates. Machine identities are usually issued by the business that runs the machine or by some other authority, and their purpose is to enable authentication and secure operations. One common example is the TLS certificate, which lets a browser verify it is talking to the legitimate website when you browse the web.
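As a concrete illustration, here is a minimal Python sketch that fetches a server's TLS certificate and prints the identity details Kevin describes: who issued it and when it expires. The hostname is a placeholder; any HTTPS site would work.

```python
# Minimal sketch: inspect a website's machine identity (its TLS certificate)
# using only the Python standard library. "example.com" is a placeholder.
import socket
import ssl

hostname = "example.com"

context = ssl.create_default_context()
with socket.create_connection((hostname, 443), timeout=5) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        cert = tls.getpeercert()  # parsed certificate of the peer

print("Subject:", dict(pair[0] for pair in cert["subject"]))
print("Issuer: ", dict(pair[0] for pair in cert["issuer"]))
print("Valid from:", cert["notBefore"])
print("Expires:   ", cert["notAfter"])
```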

Coding with Generative AI

With the recent rise of coding with the help of generative AI tools like ChatGPT, there are concerns about the security and stability of AI-generated code. Generative AI is making coding faster, but cybersecurity teams still need to make sure all code comes with an identity before it is run. Developers are becoming more like architects who direct and oversee the code that generative AI writes. Their job is now both to write code and to ensure that AI output aligns with cybersecurity standards, including machine identities.
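As a hedged sketch of what "code comes with an identity" means in practice, the snippet below signs a code artifact with a private key and verifies it before execution. It uses Python's `cryptography` package and a freshly generated Ed25519 key purely for illustration; real platform code signing (for example, Windows Authenticode) involves certificates chained to a trusted authority.

```python
# Minimal sketch: sign code at build time, verify before running it.
# Illustration only; not a substitute for platform code signing.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

code = b'print("hello from AI-generated code")'  # hypothetical artifact

private_key = Ed25519PrivateKey.generate()   # publisher's identity key
signature = private_key.sign(code)           # shipped with the artifact

public_key = private_key.public_key()        # distributed to consumers
try:
    public_key.verify(signature, code)       # check before execution
    print("Signature valid: run the code.")
except InvalidSignature:
    print("Signature invalid: refuse to run the code.")
```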

Regulating Machine Identities: Balancing Zero Trust with Root of Trust

The government plays two important roles with regard to machine identity, according to Kevin. The first is improving cybersecurity regulations to put more responsibility on software developers rather than consumers. The second is preparing for a post-quantum computing world. Kevin highlights that once quantum computers can crack current cryptography, the machine identities inside everything we use today could be broken. Government agencies like NIST, NSA, and CISA in the U.S., along with European counterparts like Germany's BSI, are working to raise awareness of the need to prepare for a post-quantum world.

Alongside regulation sit the concepts of zero trust and root of trust. Kevin defines zero trust as always authenticating everything, whereas a root of trust establishes from the beginning whether something is good or bad. Authentication is a straightforward yes or no, while trust is more complex and subjective. Importantly, a root of trust can be revoked if security concerns change. Keeping computing secure relies on balancing zero-trust and root-of-trust practices and policies in a constant process of authentication.
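To make the root-of-trust idea concrete, here is a minimal sketch of walking a certificate chain back to a self-signed root. It assumes PEM files with hypothetical names and the `cryptography` package (version 40+ for `verify_directly_issued_by`); note this check covers issuance only, not expiry or revocation.

```python
# Minimal sketch: verify a chain of trust from a leaf certificate back to
# its root. File names are hypothetical placeholders.
from cryptography import x509

chain_files = ["leaf.pem", "intermediate.pem", "root.pem"]
chain = [x509.load_pem_x509_certificate(open(path, "rb").read())
         for path in chain_files]

# Each certificate must be signed by the next one up the chain.
for cert, issuer in zip(chain, chain[1:]):
    cert.verify_directly_issued_by(issuer)  # raises if a link is broken

# The root of trust vouches for itself: it is self-signed.
chain[-1].verify_directly_issued_by(chain[-1])
print("Chain verifies back to the root of trust.")
```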

Kevin Bocek, Vice President of Security Strategy & Threat Intelligence, Venafi


Kevin Bocek is a seasoned cybersecurity strategy executive. With over two decades in the cybersecurity industry, his experience spans threat research, technology ecosystems, market trend analysis, product marketing, sales enablement, and public relations. He has been with Venafi since 2012, serving in various Vice President roles, and is currently the Vice President of Security Strategy & Threat Intelligence. Kevin earned an MBA from Wake Forest University and a B.S. in Chemistry from William & Mary.


[00:00:28] Tom Garrison: Hi, and welcome to the InTechnology Podcast. I’m your host, Tom Garrison. With me is my co-host, Camille Morhardt. And today our guest is Kevin Bocek. He’s Vice President of Ecosystem and Community at Venafi, a cybersecurity company based in Salt Lake City. Although Kevin has lived and worked across Europe, Asia, and the US, he is a US native, and he’s recognized as a leading expert in machine identity management, threat detection, encryption, digital signatures, and key management. So welcome to the podcast, Kevin.

[00:01:06] Kevin Bocek: Thanks, Tom. It’s great to be here with you and Camille.

[00:01:08] Tom Garrison: We’ve spent a lot of time talking about identity, but it was mostly about user identity: is the person sitting in front of the PC, for example, the person they claim to be. But today we’re going to talk about machine identity. And I noticed, preparing for this show, that you had a role in something we all read about in the news, a machine identity sort of scenario with Hillary Clinton’s email server. So I wonder if you could help us understand machine identity in the context of something we probably all read about back in the day.

[00:01:45] Kevin Bocek: Well, we all know machine identities anyway in our day-to-day life. It’s the way that when we log on to our bank, the little padlock in our browser glows. We know it’s safe, it’s trusted. When we get an update in our car now, it’s the way that our car knows that the update is safe and trusted. It’s all because of machine identity.

So yeah, machine identity, just like identity for us as customers or team members, says what’s good or bad, friend or foe, about any piece of software, cloud, or device that’s out there. And back in the day, when we were all really curious about Secretary Clinton’s email server, we were all interested in facts. And so one of the things we were able to bring as facts was understanding when an identity for that email server was made available online, which told us when it was or wasn’t being used for communication. It also told us something really interesting: when it was available through a web interface, too. It furthered the conversation, which I think we’ve all learned from in looking to improve cybersecurity. So machine identities are at the core of everything as we go to the cloud, to AI, and more.
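One way to do this kind of timeline analysis today is to query public Certificate Transparency logs, which record when certificates were issued for a domain. Below is a minimal sketch against the public crt.sh service; the domain is a placeholder, and the field names assume crt.sh's current JSON output format.

```python
# Minimal sketch: look up when TLS certificates were issued for a domain
# via Certificate Transparency logs (crt.sh). Domain is a placeholder.
import requests

domain = "example.com"
resp = requests.get("https://crt.sh/",
                    params={"q": domain, "output": "json"},
                    timeout=30)
resp.raise_for_status()

for entry in resp.json()[:5]:  # first few log entries
    print(entry["not_before"], entry["issuer_name"])
```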

[00:03:00] Camille Morhardt: So you basically did forensics to figure out when a server was being used and when the traffic was flowing. You had another example of that, right? Can you explain just so we can get a little bit more of a broad sense of what kind of information we can glean from it technically? So you also investigated Equifax?

[00:03:19] Kevin Bocek: In this case, as many of your listeners may know, there was a significant breach that affected people around the world; we all had to get new credit cards. While that was due to a vulnerability in software, the question was: how were the adversaries able to go undetected for 200, 300 days? And research by both the UK and US governments, along with Equifax, identified that there were hundreds of machine identities that had expired, and the threat protection system wasn’t working because of it. Machine identities allow us to know what’s trusted, and also what’s private or not. Once these were renewed and no longer expired, Equifax could immediately see what was coming and going, including the adversary.
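A minimal sketch of hunting for the failure mode Kevin describes, expired machine identities, is below. The hostnames are placeholders, and it assumes the `cryptography` package (version 42+ for the `_utc` accessor); verification is deliberately disabled so the certificate can still be retrieved after it has expired.

```python
# Minimal sketch: flag expired TLS certificates across a list of hosts.
import socket
import ssl
from datetime import datetime, timezone
from cryptography import x509

def cert_expiry(hostname: str, port: int = 443) -> datetime:
    # Skip validation so we can still fetch an already-expired certificate.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    with socket.create_connection((hostname, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            der = tls.getpeercert(binary_form=True)
    return x509.load_der_x509_certificate(der).not_valid_after_utc

for host in ["internal-app.example", "api.example"]:  # placeholder hosts
    expires = cert_expiry(host)
    status = "EXPIRED" if expires < datetime.now(timezone.utc) else "ok"
    print(f"{host}: expires {expires:%Y-%m-%d} [{status}]")
```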

[00:04:09] Tom Garrison: Interesting. So I think most of us probably when we think about machine identity, we’re thinking about it at the machine level, but machine identity actually goes all the way down even to code segments. Can you talk a little bit more about that aspect?

[00:04:28] Kevin Bocek: The world is made up of all machines, as you say, Tom. I mean, it’s not just the compute where it runs, but the operating system, the software applications. Software applications run other applications. So you have multiple levels of machines running that of course have to communicate with each other. I mean, think about us right now, this video being recorded and transported over Zoom from one machine to another. They all have to know if they’re good or bad, friend or foe. So identity becomes really, really important. And then when you think about the future, which is AI-driven: how do we know, A, that the AI models that have been trained are the same ones we trained; B, when they’re running, that they haven’t been poisoned or tampered with in some other way; and C, when they talk to another machine, is it friend or foe? All of that comes down to machine identity.
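A minimal sketch of one piece of that, checking that the model being served is the model that was trained, is below. The file names are placeholders; a real pipeline would also sign the manifest so it carries a verifiable identity of its own.

```python
# Minimal sketch: record a hash of trained model weights, then verify the
# weights have not been tampered with before serving. Paths are placeholders.
import hashlib
import json

def sha256_of(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# At training time: publish a manifest alongside the weights.
with open("manifest.json", "w") as f:
    json.dump({"model.bin": sha256_of("model.bin")}, f)

# At load time: refuse to serve weights that do not match the manifest.
with open("manifest.json") as f:
    expected = json.load(f)["model.bin"]
assert sha256_of("model.bin") == expected, "model weights were modified"
```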

[00:05:27] Camille Morhardt: It sounds like you’re pointing to a future where you have sort of autonomous machine mutual authentication before they complete a transaction or share data or exchange information. Can you talk about what is a machine identity? I mean, it’s a complicated question when you ask it of a human. I don’t know if it’s simpler when it comes to machines, especially as we span software to hardware.

[00:05:53] Kevin Bocek: Well, just like us as humans, who have multiple types of identities. So I might have a driver’s license, a passport, other forms of identity. With machines, there are different types of identities too, whether you are a Kubernetes cluster or a cloud or a piece of code that gets installed in your car. So there are different types. As I mentioned, the one we’re probably most familiar with is the one that turns the padlock on in our web browser. That’s something called a TLS certificate. And there are many others, but they have a lot in common with human identities. They expire. They’re issued by authorities. Just like I get my driver’s license from Florida, a machine identity comes from the business that runs the machine and/or some other authority. So there are a lot of commonalities in this machine world that we’ve brought over from us as humans.

[00:06:50] Tom Garrison: I’m curious about this because so many of us have heard about ChatGPT and how people are using generative AI to do all kinds of interesting things. One of those interesting use cases is coding, writing code. And so there’s the complexity associated with an AI model generating code from somewhere, somehow, and then understanding: is that good code? Is it bad code? Can I trust it? Can I not trust it? There are so many different levels of identity nested into that one action. First of all, how do you keep it straight? And second of all, I didn’t come up with that use case; that’s something people are talking about. So how do you see the future evolving with this complexity in mind?

[00:07:49] Kevin Bocek: Well, first of all, as a cybersecurity professional, I’m going to say something you might not expect. There’s good news. So the good news is that every operating system that we run today, modern, so Windows, OS X, even when you run Kubernetes out to the cloud, there’s the expectation that software comes with an identity. So code, just like when you download an application, it gets installed on your desktop or your phone. There’s actually a machine identity that says, “This came from a source and it’s either good or bad.”

Now the other side of it is that we’re going to get a whole bunch more code coming at us from generative AI. So that puts a bit of a burden on cybersecurity teams, which is that they have to make sure all code that runs comes with an identity. It’s actually not so hard; again, I said it’s built right in. But yeah, there are loads of different sources. I mean, anyone, even a non-professional coder, can ask ChatGPT to build a PowerShell script to make a virtual assistant respond to a voice command, and it will do it. So that opens up a whole new realm of both code that gets run and what that code does. Yeah, so those are some of the interesting new threats, unexpected, that we’re starting to see with generative AI. I mean, there’s more to come that we didn’t expect, didn’t plan for.

[00:09:20] Tom Garrison: And if I could follow up on that, the thing that seems new to me is that in the old days you had a coder somewhere who was sitting there, and he or she knew what they were writing as they were writing it. But now, in this new world where you have an assistant or whatever, using generative AI to write code for you, as a coder you don’t know what it created. You’re just sort of getting the output. And so I wonder, is the role of a coder now evolving away from being the person writing the code toward being the one verifying the code? Verifying that it does only what you wanted it to do and doesn’t have these extra things that somehow crept into the code. Is that how the role is going to evolve?

[00:10:06] Kevin Bocek: Being a reformed application developer, there’s a bit of a joke, but not a joke: what’s the engineer’s best friend? It’s Control-C, Control-V. When we think about generative AI, it’s giving us code that before we might have found by Googling, or back in the old days, of course, we would’ve taken from a book and typed in. The developer is still accountable. Modern software development is no longer a craft where one engineer just sits down and types; it’s highly mechanized, like a production line. So we’ve already had these modern production lines for software development. And the developer, yes, is now more and more the architect, making sure that what they’ve asked the generative AI to produce aligns. That’s something we’re going to hear a lot more about: alignment of AI. We might hear about alignment of AI especially in the sense of, “Is AI going to maybe kill us?” But I think much more pressing and near-term is: is what we’re asking for actually aligned with the outcomes that we want? And that’s why software developers are still important.

[00:11:19] Camille Morhardt: So, a similar question, I guess. A lot of us have considered the internet to be, to a degree, anonymous, or cryptocurrency, different things like that, to be, to a degree, anonymous. And I’m wondering, as we’re adding or checking machine identities, is there now a different angle to this that’s undoing some of that level of privacy or anonymity that people are interested in?

[00:11:48] Kevin Bocek: What we’re recognizing is that in the world of zero trust, which many of your listeners will have heard about and might actually be involved in, and which really boils down to authenticating everything, identity becomes really, really important. And we’re in a world where there are only more and more machines: it takes a while to get 10,000 customers, but it can take me about five seconds to spin up 10,000 instances of an application in Kubernetes. In that type of world, machine identity gets really important. And because we’re all talking about zero trust, that idea that everything is authenticated, whether that’s a piece of code or a cloud, we’ve got to have machine identity. I’d say all of the listeners have probably encountered the problems that come along with this. You’ve probably encountered a website that says, “This site cannot be trusted.” That’s because something’s broken with the machine identity: either it’s expired or it’s misconfigured, and your browser is saying, “Hey, I can’t let you talk to that other machine.” That’s something we’re going to experience only more in the future.
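A minimal sketch of that authenticate-everything idea is mutual TLS, where both ends of a connection present machine identities and an expired or misconfigured certificate fails the handshake. The file names and hostname below are placeholders for credentials issued by your own certificate authority.

```python
# Minimal sketch: a mutual-TLS client. Both sides authenticate; the
# credentials and hostname are hypothetical placeholders.
import socket
import ssl

ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH,
                                 cafile="internal-root.pem")  # root of trust
ctx.load_cert_chain(certfile="client-cert.pem",
                    keyfile="client-key.pem")  # this machine's identity

host = "service.internal.example"
with socket.create_connection((host, 8443), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        # Reaching this point means both machine identities checked out;
        # a broken identity on either side aborts the handshake instead.
        print("Authenticated peer:", tls.getpeercert()["subject"])
```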

[00:13:07] Tom Garrison: What is the role of government and what actions are they taking in terms of like NIST and directives that are coming out in that sense?

[00:13:17] Kevin Bocek: Yeah, so certainly I think government is playing a big, active role right now in two ways. One is in the US, with loads of great effort to improve cybersecurity and to make sure that individual consumers don’t bear the burden; that’s one of the things we’ve seen the Biden administration bring forth. It’s putting more responsibility on software developers, which means we need to have assurance of the software we’re taking in. The software we’re building, AI models: that’s putting more burden on the identities of the code and of the machines we’re delivering, so we can deliver them safely.

The second thing is, of course, preparing for a post-quantum world. When quantum computers are able to crack the cryptography that underlies everything, the machine identities that allow you to make a payment when your payment terminal knows it’s talking to Mastercard or Visa, or when you go online to check your bank account balance, or when you get a software update for your car, all of those underlying identities could be broken by a quantum computer. And so that’s something that NIST, NSA, and other government entities like CISA, and again, as I mentioned, in Europe the BSI in Germany, are all raising awareness about: that we have to prepare for a post-quantum world, that we have to start understanding what’s out there, and that we have to get ready to change it.
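The "understand what's out there" step can start as a simple inventory of the key algorithms behind your machine identities, since RSA and elliptic-curve keys are the ones a large quantum computer could eventually break. A minimal sketch, assuming a directory of PEM certificates and the `cryptography` package:

```python
# Minimal sketch: inventory certificate key algorithms as a first step in
# post-quantum readiness. The certs/ directory is a placeholder.
import glob
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import rsa, ec

for path in glob.glob("certs/*.pem"):
    with open(path, "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())
    key = cert.public_key()
    if isinstance(key, rsa.RSAPublicKey):
        print(f"{path}: RSA-{key.key_size} (quantum-vulnerable)")
    elif isinstance(key, ec.EllipticCurvePublicKey):
        print(f"{path}: ECDSA on {key.curve.name} (quantum-vulnerable)")
    else:
        print(f"{path}: {type(key).__name__}")
```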

[00:14:46] Camille Morhardt: The other thing I’ve always thought about when it comes to identity is that it’s a one-to-one kind of thing. You mentioned I might have a birth certificate, a passport, or a driver’s license, but they’re all verifying me in different capacities and, in some cases, by different authorities.

[00:15:04] Kevin Bocek: Right.

[00:15:05] Camille Morhardt: But to Tom’s question around generative AI, if there is a version of me at let’s say two different decades, one, I’m in my… So we go forward into the future, one, you’re in your 20s, one, you’re in your 40s, and there’s an identity that’s of a human but it’s now existing in a digital form and it can continue to generate content. What is that? Is that a machine identity? Is that a human identity? Is it some combination? How are we classifying that kind of thing?

[00:15:30] Kevin Bocek: It’s a machine identity. And certainly it’s something of interest as we think about agency and AI, AI engines taking action on your behalf. We’ve already in some ways been doing that, much the way in… In business certainly, I mean, AI has been taking action on behalf of people. But now I think we’re really going to experience it personally, where, whether it’s our personal assistant or otherwise, it’s going to take actions based on our interests or just knowing us, predicting. And so that model, that AI engine, will have a machine identity. It may have one that is unique to us; again, the more unique an identity we can bring, the better. And yeah, that’s a machine identity. Because we use our human identities to log in, our username and password. Or, as I’ve heard recently, we’re now getting to a passwordless future of passkeys that allow us, again, as humans, to tell the machine it’s us, and then the machines take over.

So that machine that said, “Yep, that’s really Tom,” then goes on and tells another machine, which tells another machine, and another machine: “I know Tom. Taking action for Tom.” But of course that isn’t Tom. That’s a machine, and ultimately that’s some type of machine identity, which again puts a burden on businesses of the future to get this right, because what we’ve seen is that the adversary catches up fast. It’s one of the things that made Stuxnet so virulent. Remember, over a decade ago, we all heard about Stuxnet infecting the world. It was originally intended to infect Iranian nuclear facilities, but it came with a machine identity, in this case technically something called a code-signing certificate, that made it look like a piece of graphics driver software, which basically all Windows computers would run. And boom, it exploded. Again, it all comes down to how we’re going to protect those identities. The machines are going to do work on our behalf.

[00:17:51] Tom Garrison: There’s another term that’s floating around, and those of us that sort of live in this world just inherently get it, but I think some of the listeners may have heard it and not understood it, and that’s root of trust. Can you define it, also in the context of machine identities?

[00:18:10] Kevin Bocek: So root of trust ultimately says that there is a beginning of what is good or bad, the root, basically, of good. And from there on we can have inheritance of what is good, going all the way back to that root or start. And in a digital world that’s really important, because even today, with an AI engine, there’s some action of a human that starts it. So we have to have a root by which we say, again, something is good, and digitally that is a root of trust. It becomes a machine identity that says, “Yep, I can issue another identity, which can issue another identity, and so on.”

[00:19:02] Tom Garrison: Yeah. The reason I’m saying this is it is confusing when you hear about root of trust and how important it is, and then in the next sentence you hear, “Oh, we’re going to a zero trust model.” A zero trust model isn’t actually zero trust; there is still an element of trust that’s…

[00:19:18] Kevin Bocek: Right. I mean, that’s… Yes.

[00:19:20] Tom Garrison: Bringing those together in a way that people can understand I think would be helpful.

[00:19:24] Kevin Bocek: Yeah. Yeah. So zero trust, again, for me, is just always authenticating. Trust, of course, is a subjective term. Authentication, we can say, is binary. “Did we authenticate Tom at 6:00 PM? Yes or no?” “Do we trust Camille?” That’s a different question; that’s subjective. And so in a binary world, it comes down to this root where we say, at some point, “This was good. We authenticated it.” And then as it operated thereafter, it was authenticating other machines. So it was the root; we said it was good, and then it was allowed to authenticate others. You’ll also hear that called a chain of trust.

But yeah, basically we can go all the way back up and say, “Yep, originally we authenticated that,” and we can go all the way down. It’s much like how many of you listeners probably think about a blockchain: there was an initial transaction, and then all the others follow from it. One thing that’s important about a root of trust, too, is that we can revoke it. If for whatever reason an adversary takes some action, we can revoke it, which of course means we’ve then got to replace it. So in a digital world where we’ve got to have binary authentication, these roots of trust, these identities, these issuers, we need them, but we also have to be able to replace them quickly.
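A minimal sketch of the revocation check Kevin describes: fetch the revocation list a certificate advertises and look up the certificate's serial number on it. It assumes a PEM certificate on disk (the path is a placeholder), a DER-encoded CRL, and the `cryptography` and `requests` packages.

```python
# Minimal sketch: check whether a certificate has been revoked via the CRL
# it points to. "leaf.pem" is a placeholder.
import requests
from cryptography import x509
from cryptography.x509.oid import ExtensionOID

with open("leaf.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

dps = cert.extensions.get_extension_for_oid(
    ExtensionOID.CRL_DISTRIBUTION_POINTS).value
crl_url = dps[0].full_name[0].value  # first advertised CRL location
crl = x509.load_der_x509_crl(requests.get(crl_url, timeout=30).content)

if crl.get_revoked_certificate_by_serial_number(cert.serial_number):
    print("Identity revoked: it must be replaced.")
else:
    print("Identity is not on this revocation list.")
```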

[00:20:59] Camille Morhardt: I think it’s funny that you said, “We authenticated Tom at 6:00 PM,” because I think that’s something NIST recommends too. In the zero trust architecture there’s this notion of continual authentication. It’s not just, “I authenticated Tom at 6:00 PM and I’m done for the next two decades.” It’s: how frequently do you want to authenticate that?

[00:21:22] Kevin Bocek: Right. Right.

[00:21:23] Camille Morhardt: And that’s something any organization can set according to their own desire.

[00:21:27] Kevin Bocek: Right. And in the NIST zero trust architecture, and I think CISA has updated its model, that’s 2.0, you’ll see that it’s not just Tom, of course, that we’re authenticating, but everything that happens thereafter, because the value that we create in the digital world happens among machines, many machines that of course we don’t see or understand. Each one of those, in a zero trust world, also has to be authenticated. That’s one of the things we’re working to get away from, for those of us who grew up with old-fashioned networks, where we had one connection to another connection and they just always worked, just because. Now we have to say, “Hey, we always have to authenticate, because the machine on the other side might be operating in a cloud on the other side of the world. We’ve never seen it, never touched it. Is it good or bad?” It all comes down, again, just to an identity, just like when we authenticated Tom.

[00:22:28] Camille Morhardt: Just to be clear then, you would authenticate Tom, you would authenticate the device he’s logging in from, you would authenticate the application he’s using, which may be cloud-based, and then you would authenticate perhaps the cloud or cloud environment where processing is occurring also?

[00:22:47] Kevin Bocek: Right. When we think about an application, whether I’m interacting with something like Office 365, salesforce.com, or Zoom, there’s not just one application. There are many applications running; we don’t see them, but they’re all working together to produce the experience. And of course that’s true too as we think about a world of generative AI and AI agents: there’ll be a model running on compute across multiple clusters, so it’s authenticating even within itself. And when that AI talks to another machine, how does it know whether it’s authenticated or not? Again, that comes down to another machine identity. So this is something that’s a bit hard for us as humans to get our heads around, because we think about people. I can see Tom and Camille, but the idea of the machines, the possibly hundreds behind the scenes, is a lot harder for us to fathom.

[00:23:53] Tom Garrison: Well, Kevin, I think we’ll leave it there. This is one of those topics where the more you peel at the onion, the more you realize it’s all around us, in almost every aspect of our lives that is online, and yet we don’t really talk about it that much. We talk a lot about physical identity, but not really about machine identity, and yet it’s so, so critical. So thanks for spending time with us today, and we look forward to seeing where this space goes over the coming years.

[00:24:21] Kevin Bocek: Stay safe. See you.
