InTechnology Podcast

AI Policy and Implications for Enterprises (210)

In this episode of InTechnology, Camille gets into AI policy with co-host Taylor Roberts, Director of Global Security Policy at Intel, and guests Jason Lazarski, Head of Sales at Opaque Systems, and Jonathan Ring, Deputy Assistant National Cyber Director for Technology Security at The White House Office of the National Cyber Director. The conversation covers how current AI policy is taking shape around the world, how AI policies are influencing enterprise use cases, and the “system of systems” challenges to AI adoption.


To find the transcription of this podcast, scroll to the bottom of the page.

To find more episodes of InTechnology, visit our homepage. To read more about cybersecurity, sustainability, and technology topics, visit our blog.

The views and opinions expressed are those of the guests and author and do not necessarily reflect the official policy or position of Intel Corporation.

Follow our host Camille @morhardt.

Learn more about Intel Cybersecurity and the Intel Compute Life Cycle (CLA).

How Current AI Policy Is Taking Shape

Jonathan starts the conversation by highlighting three key sections of the recent U.S. AI Executive Order (EO) and how they address safety and security, advancing privacy and research development, and how the government is responsibly and effectively using new AI technologies. Jason gives his perspective on how businesses are navigating compliance with the AI EO and how they are facing challenges with fragmented data, new threat vectors, and the high cost and labor required for anonymization, masking, and tokenization. Taylor adds how the AI EO compares with policy approaches in other regions, such as the EU’s Cyber Resilience Act and EU AI Act. Jonathan then emphasizes how, at the policy level, it’s important to think about how new AI policy will set industries up for success in innovation while also providing appropriate guardrails. As for collaboration between governments and across geographies when developing AI policy, Taylor points to the Bletchley Accords as one great example, while Jason shares how the International Counter Ransomware Initiative is using an Opaque workspace to pool sensitive data from multiple EU countries.

How AI Policies Are Influencing Enterprises

The conversation then turns to how enterprises are adapting to new AI policies and standards like the AI EO, both in terms of using AI for security and ensuring security for AI. Jonathan explains how the government is helping mature technologies like trusted execution environments and trusted compute platforms through entities like NSF, NIST, and DARPA. Jason then shares how Opaque is helping its customers more easily adopt confidential computing to align with new standards, while Jonathan describes how the ONCD has been working on new approaches to ensuring security and privacy. Camille then asks about example use cases for protected data in multi-party collaboration and when generative AI models are trained on protected IP. Jason outlines financial services and HR use cases and how specific permissions need to be agreed upon when sharing encrypted data. Taylor adds how this type of secure data sharing is only possible with confidential computing, particularly in the context of AI. Taylor and Jason then explain how a containerized environment, or trusted execution environment, ensures that data remains in the container and allows for encryption in transit, at rest, and in use.

System of Systems Challenge to AI Adoption

Camille inquires about the key elements in establishing trust in a compute infrastructure or data protection space. Jonathan explains the challenges around terminology, taxonomy, and definitions in technology spaces. He details how, since the technology and its applications, frameworks, and policy are all new, there has to be agreement on what everything means, pointing to NIST as a great example of approaching AI and security in a holistic way. In a similar vein, Taylor addresses social subjectivity when it comes to understanding AI and its surrounding terminology. When it comes to implementing AI technology and collaborative policies, Jonathan calls the situation a system of systems challenge, where groups will need to learn how to set up guidelines and adopt the technology. Taylor then concludes that agencies should use new AI policies to improve the security of their operating environments.

Taylor Roberts, Director of Global Security Policy at Intel

Taylor has been a director for security policy at Intel since 2020. In his role, he leads Intel’s cyber and supply chain security policy engagement with stakeholders from global governments, the tech industry, and standards bodies. Previously, Taylor served as Cybersecurity Advisor for the Office of the Federal CIO in the Office of Management and Budget for the Executive Office of the President at The White House. He was also a Senior Researcher at the University of Oxford’s Global Cyber Security Capacity Centre. Taylor holds a Master of Pacific International Affairs from the UC San Diego School of Global Policy and Strategy.

Jason Lazarski, Head of Sales at Opaque Systems

Jason joined Opaque Systems, a confidential AI platform, as Head of Sales in 2023, where he helps businesses simplify, understand, and adopt new technologies in relation to AI. He is also an Executive Member of Pavilion. Prior to Opaque Systems, Jason led sales at Whip Around, GoFormz, Galley, MindTouch, and Leadfusion.

Jonathan Ring, Deputy Assistant National Cyber Director for Technology Security at The White House Office of the National Cyber Director (ONCD)

 

Jonathan has been with the ONCD since 2022, joining first as Director of Operations and Incident Response before taking on his current role as Deputy Assistant National Cyber Director for Technology Security. He has spent more than a decade specializing in leading teams to solve complex technical and organizational challenges and crafting cybersecurity policy for supply chain, AI, and advanced threat actors. Jonathan’s education includes a Master of Law and Technology from Georgetown University Law Center in addition to Bachelor’s degrees in Information Sciences and Technology as well as Security and Risk Analysis from Penn State University.

 


Camille Morhardt 00:12
Hi, I am Camille Morhardt, host of the InTechnology podcast. Today we’re going to talk about confidential AI and its policy. I’m co-hosting with Taylor Roberts, who’s Director of Global Security Policy at Intel. He has also been a cybersecurity advisor for the White House and a Senior Researcher at the Global Cyber Security Capacity Centre at Oxford University. Welcome to the podcast, Taylor. Who have you brought to the table?

Taylor Roberts 00:39
Thanks very much for the introduction, Camille. We have both sides of the policy and technology spectrum here in our esteemed guests. We have Jason Lazarski on the industry side, who is the Head of Sales at Opaque Systems. Jason really looks at simplifying the landscape of confidential computing as it relates to AI, understanding how you can actually transform some of these privacy enhancing technologies and apply them to the AI context that the policy aspects are then looking to address through regulatory, incentive-driven, or a variety of other mechanisms. Which leads me over to Jonathan, who is the Deputy Assistant National Cyber Director for Technology Security at the White House’s Office of the National Cyber Director, who amongst many other things really leads on the development and the implementation of the AI executive order, which was released earlier this year. So, a really excellent group of guests, and I’m looking forward to the discussion.

Camille Morhardt 01:37
Wow. I wonder if maybe we could just start with Jonathan. Could you give us an overview of some of the critical pieces of that executive order? And if companies are now scrambling, what are they scrambling over?

Jonathan Ring
Yeah, thanks Camille. Thanks, Taylor. As Taylor mentioned, the administration released the AI Executive Order a little over six months ago, which really focuses on outlining the administration’s efforts to both lead the way in seizing the promise of AI, but also, of course, managing the risks of the technology as well. There are a couple of sections that I want to highlight. First, really, Section Four, which covers safety and security, obviously an important topic. It talks about standards development, best practices for deployment, and red teaming of AI, and it talks about visibility into foundational, or foundation, model security. It also talks about the use of AI in critical infrastructure and cybersecurity.

On the privacy side, there are these new primitives arising based off of the data aspect of this technology that we need to contend with in ways we’re not used to. Section Nine really talks about advancing privacy enhancing technology research, development, and implementation, which has been a core theme of the administration’s efforts on this front. It looks at how you evaluate the effectiveness of those technologies, and then also looks at how agencies, specifically in the federal government, collect and use commercially available information, especially information that incorporates PII.

Then Section 10 of the executive order really looks at how is the government responsibly and effectively using this technology, right? What are the governance frameworks throughout the inter-agency, departments and agencies? How are we advancing responsible innovation with this technology? Then how are we managing those risks, especially when they start to touch on rights or civil liberties.

Camille Morhardt 03:40
I guess I’ll ask that second question to Jason, which is how are companies scrambling right now based on this EO that’s come out with so many different sections, and I know not everything is extremely precise in it. There’s a lot of guidelines and recommendations. So, what are companies struggling with right now?

Jason Lazarski 04:00
Yeah, what we’re finding here at Opaque is that there’s obviously a huge rise in appetite for data, but it’s absolutely colliding with security and privacy professionals. So, in terms of the enterprise and getting projects into production, they’re finding huge roadblocks and hurdles getting those projects from a pilot phase into a production phase. What they’re finding quickly is that it’s not a problem with the AI or the model. It’s actually a problem with the data itself. So, as we talk to large enterprises–specifically security professionals, privacy professionals–there are really three challenges beyond just the regulatory landscape that they have to worry about. One, their data is fragmented in a multiplicity of silos. So, within each silo of these large organizations, there are different policies, there are different procedures, there are really even different definitions of what they consider PII. So, there’s a huge cultural challenge that they’re dealing with. So how do you operationalize sensitive confidential data, which just so happens to be their most valuable data, across these different silos?

Secondly, there’s actually a new threat vector, as well. So, with the rise and the sophistication of AI, it’s never been easier for these models to actually re-identify data. So now there’s a risk of the model itself re-identifying data and having your data leak that way. Then the third challenge that we’re finding, which is pretty common, is that the traditional tactics of anonymization, masking, and tokenization are very laborious. They take a long time, they’re very costly, and unfortunately the insights and the accuracy of the models suffer, as well. So, for Opaque, our unique innovation is that we’re actually bringing the ability to process confidential data with end-to-end encryption.

Taylor Roberts 05:51
So just as a quick aside, there is also a broader, not just US-specific, landscape that should be brought up here as well. So in the European Union, they passed the Cyber Resilience Act, but also the EU AI Act. The EU AI Act focuses a lot more on types of risks and prohibiting certain AI systems that could potentially pose specific types of risks, but it’s a market access related regulation. Same with the Cyber Resilience Act, which is a product security regulation focusing on restricting market access to those technologies and products with digital elements that can meet specific security requirements and actually attest to the security associated with those requirements.

And these sorts of policy efforts to try to better get a grasp on AI as it evolves, it’s an amazing opportunity as well as an amazing challenge, I think, because policy, at least in my experience, has often lagged behind the rate of technological innovation. But I think that the way that ONCD has taken the AI executive order–there are a lot of acronyms, by the way, so if I say an acronym, just let me know–the AI EO is good because it really takes a step back and says, well, how are we going to evaluate what it is we as government need to do? What is our role in this overall ecosystem? What are some of the challenges that industry can help us solve, with specific attention to privacy enhancing technologies and ensuring that we are taking into account the security of that sensitive data? As well, as Jason brought up, there are the security implications of some of these models and how adversaries can use some of these new techniques to exploit some of our systems. I think it’s an opportunity for us to work together in ways that might be more difficult when you have a more entrenched technology.

Camille Morhardt 07:47
I am interested in–I think different governments are going to start wanting to maintain IP, essentially, both in hardware and in software in the development of AI. So rather than this technology being shared universally as it’s developed, I think possibly for the first time, this is a technology where the world might see it being developed very, very differently in one geography from another geography. I’m wondering, is that happening? And how do we approach this from a policy perspective?

Jonathan Ring 08:23
I think that’s a great question. I don’t know that we’ve faced a similar challenge in the history of technology development. There’s something so specific to the way that somebody decides to design and develop an AI system, which is so critical, I would say, because the data that’s being used is such a large input and moves the needle on the output of the system in such a significant way. And I think because of that reason, the US and the administration have been very forward leaning in making sure that we’re looking at this through a global lens–that we’re working, that we’re building coalitions with partners and allies across the world to understand not only how they’re dealing with it and what challenges they’re facing as they start to identify how they want to not just internalize but externalize this technology, but then how we’re aligning to make sure that the ways people are approaching this are beneficial for each individual geographic region, for each individual political entity, but then also for the users of that technology.

I don’t know that we have a perfect answer to the question that you posed, but part of the reason for that is because the technology is still developing so rapidly, as well. It’s not just the application of the technology that we have to deal with, but the pace of technological change is rapid and continuing to increase. And so at the policy level, we really have to think through how are we setting ourselves up for success? How are we setting the industry up for success by crafting frameworks that are flexible enough to accommodate that sort of technological change, that rapid development, but also giving us the guardrails that allow us to shape the technology in a way that aligns with the values and institutions that we rely on every day?

Taylor Roberts 10:18
I just want to put a fine point on that. There are some specific points of evidence that you can point to. So in addition to the high-level Bletchley Accords that took place at the tail end of last year, where the US, EU, and other strategic partners agreed to very high-level AI principles, there was also a more concrete document released, I think, a few weeks later–guidelines for the development of secure AI systems–which a few US government agencies, the EU, Japan, Singapore, and a few other governments all contributed towards. Again, it’s not specific controls; it’s really focused on the sort of secure-by-design principles that I think have been embedded across many different countries, but focused and tailored specifically to some of the challenges of AI.

That’s not to say that everything is super harmonious and we have no discrepancies in international policies, but I think it matters that, from the sort of impetus in the latter half of last year, governments were saying, “we want to really get a better handle on it and we want to do it in a way that, at the very least, doesn’t conflict with one another, so that companies that are looking to comply or are looking to innovate in this space aren’t severely restricted in doing so.”

Camille Morhardt 11:38
Can you just articulate what are some of the differences when you’re looking at security or security policy for AI versus other kinds of compute or software?

Taylor Roberts 11:50
Sure. It’s probably pretty emblematic of the broader policy style of both regions. I’d say that if you were to compare the EU AI Act and the AI EO–my alphabet soup of the day–you see that the AI EO is more about understanding the scope of the problem and working together with industry to try to put some broad guardrails on it, whereas the EU AI Act focuses a lot more on market access and regulatory barriers. So less on the sort of executive and strategic view and more on restrictive market access controls. It’s also emblematic if you look at product security within both regions, right? The Cyber Resilience Act in the EU is a broad, sweeping piece of regulation that covers a very large segment of technologies–products with digital elements, as I mentioned before–and requires conformity assessment processes, whereas the US has adopted a labeling approach, which is voluntary at this point and focuses on a narrower scoping of internet of things devices. So, there is some harmony there, but obviously some differences in how the two regions regulate.

Jason Lazarski 13:02
So, on the ground floor, from a technology and use case perspective, at Opaque we’re already seeing this. We work with the International Counter Ransomware Initiative, which actually started from a cybersecurity agency out of Italy. The use case there is that they’re pooling sensitive data across 30 different EU countries in an Opaque workspace and running models against that data to find behavioral patterns in ransomware to prevent cybersecurity attacks. So, it’s something that we’re already seeing come into practice, specifically over in the EU, where stricter regulations are obviously driving that.

Camille Morhardt 13:41
So, you’re also seeing AI be used for security, as well as flipping it on its head, right? We’re talking about security for AI, and you’re saying, well, actually we’re using AI to detect potential threats?

Jason Lazarski 13:53
A hundred percent.

Taylor Roberts 13:54
When you look at the landscape of privacy enhancing technologies, Jonathan, are there specific challenges a) that the government is looking to address with some of its objectives within the AI EO? And then Jason, b) where do we feel privacy enhancing technologies are going in ways that can support policy objectives? In whatever order you choose to go.

Jonathan Ring 14:14
I think it’s really important not only to understand–and Jason, I’ll be really interested to hear your thoughts on this–where they are, but it’s also our job to sort of forecast where we think the problem space will exist and where the solutions in the next five to ten years might be to address it. I think trusted execution environments, trusted compute platforms, are one of those examples. I think ten years ago, if you had looked into this technology, for a variety of reasons, it just wouldn’t have been practical or really operationalizable for a business based off of the use cases that we’re seeing. In theory it sounded great, but it just wasn’t really a mature solution from a technological perspective. But now, ten years later, you’re seeing companies like Opaque and others who are really taking this and running with it based off of the advancements that have happened.

The government, I think, is a key player in making sure that we realize that opportunity of the technology–whether it’s through NSF funding, which the EO talks about in the privacy enhancing technologies area, or NIST, who’s working on standardization, best practices, and understanding gaps and where industry is. You’re looking at DARPA or some of the other national labs who have work that’s focused on this space. I think it’s an amazing role that the federal government can play, and it’s one that we love to focus on, but the challenge there is understanding that forecast of the problems this is going to solve versus the actual realities, or what’s within the realm of the possible.

Jason Lazarski 15:53
Jonathan, I think you nailed it. The innovation of processing confidential data with end-to-end encryption has actually been around for decades. Two techniques stick out–one is secure multi-party computation; the other is homomorphic encryption–and they’ve been around since the late 1970s. What’s happened with those types of technologies, though, is that in terms of compute efficiency, it’s been very, very low. In terms of ease of use, it’s been very, very low. So not really widely adopted at all, right? You’d have to hire experts in cryptography. You’d have to maintain those experts. You’d have to pay them over time. Very, very difficult to do.

Obviously now confidential computing is seeing a lot of momentum in the market. Confidential computing in general is actually very, very difficult to get going and to set up; there’s some gory details around key management, cryptography, remote attestation that customers and companies just don’t want to worry about. So, for us at Opaque, we’re putting the Easy button on this. We’re stripping out all those gory details and bringing to market more of a turnkey solution so folks can get access to insights on their confidential data in a couple of days, which we’ve seen as a complete game changer for the large and very large enterprise customers that we work with.

Jonathan Ring 17:11
To piggyback off that a little bit, I think one of the reasons that things like privacy-enhancing technologies are so relevant in this conversation around AI is, as I mentioned, this new primitive–which was an old primitive–which is data, and the impact that the privacy and security of that data has on the ability of users to operationalize this technology is, I think, new. From a cybersecurity perspective, from a threat vector perspective, it also opens up a whole new realm of possibility that attackers can use to attack a system. So that’s a good overlap when you talk of the security benefits of privacy or the privacy benefits of security.

But there are other really interesting technologies, whether you want to talk about, as was mentioned, homomorphic encryption, or zero knowledge proof type things. ONCD has been doing a lot of work on formal verification, these sorts of mathematical approaches to ensuring security and privacy, and the technology is making strides, and it has done so recently. So, it’s been great to have more of these conversations and apply them to the current problems we’re seeing.

Camille Morhardt 18:25
I want to take a look at two different use cases that are a little bit opposite. One, Jason referenced multi-party collaboration, and the other one is more like generative AI, or maybe GPT models, that are being trained on very specific, highly protected IP within a company. So, one is your own data in your own silo being trained on to preserve that IP. Then the other one is, how do I preserve my IP when I’m trying to collaborate with coopetition or even competitors to get more insights out of data that we can all benefit from?

I’m wondering if one of you can give an example of what you mean when you say multi-party collaboration. What does that look like, and how do we protect that kind of thing using this technology that you’re talking about?

Jason Lazarski 19:15
Yeah, I can start with that. So, we’re seeing a lot of different use cases within multi-party collaboration. These use cases could be external to your organization–so you could share data across external partners, combine those data sets, and run your AI/ML or just any basic analytics on those data sets. More often than not, though, we’re actually seeing this within an organization internally, and I’ll point to two specific use cases for that. One is in the financial services sector, where we have clients who have a lot of challenges and trouble sharing data between, let’s say, a commercial lending team and a payments team. And it goes back to the challenge that different silos have different policies, different procedures, different definitions. It takes months and months to get that data in a good place where they can actually share it.

So, with confidential computing and specifically Opaque, the approach that we take is actually the opposite approach, where you encrypt that data at the source, or encrypt that data locally, and then share that data. So instead of blacklisting that data, what you’re doing is actually whitelisting that data. The security posture that you’re starting with is “deny,” and then you’re choosing, based on the folks who are in that workspace and sharing that data, what data you want to specifically share with that other party.

The last use case I want to highlight is actually in high tech, where we’re seeing a lot of HR use cases. Obviously, HR data is very sensitive in terms of payroll data, performance data, review data, and a lot of exit survey data in the large enterprise, and they’re actually having problems sharing that data between their different silos. So, picture data over here in Workday and data someplace else in the organization in Snowflake, right? No one is really allowed to access that data. So how do you share it between those different sources? Again, it’s a similar approach with confidential computing: you’re starting at an encrypted deny state, and then you’re choosing which columns and which attributes to share, to then run those models against that data. So specific permissions have to be presented and agreed upon to actually share that data.
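(A rough sketch of the deny-by-default, column-level sharing model Jason describes. This is not Opaque’s actual API; the policy table, column names, and function below are invented purely for illustration.)

```python
# Hypothetical illustration only: a deny-by-default, column-level allow-list
# for multi-party data sharing. Not Opaque's API; all names are invented.
import pandas as pd

# Each data owner declares which columns a given partner workspace may use.
# Anything not listed is denied by default.
SHARING_POLICY = {
    "commercial_lending_team": ["customer_id", "loan_amount", "delinquency_flag"],
    "payments_team": ["customer_id", "monthly_volume"],
}

def share_view(df: pd.DataFrame, recipient: str) -> pd.DataFrame:
    """Return only the columns the recipient is explicitly allowed to use."""
    allowed = SHARING_POLICY.get(recipient, [])          # default: share nothing
    permitted = [col for col in df.columns if col in allowed]
    return df[permitted].copy()

# Example: the raw table holds sensitive columns that never appear in any view.
raw = pd.DataFrame({
    "customer_id": [1, 2],
    "ssn": ["123-45-6789", "987-65-4321"],   # sensitive: not in any allow-list
    "loan_amount": [250_000, 80_000],
    "delinquency_flag": [False, True],
    "monthly_volume": [12_000, 3_500],
})

print(share_view(raw, "payments_team"))       # only customer_id, monthly_volume
print(share_view(raw, "unknown_partner"))     # empty: deny by default
```

In a real confidential computing deployment, the data would additionally stay encrypted end to end and the policy would be enforced inside the trusted environment rather than in ordinary application code; the sketch only shows the “start from deny, then allow specific columns” idea.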

Jonathan Ring 21:26
I think from the federal perspective, one of the really exciting things is there are a lot of problems, particularly within cybersecurity. Jason mentioned cyber threat intelligence sharing. There are a lot of problems around sharing data that have some similar contours, right? Once we can understand what does the technology allow us to do in a new way, in a secure or private way that we didn’t have before, that unlocks a lot of opportunity for more visibility across an entire digital ecosystem–whether it’s through a software supply chain or whether it’s data that allows us to measure the outcomes of policies that we’re looking to enact. It’s really exciting, I think, looking at it from the policy side. And once we can sort of decompose these problems into understanding what the commonalities are across them, I think we unlock a lot of value of the technology and that builds momentum to allow it to continue to be commercialized, to continue to invest in it from the government side because people start seeing that value.

Taylor Roberts 22:34
One thing I’d add is that with confidential computing, when you’re using it in the context of AI, as Jason was mentioning, you’re talking about encrypting that data in use. You can then activate hardware-based isolation and access control, which lets you do things around federated learning and attestation towards trustworthiness.

When you look at it from a policy context, you have regulations like HIPAA governing the data controls of healthcare organizations, and patient healthcare data cannot leave those very specific environments; but with the appropriate confidential computing controls in place, you don’t have to worry about that data leaving that specific environment. So, you’re able to have sector-specific applications, whether it be the healthcare, energy, or finance sector. I think there are a bunch of different departments and agencies within the overall federal executive space that could benefit from applying these technologies in the context of AI.

Camille Morhardt 23:28
Are you saying you don’t have to worry about the data leaving its space because it’s not leaving its space–you’re doing something like federated AI? Or are you saying you don’t have to worry about it because it facilitates protection of the data in use, even if the data is no longer physically in that location? Are you saying we’re creating a new virtual location that we can protect?

Taylor Roberts 23:50
Well, I’ll speak a bit, and then Jason, feel free to jump in. From a federated AI perspective, you’re creating a containerized environment to ensure that that data remains within that container. So for example, if you’re worried that within your cloud operating environment you have specific data that you want to make sure no one else has access to, you can spin up this containerized environment and encrypt the data while it is in transit, at rest, and in use, so you have more assurance than you ever really did before that the confidentiality and the integrity of that data is maintained–and you can have a third-party attestation service to help further augment that.
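(To illustrate Taylor’s point that raw data stays in its protected environment, here is a minimal federated-averaging sketch. It involves no real TEE or confidential computing stack and the names are invented; the point is simply that only model weights, never the underlying records, leave each silo.)

```python
# Minimal federated-averaging sketch (illustrative only; no real TEE or
# confidential-computing stack involved). Each "silo" keeps its raw data
# local and shares only model parameters with the aggregator.
import numpy as np

def local_train(weights: np.ndarray, X: np.ndarray, y: np.ndarray,
                lr: float = 0.1, steps: int = 20) -> np.ndarray:
    """Run a few gradient steps of linear regression on one silo's data."""
    w = weights.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w  # only the updated weights leave the silo, never X or y

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
silos = []
for _ in range(3):                         # three isolated data owners
    X = rng.normal(size=(100, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    silos.append((X, y))

global_w = np.zeros(2)
for _ in range(10):                        # federated rounds
    updates = [local_train(global_w, X, y) for X, y in silos]
    global_w = np.mean(updates, axis=0)    # aggregator averages the updates

print("learned weights:", global_w)        # approaches [2.0, -1.0]
```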

Camille Morhardt 24:25
Verifying that the hardware environment is what it says it is, that the software running within it is the software that you expect it to run, and that you’re not having injections of additional data or withdrawals of data that you want to remain secure.

Jason Lazarski 24:39
You nailed it. That data is then isolated in a trusted execution environment, like Taylor mentioned, and what that brings to the table is the concept of a hundred percent verifiable trust, where every action can now be audited and verified, which is massive for these large organizations.
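(A toy sketch of the attestation check Camille and Jason describe: measurements reported by the environment are compared against known-good values before any data key is released. The field names and values are invented; a real remote-attestation flow also involves hardware-signed quotes and a verification service.)

```python
# Toy sketch of the attestation idea: compare the measurements reported by an
# enclave against known-good values before releasing a data key to it.
# Field names and values are invented for illustration only.
import hashlib

EXPECTED = {
    # hash of the approved workload binary
    "code_measurement": hashlib.sha256(b"approved-analytics-workload-v1").hexdigest(),
    # identifier of the trusted hardware/firmware configuration
    "platform_id": "trusted-tee-platform",
}

def verify_report(report: dict) -> bool:
    """Accept the environment only if every measurement matches expectations."""
    return all(report.get(k) == v for k, v in EXPECTED.items())

def release_data_key(report: dict) -> str:
    if not verify_report(report):
        raise PermissionError("attestation failed: environment not trusted")
    return "wrapped-data-encryption-key"   # placeholder for a real key release

good = {"code_measurement": EXPECTED["code_measurement"],
        "platform_id": "trusted-tee-platform"}
bad = {"code_measurement": hashlib.sha256(b"tampered-workload").hexdigest(),
       "platform_id": "trusted-tee-platform"}

print(release_data_key(good))              # key released
try:
    release_data_key(bad)                  # raises: measurements don't match
except PermissionError as err:
    print(err)
```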

Camille Morhardt 25:00
What would you say are some of the–and maybe I’ll ask Jonathan this; to your mind, what are some of the key elements in establishing trust in a compute infrastructure or data protection space?

Jonathan Ring 25:14
I was actually going to bring this up. One of the challenges that we face often is really around terminology–sort of taxonomy, definitions–particularly in the technology space, and we’re seeing this a lot in the AI conversation. When we start to talk about security and privacy and even trust and trustworthiness–which has become a really important part of the discussion–we’re sort of borrowing terminology from what we know and what we’ve experienced in the cybersecurity context. So, one of the challenges which we continually have to deal with is that, as we’re discussing a new technology, new applications of technology, or even frameworks or policy for setting up guardrails for a technology, we have to make sure that we really agree about what we’re talking about. I think trust is one of those pieces that is really hard to nail down, especially in an AI context.

The National Institute of Standards and Technology, NIST, has done a lot of really great work on making sure that when we talk about best practices for, let’s say, red teaming an AI model or security assessments, we’re talking about it in a really holistic way. If we talk about trust, it’s not just, can we verify that the integrity of that data or that piece of software has been maintained, or that the confidentiality of it has been maintained? There’s this sort of socio-technical aspect that we have to incorporate and accommodate into these processes that is unique to the AI context in a way that we haven’t always had to account for in a classic cybersecurity context.

Camille Morhardt 26:59
How do you do that?

Jonathan Ring 27:00
I think it’s a long process. Honestly, I don’t think it happens overnight. That’s part of the challenge that Taylor raised, which is that policy tends to be a bit of a lagging indicator from where the technology is, because it takes a lot of alignment, a lot of understanding of the different perspectives across the community and the ecosystem, because you can’t have a policy for every single person or a policy for every single organization. So, you have to really ensure that it’s a consensus process–that it’s not just the federal government who’s talking about this, not just industry, not just the academic community or civil society, but that it’s really this multi-stakeholder, even international, process to align on what we mean when we say “trust” in an AI context, or trust in a privacy enhancing technology or trusted execution environment context, and really be clear, to the extent that there are lines to draw, about where we draw those.

Camille Morhardt 27:57
Yeah. I would be interested, Taylor, in your opinion on this also, because it’s like you’re adding another layer of social subjectivity–which I think you always do, I suppose, in policy, when you’re taking a stand as to what needs to be revealed or protected–but it seems AI may have yet another level of that to it.

Taylor Roberts 28:21
You take into account the social aspect of it, but then also, at least when you’re looking at it from a federal level, or even a critical infrastructure level, you’re looking at what some people call mission-critical functions–what are you using them for, right? It’s going to be so wildly different depending on your organization, the sector you are in, the sorts of customers you have, whether you’re citizen-facing. And the scale at which you can impact innovation, and the speed at which you can impact innovation, has grown to such a degree that it introduces a large amount of challenge, but also some really interesting use cases. Jason, you’ve mentioned a few, but there are some really great use cases out there where you can apply some of these technologies in ways that may not have had as demonstrable of an impact before the advent of some of these technologies.

Camille Morhardt 29:11
Yeah. Jason, are you struggling with helping customers implement AI differently in different geographies? Because we have, as we keep referencing, I know there’s some collaboration, but we have actual different acts and legislation and executive orders in different countries around the world that don’t necessarily agree.

Jason Lazarski 29:30
Great question. We’re still learning things and we’re still finding things, but absolutely, we’re getting waves of inquiries from India, from Europe, as things are changing over there and they’re in learning mode. They’re not too familiar with confidential computing. So going back to that education process, it’s a lot of educating and teaching them that we exist. So again, it’s pretty early stages with that.

Jonathan Ring 29:55
And maybe to build off of what Jason just brought up, I think it is important–especially for us on the policy side–to understand that this is going to be a process, like I said, not just because of the rate of change of the technology, but because, as with any complicated issue, this is a system of systems challenge, right? As this technology starts to be implemented in the real world, there are going to be not just emergent properties of, let’s say, the AI models themselves, but emergent issues that arise based off of the interactions between these really complex systems. So, it’s important for us not to take our eye off the ball.

The pace of work, of attention, of the conversation hasn’t slowed down. In fact, it has probably only accelerated. Everybody is focused not just on how we set up these guidelines, but on how we actually adopt this technology–how people actually adopt the technology so that they’re not scared of deploying it–because we understand that you’re only going to be able to tease out some of these issues, and to identify whether this policy is actually the right approach, if we engage with it and try to implement it for our purposes.

Camille Morhardt 31:09
So, what keeps you up at night, Jason?

Jason Lazarski 31:11
I think generative AI and the sophistication of those models creating another attack vector, another threat, I think that’s scary for everyone, to be quite honest. And I think that as people quickly learn that those traditional tactics of masking and tokenization and anonymization are not only a pain but pretty much obsolete, that’s scary for everyone across industries.

Camille Morhardt 31:36
Taylor, any final thoughts?

Taylor Roberts 31:38
I want to keep all of this in mind when we start looking at future priorities when it comes to security policy. This administration has done a very good job, as I said at the beginning of this discussion, of ensuring that industry is engaged when it comes to better understanding the overall AI security policy landscape; but we also need to ensure that the capabilities that can really drive change are accessible at the end of the day. We need to make sure that agencies are able to use these to better improve the security of their operating environments. So, we just want to make sure that we keep the foot on the gas as much as we can.

Camille Morhardt 32:18
Thank you, Jason from Opaque. Jonathan from, oh my gosh. You’re going to have to repeat the name of the extremely long acronym that you represent.

Taylor Roberts 32:28
ONCD.

Jonathan Ring 32:29
The Office of the National Cyber Director.

Camille Morhardt 32:32
Okay. And Taylor, who is Director of Global Security Policy at Intel. Thank you all so much for your time. Very interesting conversation.

Jonathan Ring 32:40
Thanks Camille.

Jason Lazarski 32:42
Thanks Camille.
