InTechnology Podcast

Deep Dive: US Executive Order on Artificial Intelligence (181)

In this episode of InTechnology, Camille explores the recent US Executive Order on artificial intelligence with Divyansh Kaushik, Miranda Bogen, and Chloe Autio.

Divyansh Kaushik is an Associate Director for Emerging Technologies and National Security at the Federation of American Scientists. Miranda Bogen is the Director of the AI Governance Lab at the Center for Democracy and Technology. Chloe Autio is an independent AI policy and governance consultant. They explore the impact of the Executive Order on AI innovation, foreign governments & companies, and the private sector.

To find the transcription of this podcast, scroll to the bottom of the page.

To find more episodes of InTechnology, visit our homepage. To read more about cybersecurity, sustainability, and technology topics, visit our blog.

The views and opinions expressed are those of the guests and author and do not necessarily reflect the official policy or position of Intel Corporation.

Follow our hosts Tom Garrison @tommgarrison and Camille @morhardt.

Learn more about Intel Cybersecurity and the Intel Compute Life Cycle (CLA).

Controversies in the US Executive Order on AI: Unpacking Unanswered Questions

As the recent Executive Order on AI unfolds, controversies remain regarding its implications and execution. Divyansh explains that one notable aspect is the directive to study the impact of AI on the workforce, including the thorny issue of workforce compensation in the event of job displacement. He emphasizes that as the technology advances, the future use and impact of AI in the workforce may differ from what we see today.

Divyansh adds that another concern is the lack of clear definitions in certain reporting requirements. For instance, can the government halt the release of a company’s models if it deems the company’s measures inadequate? Also, US cloud providers like AWS are required to notify US authorities if foreign persons are using their services to train a dual-use AI model.

While this requirement may be intended to apply only to projects with commercial use, Divyansh believes that a plain-text interpretation could also sweep in individual foreign researchers. He points out that the latter reading contradicts Section 5 of the Executive Order, which centers on attracting global talent.

Implications of the US AI Executive Order on the Private Sector and Foreign Governments

Miranda says that there are two major potential implications for private companies, particularly those planning to build or already building cutting-edge technology. The first is the reporting requirements, the extent of the US government’s intervention, and the potential oversight challenges they may pose. The second is that the US government is a major source of contracts for the private sector. Through the Executive Order, the government aims to define the requirements that vendors developing AI for the government should meet. Miranda expects these requirements to influence the standards AI contractors operate under more broadly.

The Executive Order also has national security implications, according to Divyansh. There are strict controls on semiconductor and chip exports to countries like China. Despite this, people in those countries can use American open-source AI models for various purposes, which he believes undermines the purpose of the chip controls.

Chloe believes that foreign governments will react to the US Executive Order. She explains that it is the most robust action any government anywhere has taken to date on artificial intelligence, and it sets the pace for AI requirements globally. Foreign governments may therefore cautiously position themselves and react to these requirements.

Divyansh Kaushik, Associate Director for Emerging Technologies and National Security 


Divyansh Kaushik is the Associate Director for Emerging Technologies and National Security at the Federation of American Scientists. He is also an Advisory Council Member at the Krach Institute for Tech Diplomacy at Purdue. Before this, he was a Research Intern at Salesforce and an Applied Scientist Intern at Amazon Web Services (AWS). Divyansh holds a Ph.D. in Artificial Intelligence as well as an M.S. in Language Technologies (Artificial Intelligence) from Carnegie Mellon University.

Miranda Bogen, Director, AI Governance Lab


Miranda Bogen is the Director of the AI Governance Lab at the Center for Democracy & Technology in Washington, DC. She focuses on developing and promoting the adoption of robust solutions for the effective regulation and governance of AI systems. She formerly worked at Meta as the Policy Manager for AI/Machine Learning and then as the Policy Lead for Fairness & Equity. She holds an M.A. in Law and Diplomacy from The Fletcher School at Tufts University. Miranda earned her B.A. in Political Science at UCLA and was a cross-registrant at Harvard Law.

Chloe Autio, Independent AI Policy and Governance Advisor


Chloe Autio is an independent AI policy and governance consultant based in Washington, D.C. She advises leading AI and tech organizations, as well as government and civil society groups, on AI policy and oversight initiatives. She formerly served as Director of Policy at The Cantellus Group. Before that, Chloe headed public policy at Intel as Director of Public Policy, progressing from her earlier roles there as Public Policy Manager and Analyst. Chloe earned her B.A. in Economics from the University of California, Berkeley.


Announcer  00:00

You’re listening to InTechnology, your source for trends about security, sustainability and technology.

Miranda Bogen  00:12

Looking at the last decade of technology, I think everyone regrets a little bit that we weren’t more proactive and had more foresight around what could go wrong. And we’re trying to clean that up now. And we’re trying to get ahead of that when it comes to AI.

Camille Morhardt  00:29

I’m Camille Morhardt, host of the InTechnology podcast, and today we’re going to do a deep dive on the recent US Executive Order on artificial intelligence. I’ve got three experts here with me today. I have Divyansh Kaushik; he is Associate Director for Emerging Technologies and National Security at the Federation of American Scientists. He has a PhD from Carnegie Mellon in artificial intelligence. Welcome, Divyansh.

Divyansh Kaushik 00:57

Glad to be here. 

Camille Morhardt  00:58

We also have Miranda Bogen, who is the founding director of the AI Governance Lab at the Center for Democracy and Technology. She previously led elements of artificial intelligence and machine learning policy for Meta and has also worked in civil society in various roles. Welcome.

Miranda Bogen  01:19

Thank you for having me.

Camille Morhardt  01:21

And we also have back on as a guest, Chloe Autio.  Great to have you back. She is an expert on artificial intelligence policy. And she has also developed governance programs for various companies for artificial intelligence. Welcome, Chloe.

Chloe Autio  01:35

Good to be here again, Camille. 

Camille Morhardt  01:37

So this US Executive Order just came out. Can somebody please just give an overview? What is this executive order? Why is it an executive order as opposed to legislation, and what’s sort of driving the timeframe for it?

Divyansh Kaushik  01:52

Congress is still working on its AI policy framework legislation. Several pieces of legislation have been proposed in Congress, but Congress has not enacted anything around governance or innovation in AI this year. The President has the authority to issue executive orders to ask agencies to do things within the existing legal authorities that Congress has provided him. So the President has, for instance, asked several agencies to clarify how certain guidelines or certain laws apply to artificial intelligence, or he has asked certain agencies to engage in a rulemaking process to change some of the ways they implement existing laws.

So obviously, this order has a lot going on. It’s tackling innovation; it’s looking at security and safety; it’s looking at workforce; it’s looking at data privacy; it’s looking at a lot of things. What the executive order is communicating very clearly is that the long-term development of this technology requires public trust, because we are kind of living in a moment of general techno-pessimism. And that’s, in my view, why the President has taken this initiative: not to wait for Congress to pass legislation, which may not even happen this Congress, but to use the existing authorities he has to ensure that the way the technology is being developed secures a prosperous future for everybody, not just a select few, and does not harm our national security. So that’s my view; I’m sure Chloe and Miranda have more to say on that.

Miranda Bogen  03:36

One important thing to remember is that this administration has been working on the topic of AI for quite a while. They delivered the Blueprint for an AI Bill of Rights, which had a lot of really substantive recommendations around the rights that Americans ought to enjoy when it comes to AI, and really dug into what that looks like in implementation as well. The challenge with that was that it’s not a binding policy vehicle. It couldn’t require agencies or the private sector to take any particular action.

But that was, I think, last fall, in a very different era of the AI policy conversation than we’re in today. Late last year, when generative AI became all the rage, there was quite a bit of attention on some of the even further-down-the-line risks that might come from this technology. There was a lot of activation around the world in the discussion of what regulation might look like. And I think that put an impetus on the administration to add more teeth to its interest in the topic where possible.

I agree with Divyansh that legislation is going to take a while. Congress hasn’t even passed comprehensive privacy legislation, and one would think that would be a foundation for regulating AI data, and we’re still working on that. And so I think that’s why the administration really sprinted to put together this really voluminous and comprehensive order. I think it’s one of the longest and most comprehensive executive orders that an administration has put out, especially on a technical topic. And that really just speaks to the fact that so many people are seeing this topic as critical for someone to address.

Camille Morhardt  05:17

So, I do want to come back around to what Divyansh was saying, breaking it up into the categories of innovation, talent, security, and society, and walking through those; I definitely want to look at private sector implications. Before we dive into those, Chloe, are there any takeaways that you have on it that we should be considering?

Chloe Autio  05:36

Yeah, I would just really broadly echo the things that both Divyansh and Miranda have said, but really want to pinpoint the fact that none of this work is happening in a vacuum, right? A lot of federal agencies, particularly in the absence of congressional legislation, have come out and said, “Here is where we understand our authority to be in this space, and maybe where we are able to bring some types of enforcement actions or different types of guidelines that can help industry or other organizations understand what the expectation is here.” But those have been really on sort of an agency- or department-specific basis. And I think that what this executive order has done is really plant a flagpole of leadership and say, “Look, you know, we’re going to coordinate and bring all of this together, and do so also in coordination with the Office of Management and Budget,” which, you know, sort of provides oversight and authority to the work of a lot of federal agencies.

And OMB also released a memo in conjunction with the executive order this week to help guide different agencies in managing the risks that will arise with their own AI use, as well as giving them guidance on how to further adopt this technology. But I think the takeaway for me is just that there was a lot of work already happening on AI governance and AI policy at different agencies, each within its own remit. And this executive order really pulls that all together and outlines, even in a very formulaic, section-by-section way, where new guidance, new reporting, or new rulemaking might be necessary, and really builds upon those existing efforts.

Miranda Bogen  07:05

One thing I would add is just that, while there isn’t necessarily foundational AI-specific legislation, there are a variety of laws that do apply when it comes to AI, and executive agencies have already articulated their intent to use their authorities under those existing laws. Some examples of that are civil rights laws, which apply regardless of the technology that’s used. And so what the EO did was reinforce that and direct the agencies to continue doing that, to coordinate, to share learnings, and to point out where there’s still work to be done.

Camille Morhardt  07:41

So Divyansh, do you have an opinion as to what is controversial? Or if there’s anything controversial in the EO? 

Divyansh Kaushik  07:49

Yes. (laughs) So, for instance, part of the executive order is asking the Department of Labor to do a lot of things, like write up a report about the use of AI in the workforce, or what workforce compensation should be if your job is taken away by AI, or the robot tax that a lot of people have been talking about. The technology is moving pretty fast. We don’t necessarily know right now what adoption will look like next year. And the other thing is, it’s not entirely clear what the administration wants to do with those reports.

The other thing I would say is that there are a lot of unfunded mandates in this. There are agencies like NIST, which has a billion-dollar budget, but both the administration and Congress expect it to work like a $10 billion agency. Decades of neglect for these agencies has led us to a place where now suddenly they have to implement so many of these things but don’t have the workforce. There was an article in the Washington Post about how the team at NIST that is more or less responsible for the implementation of this EO has a staff of roughly 20 people; it’s hard for these agencies to actually recruit people. And that does not necessarily line up with what the administration is saying, say in Section 10, where it talks about increasing talent in the government. You’re not actually investing in the agencies; you’re not actually making them competitive with the private sector. And now you’re expecting that people will show up just because, “hey, let’s have some good impact.”

Then the other thing, obviously, is that there are certain reporting requirements that some people have said are not really clearly defined. What would happen if the government judges that a particular company’s measures are inadequate? Does the government have any authority to stop the release of such models? Not clear. What if someone just puts them up open source? Not sure what will happen. Then there are other reporting requirements around cloud providers, where you have to notify if foreign persons are using your cloud services to train a dual-use AI model. And the way it’s actually written, I know the intent is for it to be about commercial use, that if somebody is using it commercially, the agency should be notified. But the way it’s written, a plain-text interpretation could be: say someone at an IIT in India is working on a large language model. Well, AWS has to notify the US government that this person is working on a large language model. And that goes against the intent of Section 5, which is about attracting global talent. So there are certain areas where more clarity is needed, more money is needed, more intent from the White House is needed.

Camille Morhardt  10:46

I want to go back to Miranda’s comment; I think you said there were two major implications, or potential major implications, for the private sector. So I just want to jump to that and have you tell us what those are, if you could.

Miranda Bogen  11:01

I think what Divyansh was talking about is one of those implications. So for companies who are developing that cutting-edge technology, if they not only are building but are contemplating building technology of a certain level of advancement, they are implicated in those reporting rules. And I think that’s one of the things that raises questions, as Divyansh was saying: they’re invoking the Defense Production Act in a pretty broad way, on a communications technology, requiring reporting at a certain level. And it doesn’t seem like many people have really thought through the implications of that, certainly compared to the rest of the order, which is really robust. And, you know, we really welcome the thoughtfulness and the depth and the engagement that went into it. It didn’t seem like that section had as much engagement with civil society and with stakeholders who might think about what equities are implicated by such a requirement.

So that’s one thing we’re going to be keeping our eye on and looking at. The Know Your Customer requirement is another one. Companies may already have such obligations in the financial context. But there are potential surveillance implications here, of companies needing to report information to the government, that would have raised pretty big flags a decade ago, and certainly a few years ago as well. And the question now is, does the concern about the “capabilities,” quote unquote, of this new technology change the calculus of when we might be worried about government approaches such as this? Again, we want the government to incentivize responsible development of AI technology, not just for the most advanced models but also for models of today. But we also want to make sure that the way we go about that is going to have the impact that we want, and not have externalities on people’s rights, or at least that we’ve carefully weighed those things. So again, the conclusions here aren’t fully clear, but it’s something, I think, to pay attention to and to continue having conversations about.

Camille Morhardt  12:58

When you say surveillance that would have raised flags ten years ago but now is kind of being written or captured in here, what are you talking about?

Miranda Bogen  13:05

So post-9/11, I believe there was a big push for reporting of financial transactions above a certain amount. And at the time, there were lots of concerns within civil society around the type of information that would then be reported to the government and how that information would be used, especially when it came to folks who are not US citizens, and how governments were trying to collect a lot of information that they would then use in intelligence operations and other situations. And you could imagine similar cases here; the act of using cloud service providers to train AI models is not, per se, something that’s risky or deserving of intense scrutiny by a government that is not your government. That raises questions here: under what circumstances does working on AI that is so cutting-edge augur for that type of intervention? I think we don’t know yet. The thresholds that were set in the executive order, we understand, are the result of some consensus among experts around today’s models’ capabilities, today’s models’ scale, and what the next generation of advanced AI models will be. But I’ve worked with engineers, as I’m sure many listeners of the podcast have, and when you give them a limit, they are incentivized to do everything possible to work within that limit so that they don’t have to do a whole lot of extra work. And so any setting of technical thresholds as the way to dictate when more government scrutiny or certain reporting is needed will have that sort of limitation: people will try to figure out how to do the work they want to do without triggering those requirements. So that’s more of a practical potential limitation.

Camille Morhardt  14:48

So you said there were two. So what’s the second implication for the private sector?

Miranda Bogen  15:14

One role that the government plays is that it’s one of the largest employers, and it’s also a source of a lot of contracts with the private sector. And so the Office of Management and Budget draft guidance that came out this week, in concert with the executive order, is a way that the government is trying to define what requirements it should look for in vendors it uses to develop AI technology. And while that’s still draft guidance, and the government is seeking feedback on it, there will be some expectations that the government sets for businesses that are trying to develop technology for the government. And I think that is a tool that governments tend to use to set standards for the private sector in general, because of the breadth of businesses that do contract with the government. And it’s something that the executive branch can do unilaterally, whereas you might need Congress to do something that affects the economy more broadly. But for businesses that are looking for a signal on what is expected of them in developing AI responsibly, this is a good barometer to see what is coming down the line and where the expectations already are, even if the requirements aren’t there.

Divyansh Kaushik  16:01

Can I just add to that? I think there are a couple other things, too. There’s a Request for Information that’s going to come out from the Commerce Department on the potential benefits and harms of open-source models, which will impact a lot of companies, especially startups, for whom open-source models are a way to level the playing field with big corporations. But at the same time, it also has implications for national security; we’re implementing very strict semiconductor and chip export controls on China. And so where does open-source AI lead us, right? Does it just negate the purpose of those chip controls, where those countries could just use our open-source models for all the things they’re doing in their own countries, surveilling minority populations and conducting persecution? So that’s one. And then Section 5.1 is massive for the industry. Industry has been talking year after year about lack of access to high-skilled talent. The administration said, “Great, you know, we know Congress hasn’t done anything since 1990 on immigration policy, but here are the levers we can internally pull to streamline some of the laws that exist on the books. How do we modernize H-1B pathways for entrepreneurs, or provide more certainty and more clarification around the O-1 visa for AI talent, or do a Schedule A update?” For instance, the Schedule A Shortage Occupations List has not been updated in four decades. There are two occupations on that list, nursing and one more. That list determines what the Department of Labor says are shortage occupations for which there are not enough skilled workers in the United States. Right now, if your occupation is not on that list, you have to spend 12 to 15 months going through the PERM process, which is a labor market certification test. Updating the list essentially just cuts that 12 to 15 months, and that makes it easier to hire people quickly, right? But overall, I would also add there’s one more implication for the private sector, which is in the definition of dual-use. That’s the administration saying that AI is now part of the national security apparatus, whether the industry likes it or not, right?

Camille Morhardt  18:24

Right.

Miranda Bogen  18:49

Actually, one more body of implications for the private sector. They’re not quite as direct, but the executive order does tell the regulatory agencies to come up with plans for how they are going to enforce the laws they are already empowered to enforce. And that implicates every company that’s subject to that agency’s coverage. So the Federal Trade Commission is instructed to make sure that there’s fair competition in the marketplace; the employment agencies are instructed to make sure that employers are following Title VII of the Civil Rights Act, and a number of other things. So while the specifics might not have been enumerated in the executive order right now, these agencies were instructed to come up with plans for what they are going to do with the companies that are within their jurisdiction. And so I think there’s a lot for companies to watch for in the coming months.

Camille Morhardt  19:19

What is the timeline for some of these expected next steps or next level detail?

Chloe Autio  19:27

I can talk to that. You know, each agency has different types of directives, right? Some of them are opening a rulemaking; some of them are coming up with a report; some of them are issuing new guidance. And I think that the dates, Divyansh correct me if I’m wrong, range anywhere from about 30-40 days to a year. So we have a pretty long timeline for various initiatives like these to roll out. And, you know, some of them may get extended, some agencies may ask for extensions, that sort of thing. It really will depend on the agency. But I think that the administration is really, really focused on getting this work done, right? And so we’ll see how it all rolls out. But that’s the timeframe.

Divyansh Kaushik  20:06

Yes, I think there’s going to be an aggressive implementation of the EO, despite the 30-40-day to one-year timelines. I would anticipate that rulemaking comes out pretty soon, because the White House clearly wants final rules to be published before the election, so that in case the administration is not reelected, the rules are in place and there’s some regulatory stability; it’s not just that a new administration comes in, repeals the EO, and now you go back to square one.

Camille Morhardt  20:35

Does this EO have implications for foreign governments or foreign companies that are different? As an example, you know, when GDPR came out in Europe, that had a lot of implications for US companies, which had to start scrambling, right? So is there any kind of similar implication from this toward other countries?

Divyansh Kaushik  20:54

Yeah. If the Saudi Arabian government wants to use AWS servers in the US to train the next generation of Falcon models, the US government’s going to know. So that’s one of the big implications there. There’s a whole section on international engagement, Section 11, where the US wants to develop shared global standards. But also, one of the underappreciated points in that section is directing the US Agency for International Development to develop an AI for Global Development playbook. USAID has not modernized; getting it to actually think about how AI can assist in global development is very important. Those are provisions of the EO that will impact other nations.

Chloe Autio  21:40

Yeah, I would just add really quickly to that. The administration has been very blunt about the fact that, you know, this is the most robust action that any government anywhere has taken to date on AI. And that’s a pretty profound statement to make. And it’s true in a lot of ways, but I think that whatever the US does on AI here, foreign governments are always going to react, right? And so what the US government requires of large language model developers or cloud service providers may make foreign governments think, “Well, is this something that we should do, too?” But all that being said, I think the Biden administration, too, has really been explicit about using this order to lean in on foreign policy as well. And I think that, generally, this will be a major vehicle for the administration and the US for global leadership. But the devil will be in the details. And of course, it remains to be seen how different governments will cautiously position themselves and react toward these requirements as they roll out.

Miranda Bogen  22:35

The backdrop of all this, of course, is that the EU AI Act continues to go through negotiations. And that will be another key moment that shapes the conversation and shapes the incentives the private sector has to prioritize whatever interventions end up being required, whatever compliance ends up being required. And that, I think, is what advocates have been asking for from governments for a long time: we know what the issues are, and we need to change the incentives. And that often happens through regulation, to actually change decision making, to change the calculus of when it is okay to launch a product and when you need to do more before you launch that product to prevent some kind of harm to society that might not manifest purely in economic terms.

Camille Morhardt  23:24

One more thing I want to ask is right in the title: the official title of the executive order refers to “the trustworthy development and use of AI.” Can you tell us how the government has defined trustworthy? Does it have actual tentacles down into the technology when it says trustworthy, or is that a subjective descriptor?

Chloe Autio  23:47

Yeah, I can just speak really quickly about how NIST, the National Institute of Standards and Technology, which is one of the leading players on AI policy in the government, has thought about this. They actually took the time to define what trustworthy AI means, as opposed to, you know, using responsible AI or accountable AI or ethical AI, terms that can be used interchangeably but often are not very specific, right? I think NIST, being a technical organization, says, “Well, you know, we understand what it means to build technology in a trustworthy manner. And we’ve done a lot of work on trust as more of a technical concept. And so we’re going to lean into describing our work in AI through the context of trust, through the lens of trust, as a way to really lead.” So I think that, given NIST’s role in a lot of AI policymaking initiatives, trustworthy AI is a good way to describe all the things we’re talking about when we say “responsible” and “accountable” and “ethical,” but coming at it from more of that technical lens.

Miranda Bogen  24:43

One of the things that the administration, I think, is trying to really hold center is that, ideally, the country will be able to take advantage of this new technology and everyone will benefit from it. But that won’t happen if people don’t trust it. That’s the case in other industries as well: in aviation, in automotive transportation. And while there are some people saying that regulation is slowing this technology down and that that’s something to be concerned about, I think what we learn from these other industries is that’s just not the case. Seatbelts make people safer. We all trust that the food we eat is safe, because there is a regulatory structure that we know is keeping tabs on that and addressing issues when they arise. And I think that’s what the government is trying to create around AI. Because there have been so many stories about where it’s gone wrong, whether it’s facial recognition misidentifying people, or people being denied loans, housing, or jobs by this technology, that without some kind of guardrails, people are going to shy away from it. And then we won’t be able to take advantage of this technology, because people will be concerned it’s not in their interest to do so.

Divyansh Kaushik  25:53

When cars were introduced, you know, we used to have what we call “red flag” traffic laws. It was required for a human to walk with a red flag six feet in front of the car, to warn passersby that a car was coming. So when a new technology arrives, people are generally afraid of it. And it is the job of the administration to make sure it can provide some certainty from its side: “Hey, look, there are all these guardrails in place to make sure this technology is safe, and it can be trusted,” to increase adoption. Because that’s essentially what happened with cars eventually; you know, we don’t have people walking with red flags in front of cars anymore, or they’d probably get hurt.

Miranda Bogen  26:39

An important point in how the executive order addressed the broad umbrella of what AI is, is that there are known concerns about systems that are already deployed in the economy, and there’s been a lot of discourse about what the problems that technology poses might be. We’re really close to putting the safeguards in place to actually tackle some of those things, but we’re not there yet. This is another step in the direction of setting up those safeguards, but there’s still more to do. At the same time, the technology continues to advance so quickly that even completely different concerns have popped up in the last six months, which the government is also trying to proactively address.

And looking at the last decade of technology, I think everyone regrets a little bit that we weren’t more proactive and didn’t have more foresight around what could go wrong. We’re trying to clean that up now, and we’re trying to get ahead of that when it comes to AI. But we’re kind of at two different phases. For the broad swath of automated technology and machine learning, the simple AI that’s already out there, there are some clear things to do, and the government is trying to really move forward on that and move to action where we can. And then we’re still kind of chasing the newest technology, trying to make sure that people are wrapping their heads around it and that there are some basic guardrails in place, so that we don’t find ourselves behind again, seeing the harms but not having the infrastructure or the incentives in place to address those harms.

So I think the order really did a good job of being broad enough to capture those different interests. But that’s always going to be the challenge in regulating technology: trying to stay ahead of it while also not getting it wrong. And I think the most important thing now is to make sure that the approaches we know will work, including enforcing existing laws, end up getting incorporated into how this technology is built, but also that we’re foreseeing where there are still gaps and where we need to come up with new approaches.

Divyansh Kaushik  28:38

And we’ll see new risks come up, too, right? There are known unknowns, and then there are unknown unknowns.

Camille Morhardt  28:42

Well, yeah, with the large language models, which have just kind of exploded in the last year, I don’t think anybody anticipated how that technology was going to be used, how broadly it would be adopted, or how quickly. So I expect there will be more innovation similar to that, or radically different but equally surprising.

Well, Miranda, Divyansh, and Chloe, thank you so much for joining today and walking us through these implications. It’s amazing how quickly you’ve each read and digested this very large executive order, and we appreciate the insight.

Chloe Autio  29:18

Thanks for having us.

Divyansh Kaushik  29:19

Yeah, thank you.

Miranda Bogen  29:20

Thank you.

Announcer  29:24

Stay tuned for the next episode of InTechnology. Follow @TomMGarrison and Camille @morhardt on X to continue the conversation. Thanks for listening.

Announcer2  29:35

The views and opinions expressed are those of the guests and author and do not necessarily reflect the official policy or position of Intel Corporation.
