InTechnology Podcast

Emerging U.S. Policies, Legislation, and Executive Orders on AI (178)

In this episode of InTechnology, Camille gets into emerging artificial intelligence policy with Chloe Autio, independent AI policy and governance advisor. The conversation covers the current state of legislation and policies surrounding AI in the U.S. and internationally, the biggest concerns about AI in policy discussions, and how big tech companies and other businesses are influencing AI use and regulations.

To find the transcription of this podcast, scroll to the bottom of the page.

To find more episodes of InTechnology, visit our homepage. To read more about cybersecurity, sustainability, and technology topics, visit our blog.

The views and opinions expressed are those of the guests and author and do not necessarily reflect the official policy or position of Intel Corporation.

Follow our hosts Tom Garrison @tommgarrison and Camille @morhardt.

Learn more about Intel Cybersecurity and the Intel Compute Life Cycle (CLA).

The Current State of AI Policy and Legislation

While the conversation about AI regulation began years ago, there has been little progress in enacting official policies and regulations around AI, both in the U.S. and globally. Only in the past year, with the explosive emergence of generative AI, has the focus on AI become so heightened. Chloe shares how, despite the current lack of legislation, many world governments are moving quickly to understand AI and take appropriate action. Right now, the European Union is in trilogue over the proposed EU AI Act, which Chloe believes is likely to go into effect next year. The UK also has an upcoming AI Safety Summit to create principles and guidelines on AI safety, while the G7 now has the Hiroshima AI Process. Chloe also gets into China's AI standards, which take a narrower approach to AI oversight.

Meanwhile, in the U.S., President Biden is expected to soon sign an Executive Order on AI. It is expected to focus on investment in and understanding of large language models among federal agencies and the government, on workforce implications such as bringing more AI and tech talent into the government workforce, and on helping different agencies control and oversee the technology while working closely with NIST. Other U.S. government efforts on AI regulation include the Biden administration's work with the White House Office of Science and Technology Policy, as well as Senate Majority Leader Chuck Schumer's AI Insight Forums.

Hot Topics in AI Policy Discussions

There are many important topics in AI policy discussions right now. While many of these discussions and debates have shifted toward more distant and abstract concerns about AI, Chloe says what's now missing from many of these conversations is attention to present-day concerns, such as models being trained on potentially biased data. Other concerns include ownership of data-derived insights, privacy, protecting IP and copyrighted content, misinformation from deepfakes, and regulation of open-source models. Additionally, there is an initiative to create a National AI Research Resource, or NAIRR. This resource would be a place where researchers from the public sector, academia, smaller companies, and students could learn and collaborate with AI models as a public resource, since privately training LLMs and foundation models now costs millions or even billions of dollars. However, something like the NAIRR still needs authorization and funding in order to get started.

How the Business of AI Affects Regulation

Big tech companies and smaller businesses alike are shaping how AI is used and how it is regulated. Chloe highlights how, earlier this year, 15 companies, including Microsoft, OpenAI, DeepMind, Google, Cohere, Stability AI, Nvidia, and Salesforce, were invited to the White House to agree to certain commitments on the security, safety, and trustworthiness of AI systems. There is also the Frontier Model Forum, consisting of Microsoft, OpenAI, Google, and Anthropic, which develops best practices for AI in areas such as watermarking, red teaming, and model evaluations. As for other businesses, Chloe says they should understand how AI is used in their business and what they want to use it for. While it's a helpful tool, not every problem needs to be solved by AI. There are also many risk management questions businesses should ask themselves about data and AI models.

Chloe Autio, Independent AI Policy and Governance Advisor


Chloe Autio is an independent AI policy and governance advisor based in Washington, D.C., where she provides services to leading AI and tech organizations, as well as government and civil society organizations, on initiatives related to AI policy and oversight. Previously, she was Director of Policy at The Cantellus Group. Before that, she led public policy at Intel as Director of Public Policy, building on her prior roles there as Public Policy Manager and Public Policy Analyst. She has a B.A. in Economics from the University of California, Berkeley.



Camille Morhardt  00:28

Hi, I’m Camille Morhardt, host of InTechnology podcast. And today I have with me Chloe Autio. She is an independent advisor on AI Policy and Governance based in DC. And we’re going to cover AI policy, AI legislation, AI regulations, AI executive orders, everything somebody might want to know about it. We’re going to focus on United States policies, but we’re gonna have a little bit of a glance or window into global policies as well. Welcome to the podcast, Chloe.

Chloe Autio  00:57

Thanks so much, Camille, it’s great to be back with you. And I’m excited to chat today.

Camille Morhardt  01:00

I know very little about these policies and regulations. So, I've heard, probably like everybody else, you know, the news and kind of like, what's coming? And should we be worried? We hear, you know, Elon Musk tell us we should all be very, very worried and that we need regulation and checks for AI. And then we hear from other people that it's really just a tool, and we're over-worrying, and we don't want to stifle innovation. So can you give us a little bit of a framework or lens to even approach the conversation, and then we'll kind of walk through various topics.

Chloe Autio  01:32

Absolutely. And I think that that grounding is really, really important. The US and global discussions about AI regulation have been going on for almost, I don't know, five to ten years now. Right? The first discussions in the US started in about 2016/2017 in Congress, and actually in the Obama White House a little before that, with a white paper on sort of what artificial intelligence was and what it wasn't. And those discussions, and talk about the need for oversight of these AI tools, particularly increasingly powerful ones, have sort of built over the last five years.

The floodgates of the discussion have opened this year with the advent and uptake of really popular generative AI tools, like ChatGPT and DALL-E and Midjourney and Stable Diffusion, that really put this technology in the hands of consumers and policymakers in a way that I think felt a lot more real. And so the focus from policymakers on needing to oversee and control and really better understand this technology has been really, really heightened in the last six months.

Camille Morhardt  02:40

Are there any laws or legislation on the books? Or are we still in sort of like guidance? And, you know, policy or executive order space?

Chloe Autio  02:50

There's so much out there right now on AI regulation, on AI policymaking, on AI investment by government. I'll spare you a lengthy recited memo, the sort of tour of the AI policy world that I do a lot of for my work, and focus really quickly on trends and new happenings, particularly in the US. But around the world, obviously, we have the EU AI Act, which has been in development for, you know, three to four years. The European Union is currently in what's called a trilogue, sort of a multi-stakeholder, intergovernmental discussion, really about the final issues in the EU AI Act. And from there, you know, the law will be adopted and likely go into effect sometime next year. So organizations and governments across the world will be, and are, really paying attention and sort of reacting to that. Other countries, obviously, are looking at where they can step into the AI regulatory debate. I won't go into all of them. While the EU AI Act is not necessarily the most important regulation, it's definitely the fastest moving and most well-known broad regulation on AI that's actually going to come into force relatively soon, and so it has driven a lot of the discussions.

But to bring it back to the US, there's a lot happening, right? The Biden administration and the White House Office of Science and Technology Policy, and other White House offices, have really, really focused on AI as a primary policy issue, particularly because, to the point I made earlier, consumers have really started to play with these tools in a way that I think we haven't really seen before, right? In just two months, ChatGPT became the fastest-growing consumer tool to reach 100 million users, and that's crazy. Before that, we were talking, you know, TikTok and Meta. And so to have these kinds of technologies in the hands of people also raises questions about how they can be used by bad actors, right, malicious actors. And so I think that, you know, a lot of offices within the White House and across the Executive Branch have sort of leaned into this discussion and said, "Hey, what do we need to be doing to control these technologies?"

So earlier this year, you know, the White House got about 15 companies to agree to a number of commitments on the security, the safety, and trust in AI systems, particularly focused on these really powerful foundation models that form sort of the basis for large language models and the chatbots and really, really powerful models that have captured public attention lately. The administration is also working on an Executive Order, which I'm sure you've heard about. It's been delayed quite a few times, but it is, I think, finally expected to come out around October 30 or 31st. And that will really focus on more investment and sort of understanding among federal agencies and the US government on these LLMs and how they can actually be used. It'll also focus on some workforce implications. So how can we get more talent into the US government? We really, really desperately need more AI talent, more tech talent generally, in federal agencies, and so the EO will focus on that, too. There are a few more things that it will cover; many of them have been sort of teased or floated in the news. But broadly, you know, it'll also focus on really helping different agencies understand how they can control and oversee these technologies at a high level, and on working really closely with NIST, the National Institute of Standards and Technology, to sort of create testing and guidelines around using these really powerful models.

Camille Morhardt  06:16

I have a few questions off of that. Could you back up to the cadre of tech companies that came together, this was in the news, and announced a commitment that they were making. Can you remind us who they were, or who several of them were? And what does it mean that they're coming in stating what they're willing to do, versus, you know, being told what to do by legislation or the Executive Branch? Give us a sense of what it means that these companies are coming together.

Chloe Autio  06:47

I think what it really means, Camille, is that, you know, these companies understand that they need to do something; that they need to demonstrate some sort of responsibility and willingness to work with government on controlling these issues. And not just the perception of the concerns around generative AI, but also what they're actually doing to govern and control these really, really powerful models. And I think it's really important that, in addition to the big players that are chronically in the news about generative AI, you know, Microsoft, OpenAI, DeepMind, Google, in the second round of commitments some of the smaller companies, Cohere, Stability AI, Nvidia, and also Salesforce, came together, right, and said, you know, we're also going to commit to test and look into what we can do around watermarking. We're going to commit to setting up red teams, internal and external, so we're bringing outside stakeholders in to evaluate and interrogate our models and really get feedback from them. And also to invest in, you know, better cybersecurity infrastructure, which is something that underlies all of this and, up until recently, has not been as much a part of the discussion.

So I think what this means, and what we'll see, is these companies working together more to really build and develop some of these standards. One of the outcomes of these commitments, at least the initial round, was the formation of something called the Frontier Model Forum, which is sort of an industry coalition, but not quite, made up of, you know, the four largest players in this space: Microsoft, OpenAI, Google, Anthropic. And they're going to be coming together and really trying to develop best practices related to all of these things: related to watermarking, related to red teaming, related to model evaluations, and kind of working together in a way to demonstrate that, you know, they're taking these issues really seriously.

Camille Morhardt  08:29

Can you pause, and just tell us what watermarking is? Because I think you explained red teaming well, but—

Chloe Autio  8:34

Yeah, so watermarking is a technique that involves embedding a signal in a piece of data or a piece of content, essentially to identify its provenance: where it came from, who made it, have any changes been made to it? Was it manipulated? And this is really important in the context of AI, because one of the greatest concerns right now with these generative tools, in addition to copyright and IP, right, have these works been created using copyrighted works? A big concern, in addition to that, is misinformation. Right? Are we looking at deepfakes? You know, a video of the president tripping and falling in a bad way, right, can have implications for how people view him and how people think about, you know, the information that they're getting. It also has impacts on trust in technology generally. And so inserting a watermark on a piece of content can give either developers or users a better sense of where that content came from, including whether it was AI-generated and not actually real.
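The idea Chloe describes, embedding an invisible provenance signal inside the content itself, can be sketched in a few lines. The snippet below is only a toy illustration, not a real AI watermarking scheme (production approaches, such as statistical watermarks applied during token sampling, are far more robust against removal); the function names and the `gen-by:model-x` tag are made up for the example.

```python
# Toy provenance watermark: hide a tag inside text using zero-width
# Unicode characters, so the visible content is unchanged but a tool
# that knows the scheme can recover who or what generated it.

ZW0, ZW1 = "\u200b", "\u200c"  # zero-width space / zero-width non-joiner

def embed_watermark(text: str, tag: str) -> str:
    """Append the tag, encoded as invisible bits, after the visible text."""
    bits = "".join(f"{ord(c):08b}" for c in tag)
    payload = "".join(ZW1 if b == "1" else ZW0 for b in bits)
    return text + payload

def extract_watermark(text: str) -> str:
    """Recover the hidden tag, or return '' if no watermark is present."""
    bits = "".join("1" if c == ZW1 else "0" for c in text if c in (ZW0, ZW1))
    usable = len(bits) - len(bits) % 8  # whole bytes only
    return "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, usable, 8))

stamped = embed_watermark("A photo of the president.", "gen-by:model-x")
print(stamped == "A photo of the president.")  # False: the payload is invisible but present
print(extract_watermark(stamped))              # gen-by:model-x
```

A scheme like this is trivially stripped by re-typing or re-encoding the content, which is exactly why the robustness of watermarks is one of the open technical questions in the policy debate.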

Camille Morhardt  9:32

So, are there any noticeable misses in the companies that are coming to the table?

Chloe Autio  9:38

That's also a really good and super important question right now. And it's really important because so much of the focus right now in policymaking, whether it's in Congress with Schumer's AI Insight Forums, or even these White House commitments, has shifted to these really powerful models. But the reality is that most AI development is not happening at that level; it's happening across the enterprise. It's not foundation models. It's not, you know, models trained with billions of parameters like these foundation models are. It's what I maybe would call clunky AI, you know, computer vision models, reference implementations. That's not to say that these technologies aren't advanced, but it's the technologies and sort of intensive data workflows and applications that are really being used and adopted right now that are creating real harms, like biased algorithms used in hiring contexts, algorithms used to make decisions about loan eligibility, that sort of thing. And so I can't say that one company or one organization is missing, per se. But I think what's missing from this conversation, particularly in policy circles generally, is that focus on: how can we address the harms and the concerns with AI that are happening today? And how can we shift the focus back, or at least maintain focus on, the AI that was being used, and that needed governance and regulation, before ChatGPT and foundation models entered the room?

Camille Morhardt  11:03

One other question I have is, when we see larger companies come to the table, granted, there are a few smaller ones, they are often the very companies that are benefiting and collecting the most information and have the resources to process it, use the models, train the models, and then implement whatever it is they're going to implement for their benefit and potentially other people's benefit. But is there any space, thought, or consideration given at this point to making the information that's collected by large aggregation companies available to, let's just say, the public or developers more broadly, so that the concentration of information is more distributed or accessible?

Chloe Autio  11:55

Yeah, it's a good point. And I think that this has been a focus of policymakers, but more so in the context of sharing compute and resources and data. So one of the big policy initiatives right now, there's actually a bill up on the Hill, is the creation of a National AI Research Resource, the NAIRR for short. We're talking about a big AI computing cloud, full of a lot of data and GPUs and CPUs: a place, basically, where researchers from the public sector, from academia, from smaller companies, even students, right, can come together and work with and play with data and AI models and learn. Public compute, basically, to broaden access. As you know, compute is extremely expensive; training these models, particularly these large language models and foundation models, can take millions of dollars, sometimes billions, right, as we're seeing a lot of these big companies pair up with large infrastructure clouds. So creating this space, through the NSF and the Office of Science and Technology Policy, for different researchers from the public sector, and not just the private sector, to come together and share resources has been a big focus of a lot of policymakers. I will say, unfortunately, though, it has not yet been funded. So there's been a lot of work to study what such an infrastructure, such a NAIRR, would look like. But we need some bills to pass in Congress to really authorize and appropriate that funding next.

Camille Morhardt  13:17

Okay, well, let’s move into legislation.  So what is going on with legislation both in the US and more broadly?

Chloe Autio  13:23

Yeah, so believe it or not, members of Congress are still in a very sort of fact-finding stage with AI. This may feel surprising to folks who have testified in AI hearings, or helped develop AI hearings, or have been keeping track of AI policy proposals, like the Algorithmic Accountability Act or the AI in Government Act, which are bills and proposals that have been introduced in Congress in the last several years. But again, with this new focus on generative AI, a lot of the members of Congress have said, "Okay, hang on. We know we need to do something, but we really want to understand what's going on," I think maybe to help them sort of place themselves in this moment of AI development.

And so the biggest convening that people are talking about is Chuck Schumer, the Senate Majority Leader from New York, has convened these AI Insight Forums, as he calls them, and the goal really is to bring together experts from across the AI field and other disciplines to educate members of Congress, and really their staff, on all sorts of things related to AI: risks, benefits, harms, open source, different types of proposals around licensing. I think we can expect to see, toward the end of the year, some sort of proposal from the Majority Leader and his co-collaborators on AI issues, in a bipartisan way, on a framework for AI safety that also promotes innovation.

Camille Morhardt  14:48

Are there multiple areas of focus that we know of so far that legislators are concerned with? Or is there a predominant one? Like, I can think of a few things: securing it; let's prevent disinformation; I hear ideas of, let's make sure we enforce privacy for, you know, people who are contributing their data. Or is it like, "no, there are actually five different things we need to think about how to address in legislation"?

Chloe Autio  15:14

Yeah, you know, I think it really varies depending on the member, right, and their priorities and their politics, honestly. But most of the bills I'm seeing are really focused on striking the right balance, threading the needle between protecting rights and values and civil liberties, and also fostering innovation in the technology, or at least not limiting innovation too much. And an underlying theme that really supports that is, as you mentioned, national security and competition with China. That is definitely one of the themes that gets a lot of attention on the Hill. As you know, China is our global competitor. They're also a major competitor in developing AI research and cranking out AI journals and citations and papers and making contributions to the AI space. And so there's a lot of concern from US lawmakers, around what's been dubbed the "AI race," though I'm not enamored with that term, to really make sure that the US stays competitive as a government and as an industry in developing really powerful AI models, and also to prevent China from getting too powerful in AI development.

Camille Morhardt  16:22

So we talked about the US and the EU, but are there other legislation or policy or regulations or standards or guidelines being developed in other parts of the world or other regions?

Chloe Autio  16:33

Yeah, so one of the things that's getting a lot of attention, in fact, it will be coming up in a few weeks here, is the UK's AI Summit. And interestingly, the UK has sort of positioned itself in between the US and the EU in terms of its approach to AI oversight. If you're thinking about this on a spectrum, the US hasn't exactly come out with an omnibus or broad AI regulation. The EU, on the other hand, is in the process of doing that as we speak. And so the UK has sort of said, we're going to take an innovation-forward approach, while also thinking about a lot of these concerns and risks. And they put out a great white paper earlier this year. But the UK will be hosting an AI Safety Summit on the second of November. And the Prime Minister, Rishi Sunak, has said this is a really important priority for him, if not one of the largest priorities in his government. And that summit will bring together a lot of the AI model developers that we just talked about, plus governments from around the world. And they'll look to create some principles and guidelines around AI safety, and work in a really multi-stakeholder fashion to, you know, put forth some kind of agreement on AI safety, the details of which have been pretty broad for most of us who have worked in this area and field for a while. But I'll be interested to see what tangible outcomes come from that.

In addition to the UK Summit, you know, countries all over the world have been participating in the G7's Hiroshima Process around governance principles, which explicitly focuses on, you know, creating some sorts of agreements on AI development and use. So lots of multi-stakeholder, multilateral, international coordination on these issues. And we're also seeing, you know, different countries sort of making moves to lead and say, you know, "we're going to be the convener of these issues, and we want to play that role."

Camille Morhardt  18:18

Are there any major disagreements at this point cropping up between, you know, different countries or geographies?

Chloe Autio  18:26

It's sort of how national policymaking and standards go generally, right? Every country, every citizenry has a different approach to values related to technology governance, based on, you know, norms and cultural values that are independent and context-specific within their country, right, or communities. And so, you know, I think there's a lot of focus and desire, particularly from industry, to have more international coordination on global governance. But I think that these discussions will remain pretty high-level in nature. And that's how a lot of these processes go, right? Internet governance, privacy principles; they can help set a standard. I'm thinking, for example, of the OECD privacy principles, right? That was really a leading body of work that helped set that values-aligned framework for thinking about privacy regulation. But it was up to each individual country to take that and make it their own and adopt it and run with it in the context of, you know, AI use in their country and cultural norms. And so I think we'll probably see something similar. But at this stage, there's not a lot of broad or fraught disagreement around what AI development should look like; rather, a broad consensus that, you know, we want to be developing these technologies for good. And the way to do that is to acknowledge and manage the risks and harms while also not totally shutting it down or pulling the plug in a way that would prevent any one country from being competitive.

Camille Morhardt  19:53

So, what about China, since you brought them up before as a major player in AI? Have they developed regulations or standards or policies?

Chloe Autio  20:01

Well, mostly standards. So there are, like, three major standards in China that focus on sort of AI oversight, but they're very, very narrow, which I think is actually good, because it does something, right? One of them is that if there's any sort of AI application or decision being made, or AI-generated content, there's a disclaimer for any kind of what they call synthetic media, actually, in any kind of Chinese commerce context, which I think is really cool and sort of consumer-facing. And the Ministry of Industry and Information Technology there has actually been really front-footed on developing different types of standards. Now, of course, there's a question around how the CCP and its influence on Chinese companies actually affects enforcement of those standards, or, like, how that actually works. At the same time, they're using surveillance technology to persecute different minority populations, and you would think that this would be a consideration, too. But the regulations and standards that they have advanced are, like I said, a lot more narrow and focused on specific technology applications and, like, consumer awareness-building, which is really, really important. Like, the way that TikTok looks in China is completely different from the way that it looks here. And they have all sorts of restrictions on, like, children's screen time usage, and, like, gambling and video game addiction, that sort of thing. And I think we could really benefit from some of that stuff here.

Camille Morhardt  21:23

I've heard it said that the US innovates and the EU regulates.

If the EU is able to actually form some standards or policies with respect to AI, do you believe those will transfer over to the United States?

Chloe Autio  21:40

We saw this certainly with privacy, where, you know, the US has no omnibus privacy law whatsoever, and the EU passed the GDPR and said, "Okay, in the absence of US law, and now that Privacy Shield has expired, data transfers to the US are no longer adequate. And we don't feel comfortable sharing EU citizens' sensitive personal data with US entities or having them store or exchange any of that information across borders." And I think we'll start to see some of that with AI. How the EU AI Act will actually work in an implementation sense is, like, a huge, huge question. There's a lot of third-party access and oversight involved in EU AI Act compliance through what they're calling conformity assessments. And so, you know, how those are going to work in the EU is still a question, and how they'll work in terms of, like, an adequacy deal or decision with the US, I think, is another issue. And so, yeah, there's just so much kind of TBD with that. But I would hope, really, that some of what's happening in the EU would translate over here in a more meaningful way than, like, "do you accept cookies or not?", which is kind of what the GDPR has become.

Camille Morhardt  22:56

So another question, you brought up privacy and I’m wondering if things like regulations and standards, etc. or agreements around topics like privacy or cybersecurity, if those map over into AI, like, “well, whatever we said about data with respect to privacy before applies now to AI.” Or whether AI and large language models are fundamentally different in the way that they collect information and then generate insights based on an aggregation of that data. And now, who owns those insights, right, as opposed to “here’s the piece of data, and I can track what it is and who owns it always.”  If I’m generating an insight from a bunch of data. Now, who owns it? And now what does privacy mean?

Chloe Autio  23:39

Yeah, one of the things that I think about a lot, and that I think a lot of policymakers are focused on, is figuring out how we can leverage existing frameworks for cybersecurity, risk management, thinking about the security of critical infrastructure, privacy laws, good consent practices, you know, good data mapping, and governance practices. These are things that industry and governments have been doing and thinking about for a long time and to say, you know, we need to reinvent the wheel for AI, I don’t think is the right approach.

At the same time, I think that AI, particularly these generative AI models, does raise a lot of new and novel concerns about the things that you mentioned, right? If I'm building a data set that's not for an LLM, I can, you know, look at my consent, and obviously this is an oversimplification, but, you know, I have my data sources, if it's PII, and sort of understand my consents and have maps of where that data came from. And then I'm going to look at my use cases and say, you know, what regulatory environment, what industry-specific regulations apply to the ways in which I'm going to use this data.

The problem with foundation models is that the data used to train them has come from, you know, millions, billions of sources, sometimes off the public domain: from Wikipedia, from posts that may not be public, private Facebook posts that, you know, may have just been out there for years now, from when people didn't really understand what they were putting on the internet, and where, and who had access to it. And all of that information, all of those probabilities and weights, has been baked into these foundation models, which are now being inserted and used and implemented in different tools. And so that is something that we really do need to think about and consider, particularly in the really obvious contexts, right, of intellectual property and copyright. And obviously, I'm sure you're well aware of, you know, the writers' and the actors' strikes, where there are a lot of concerns about, you know, the use of my body or my image or my retinal gaze, right? How do I control those things? And how should I be able to control those things? These are all really important questions that, I think, have really come to the fore in this discussion.

But when it comes to sort of general technology governance, right, we’ve been doing this for decades. And so where we can sort of lean on and learn from technology governance frameworks and technology standards, right–either privacy standards, or sort of general risk management standards–we really should think about doing that. And I know many people are, but it’s just to say that, you know, I think there’s this tendency to think about AI like this big, flashy thing, an entirely new problem. And the reality is that, you know, we’ve managed technology solutions for a while, obviously not always perfectly, and it does come with risks, but we have frameworks to think about that and we should be using them.

Camille Morhardt  26:18

Yeah, okay, so if you’re a business, and if there is a difference between small and large, maybe you can qualify your answer, what are the main things that you should be aware of and tracking right now if you’re working with AI?

Chloe Autio  26:37

The first thing that I always advise organizations to understand is–and this may sound basic, or sort of like a step back–but just really understanding AI use in your business, and what you want to use AI for–whether you’re developing it, whether you’re procuring it, whether you’re doing a co-development. A lot of government and public servants that I talk to, you know, are sort of bombarded chronically by new flashy AI solutions and tools. And obviously, this is an issue in industry, too. And some of these tools, you know, purport to solve a problem that we may not actually need AI to solve. We always need to be asking ourselves, you know, what is the problem that I’m trying to solve right now? What is the business case? What is the business issue? And is AI the right solution? And then get to the question of sort of what kind of AI, right? Am I building it? Am I buying it? Am I working with a supplier or a customer? Because those discussions will inform the next set of questions that are really important for risk management, right? Where did the data come from? What data do I want to use? Do I need to buy that data? Do I need to get a license for it? Do I need to obtain new consents if I’m going to be building a data set myself? If you’re bringing a model into your data environment, what do I need to be thinking about in terms of outputs? Who owns the outputs of the model combined with my data and external data? These are all questions that, at least in a lot of government contexts, are still being sorted out, because there aren’t a lot of rules around procurement, and AI procurement specifically, in this country.

So, thinking about where data is coming from and what you’re going to be using it for. And then, you know, obviously, the AI use case, just getting really, really clear and precise about what that is so that you can really understand the risks, is really important. That’s where I would start.

Camille Morhardt  28:15

And so what else should I have asked or should I be asking that I haven’t asked yet on this topic?

Chloe Autio  28:22

One thing that’s captured a lot of attention right now, particularly in Washington, is sort of this debate around open source, and how we should be controlling open source data and models specifically, versus, you know, controlling through something like a licensing regime, which a number of these big AI labs have come out in support of. And it’s turning into a big, thorny debate, as a lot of these things do in Washington, but I think we’ll see some more discussion on that topic, certainly in the coming months. And I think that the licensing argument, and whether or not the US needs to set up an independent agency to oversee that licensing regime and these risks, whether that’s needed, will be part of the conversation for sure.

Camille Morhardt  29:02

What are the main components of the thorny debate? Is that the main component or what are the other ones?

Chloe Autio  29:07

There is a concern with a lot of the development of these models that, particularly if they’re open sourced, if the technology itself, the techniques, the access to the models is open, that, you know, they will fall into the hands of or be used by malicious actors who can use them to do things like, you know, develop a bioweapon, or use them to manipulate somebody who has access or control to those types of things, right? Use them to manipulate systems or infrastructure to get access to, you know, powerful codes or something like that. You’ve probably heard a lot about this sort of existential risk topic, which basically describes sort of these, you know, catastrophic threats to humanity or human existence that could be created by AI models. The concerns around that, and around access to these models, have sparked this debate around how we control them, right. And one of the primary solutions, proposed by Sam Altman from OpenAI, and supported a little bit more delicately by Microsoft, has been the creation of a licensing regime administered by a new federal agency, to say, you know, what are the thresholds for compute or model performance at which we need companies to obtain a license to sell. And obviously, you know, this has implications for access, because only really powerful companies with a lot of resources, not just for compute, but for compliance, will be able to participate in such a licensing regime. But it will continue to be sort of an issue. And of course, you know, on the open source side, the argument is really that, you know, these models should be available for use, people should be able to use them. Currently, there’s a huge power imbalance in access and in who can actually build and develop these models, highly concentrated in these big labs backed by large compute providers. And, you know, a lot of these capabilities should be open so that people in the broader community can learn from them and build things that are good, too.
So it’ll be interesting to see how that one plays out. But personally, I don’t think that the US is ready for a new agency, or could even make that happen in Congress, to administer or create such a licensing regime.

Camille Morhardt  31:10

Personally, are you optimistic or pessimistic about the future of all of this policy around AI?

Chloe Autio  31:17

I’m kind of pessimistic, I hate to say it. These technologies are really powerful, but Congress is so broken. And they’re just not really able, I think, to kind of place what’s going on in the broader context of, like, technology governance and development. Like, we’ve been talking about responsible AI for almost 10 years. I helped build the responsible AI program at Intel; like, 2017 was when this all kicked off, and there’s still really no incentive for companies and organizations to take this work seriously. And the fact that Chuck Schumer, arguably one of the most powerful people in government, period, is sort of spending his time bringing, like, Marc Andreessen and Elon Musk and all of these, effectively, effective altruists–which is important, but not the bulk of the AI debate–into these insight forums to kind of talk about things like catastrophic risks to civilization, instead of, like, not getting a loan because biased data has been used to train, like, a sort of dumb algorithm being used by Defense or a government contractor, has just really kind of distorted this debate in DC. And so it makes me feel kind of generally pessimistic. I try to not be so pessimistic on podcasts and things like that, because I think it would be a real doozy. (laughs)

Camille Morhardt  40:55

I think it’s important to have that perspective, though. Like you point out, I mean, there are sort of the very practical things that we have to get to–things that are being used on kind of a daily basis, these smaller things that make a big difference in individual lives. Versus the gray goo, and what do we do about that? Because even if we regulate that within one country, there’s the rest of the world, you know (laughs).

Chloe Autio  33:02

Yeah, that’s exactly it. And hopefully, I tried to kind of cover that, which is, like, you know, we could talk about foundation models for a long time. But really, what we also need to be focused on is just, like, general risk management and data management of these technologies, the ones being used right now. So, yeah, it’s a strange time in policy, and just generally. And I think with the election coming up–we didn’t get into this much–but I think we’re really in for one with misinformation and, you know, artificially generated content and tools. And watermarking is a solution, but if something fake doesn’t have a watermark on it, it doesn’t really do anything for anyone. So I think, yeah, these video editing–I guess it is video editing–and then sort of generation tools are getting really, really powerful and robust.

Camille Morhardt  33:52

Do you worry at all, also, though? I mean, I feel sometimes like the misinformation topic has multiple sides, right, or multiple perspectives. And, you know, one is that somebody is going to generate something that’s not accurate, and so we’d better have some stops in place so that people can understand whether something is true or not, you know, actually happened or didn’t actually happen. But the other side of it is, who now is getting to decide what’s true? Or, you know, saying this is misinformation, or this is not misinformation? Are we going to put the power in the hands of the few to determine that or, you know, to regulate that? And so, do you think about that side of it also?

Chloe Autio  34:38

Yeah, definitely. It’s a good point; it’s basically who gets to decide. And I think a lot of the technology, at least to my understanding–and I’m writing a paper on this right now–isn’t actually good enough to be able to provide an ecosystem-wide solution to any of this. But I think what we need to do is kind of give consumers, who are already so politically disengaged, like, a better sense of where to get trusted information. But back to the point I was making about the technology innovation, you know, like, DeepMind created this thing called SynthID, where they’re basically embedding, like, a watermark on photos generated on their developer tool Vertex, which is, like, great for content on the developer tool. But, you know, if something’s not coming from there, it’s not really helpful. And so, I don’t know, I feel like industry almost needs to come together. I know they’re doing some of this in the C2PA, that organization that Intel has been a part of for a while, actually, with Adobe and others, to sort of try to develop some tools for provenance and watermarking. But I don’t think it’s moving fast enough. And even then, like, not to get too political, but I think that, you know, the way that politics and, you know, elections have gone down in this country in the last, you know, five years has created, like, much bigger problems that technology won’t be able to solve. And people are more focused on these technology fixes, right, than, like, these big issues. But they’re not the things that people want to talk about, because they get too political.

Camille Morhardt  36:10

Wow. Chloe Autio, independent adviser on AI policy, regulations, and laws. Thank you so much for joining us today. I feel a little bit smarter right now, and also like there’s so much to go read up on to get smart on this topic.

Chloe Autio  36:25

So, so much. You know where to find me if you need any help. And thank you so much. It was so fun to join you again, Camille. And yeah, we’ll see what happens this year and beyond.
