
Emerging U.S. Policies, Legislation, and Executive Orders on AI


Is AI still like the Wild West?

In this InTechnology video, Camille talks with Chloe Autio, independent AI policy and governance advisor. They get into the present landscape of AI policy and legislation in the U.S. and internationally, key discussion points in AI policy conversations, and the intersection of business and AI regulation.

The Present Landscape of AI Policy and Legislation

Conversations about AI regulation began years ago, but tangible policies and regulations, both in the U.S. and globally, have been slow to materialize. The spotlight on AI has intensified over the past year with the dramatic rise of generative AI. Chloe emphasizes that, although current legislation is sparse, governments around the world are working quickly to understand AI and formulate suitable measures. The European Union is deliberating the proposed EU AI Act, which Chloe anticipates will be implemented next year. The UK is preparing an AI Safety Summit to establish foundational principles on AI safety, and the G7 has launched the Hiroshima AI Process. Chloe also touches on China's AI guidelines, which take a narrower, more targeted approach to AI governance.

In the U.S., President Biden is expected to sign an Executive Order on AI soon. The order will likely focus on federal investment in AI, building understanding of large language models across government agencies, bringing more AI specialists into government roles, and supporting agencies' technology oversight in close collaboration with NIST. Other U.S. government efforts on AI regulation include the Biden administration's work with the White House Office of Science and Technology Policy and Senate Majority Leader Chuck Schumer's AI Insight Forums.

Key Focuses in AI Policy Conversations

Several critical subjects are taking center stage in AI policy dialogues. While many debates have shifted toward more speculative, broad-based concerns about AI, Chloe notes a gap: these discussions often overlook immediate issues, such as bias in training data. Other pressing matters include ownership of data insights, privacy, safeguarding IP and copyrighted material, misinformation through deepfakes, and the regulation of open-source AI models. There is also a push for a National AI Research Resource (NAIRR). Envisioned as a shared hub for public-sector researchers, academia, smaller firms, and students, NAIRR aims to democratize access to AI models at a time when privately training large models can be prohibitively expensive. Efforts like NAIRR, however, still await official authorization and funding.

The Intersection of Business and AI Regulation

Both tech giants and smaller enterprises are shaping how AI is applied and how it will be regulated. Chloe points to a notable event from earlier this year, when 15 major companies, including Microsoft, OpenAI, DeepMind, Google, Cohere, Stability AI, Nvidia, and Salesforce, convened at the White House to commit to making AI secure, reliable, and trustworthy. The Frontier Model Forum, whose members include Microsoft, OpenAI, Google, and Anthropic, has been set up to develop best practices for AI watermarking, red teaming, and model evaluation. For other businesses, Chloe stresses the importance of understanding AI's role and purpose within their operations: AI can be beneficial, but it is not a universal solution, and companies should critically evaluate the risks in their data and AI strategies.

Chloe Autio, Independent AI Policy and Governance Advisor


Chloe Autio is an independent AI policy and governance consultant based in Washington, D.C. She advises leading AI and tech organizations, as well as government and civil society groups, on AI policy and oversight initiatives. She was previously Director of Policy at The Cantellus Group. Before that, Chloe led public policy at Intel as Director of Public Policy, having advanced from earlier roles as Public Policy Manager and Analyst. She earned her B.A. in Economics from the University of California, Berkeley.


Check it out. For more information, previous podcasts, and full versions, visit our homepage.

To read more about cybersecurity topics, visit our blog.

#artificialintelligence #AIpolicy #AIregulation

The views and opinions expressed are those of the guests and author and do not necessarily reflect the official policy or position of Intel Corporation.

-----

If you are interested in emerging threats, new technologies, or tips and best practices in cybersecurity, please follow the InTechnology podcast on your favorite podcast platforms: Apple Podcasts and Spotify.

Follow our hosts Tom Garrison @tommgarrison and Camille @morhardt.

Learn more about Intel Cybersecurity and Intel Compute Lifecycle Assurance (CLA).