Top Conversations on AI in 2023: From LLMs to Regulations

How many new conversations about AI took place this year?

In this InTechnology video, Camille explores some of the key AI discussions of the year. The segment begins with a focus on deep learning, a sophisticated form of machine learning, featuring insights from Intel Fellow Andres Rodriguez. Following this, Intel Labs’ Principal Engineers Selvakumar Panneer and Omesh Tickoo discuss the concept and applications of synthetic data. Sanjay Rajagopalan, Chief Design and Strategy Officer at Vianai Systems, then delves into the capabilities and prospects of large language models, or LLMs. Concluding the episode, Chloe Autio, an AI policy and governance advisor, highlights the latest trends and updates in AI regulation.

Making Machines Smarter with Deep Learning

Andres outlines the distinction between conventional machine learning and deep learning. The key difference, he notes, is that deep learning involves applying numerous layers of transformations to the input data. This progress is made possible by modern advances in computation, enabling the training of both larger and deeper models with access to extensive data sets. Despite these advancements, challenges persist, such as the need for vast amounts of data and significant computational resources to train these models. On a positive note, once a model undergoes initial training, it can be further refined with a smaller, more targeted data set to meet specific requirements. This subsequent stage of adaptation is referred to as fine-tuning or transfer learning.
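
As a minimal sketch of the fine-tuning (transfer learning) step Andres describes, the Python example below adapts a model pre-trained on a large dataset to a new task using a much smaller, targeted dataset. It assumes a recent PyTorch and torchvision install; the 10-class head and the random placeholder batch are illustrative assumptions, not details from the episode.

    # Minimal transfer-learning sketch (assumes torch and torchvision are installed).
    import torch
    import torch.nn as nn
    from torchvision import models

    # Start from weights learned during large-scale pre-training.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

    # Freeze the pre-trained layers so only the new head is updated.
    for param in model.parameters():
        param.requires_grad = False

    # Replace the final classification layer for the new, smaller task
    # (10 classes here, chosen arbitrarily for illustration).
    model.fc = nn.Linear(model.fc.in_features, 10)

    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    # Placeholder batch standing in for the small, targeted dataset.
    images = torch.randn(8, 3, 224, 224)
    labels = torch.randint(0, 10, (8,))

    model.train()
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    print(f"fine-tuning step loss: {loss.item():.4f}")

Because the pre-trained backbone is frozen and only the small new layer is trained, this adaptation step needs far less data and compute than the initial training run.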

Watch the full episode here.

Building Real World Systems with Synthetic Data: Wise?

Selvakumar explains that there are two kinds of synthetic data: data generated with programmatic models and data generated using AI. Omesh then describes the benefits of using generated synthetic data to build new AI models, citing examples such as media production and defect detection in manufacturing. Selvakumar also shares how synthetic data is being used to train autonomous vehicles.
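
As a toy illustration of the first kind of synthetic data Selvakumar mentions (data produced by a program rather than collected from the real world), the Python sketch below simulates hypothetical sensor readings for a manufacturing line and injects labeled synthetic defects, so a defect-detection model could be trained without waiting for real defects to occur. All names and numbers here are illustrative assumptions, not details from the episode.

    import numpy as np

    rng = np.random.default_rng(seed=0)

    def generate_readings(n_samples: int, defect_rate: float = 0.05):
        # Simulate normal sensor readings; defective parts get an out-of-range spike.
        readings = rng.normal(loc=100.0, scale=2.0, size=(n_samples, 4))
        labels = rng.random(n_samples) < defect_rate
        # Inject a synthetic defect signature into the samples labeled as defective.
        readings[labels, 0] += rng.normal(loc=15.0, scale=3.0, size=labels.sum())
        return readings, labels.astype(int)

    X, y = generate_readings(1_000)
    print(f"{y.sum()} synthetic defects out of {len(y)} samples")

The second kind uses AI itself, for example generative models, to produce realistic data such as images or driving scenes, which is closer to the autonomous-driving use case Selvakumar describes.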

Watch the full episode here.

Why and How Enterprises Are Adopting LLMs

Sanjay starts off by describing how large language models (LLMs) work, noting that while they frequently provide accurate responses, they can also assert incorrect information with confidence. This tendency arises because they are designed to predict the most plausible next word in a sequence based on the extensive data they have been trained on; they generate responses that appear correct in light of that training data, regardless of the actual accuracy of the information. At the same time, Sanjay points out the advantages of using LLMs, such as streamlining workflows and contributing to business value, while emphasizing the continual need for human supervision in these scenarios.
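
The toy Python sketch below illustrates the next-word prediction Sanjay describes: the model picks whichever continuation looked most likely in its training text, with no notion of whether the resulting statement is true. Real LLMs use large neural networks trained on enormous corpora; this bigram counter and its tiny corpus are purely illustrative.

    # Toy next-word predictor: picks the continuation seen most often in training.
    from collections import Counter, defaultdict

    corpus = "the sky is blue . the sky is clear . the grass is green .".split()

    # Count how often each word follows each preceding word.
    bigrams = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        bigrams[prev][nxt] += 1

    def predict_next(word: str) -> str:
        # Return the most frequent continuation observed in the training text.
        return bigrams[word].most_common(1)[0][0]

    print(predict_next("is"))  # -> "blue": plausible given the data, not necessarily true

Scaled up, the same mechanism is why an LLM can sound confident while being wrong: it optimizes for what usually comes next, not for factual accuracy, which is why Sanjay stresses human supervision.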

Watch the full episode here.

Emerging U.S. Policies, Legislation, and Executive Orders on AI

Chloe provides an overview of the current landscape of AI regulation, especially in the United States. She observes that most of the proposed bills strive to strike a balance between safeguarding individual civil liberties and promoting technological innovation. Despite the intense global competition in AI, notably with China, Chloe points out that most AI deployments are not large-scale foundation models but simpler solutions tailored for specific applications. Nevertheless, she underscores the importance of directing discussions on AI regulation toward the challenges and potential risks the technology poses.

Watch the full episode here.

Andres Rodriguez, Intel Fellow

Andres has been with Intel for over seven years, starting as a Machine Learning Engineer, moving on to Sr. Principal Engineer, and now serving as an Intel Fellow. In his current role, Andres provides technical leadership across Intel for AI software libraries and hardware products and works with Intel customers to accelerate their AI workloads with Intel’s hardware and software. His educational background includes a Ph.D. in Electrical and Computer Engineering (with an emphasis on Machine Learning) from Carnegie Mellon University, along with master’s and bachelor’s degrees in Electrical Engineering from Brigham Young University.

Selvakumar Panneer, Principal Engineer at Intel Labs

Selvakumar is an expert in synthetic data, with over 25 years of experience in interactive graphics research, 3D graphics and gaming, and GPU driver development. His first stint with Intel was as a Senior Software Engineer from 1999 to 2004. He rejoined Intel in 2009 as a Graphics Staff Engineer, working his way up to Senior Graphics SW Architect and now Principal Engineer in Graphics & AI.

Omesh Tickoo, Principal Engineer at Intel Labs

Omesh Tickoo has spent almost two decades at Intel. He first joined in 2005 as a Senior Research Scientist and Engineering Manager and has been a Principal Engineer since 2015. Before Intel, Omesh received a Ph.D. in ECSE (Electrical, Computer, and Systems Engineering) from Rensselaer Polytechnic Institute. He also volunteers as an instructor with Logical Minds, teaching kids programming fundamentals and logical thinking.

Sanjay Rajagopalan, Chief Design and Strategy Officer at Vianai Systems

Since 2019, Sanjay Rajagopalan has held the position of Chief Design and Strategy Officer at Vianai Systems, a startup specializing in providing an enterprise AI platform and AI solutions. Before this, he served as the SVP and Head of Design and Research at Infosys and has fulfilled several other leadership roles in the tech sector. With a fervor for technology and business, Sanjay is a distinguished leader in the realms of design, innovation, and technology strategy. He earned his Ph.D. from Stanford University and an M.S. from The University of Texas at Austin, with both degrees in Mechanical Engineering.

Chloe Autio, Independent AI Policy and Governance Advisor

Chloe Autio is currently an independent AI policy and governance consultant located in Washington, D.C. She offers her expertise to top AI and tech entities, alongside government and civil society groups, focusing on AI policy and supervision initiatives. She formerly held the position of Director of Policy at The Cantellus Group. Before this, Chloe headed public policy at Intel, serving as Director of Public Policy, and progressing from her earlier roles as Public Policy Manager and Analyst. Chloe earned her B.A. in Economics from the University of California, Berkeley.

Check it out. For more information, previous podcasts, and full versions, visit our homepage.

To read more about cybersecurity topics, visit our blog.

#AI #artificialintelligence #deeplearning #machinelearning #syntheticdata #largelanguagemodels #LLMs

The views and opinions expressed are those of the guests and author and do not necessarily reflect the official policy or position of Intel Corporation.

—–

If you are interested in emerging threats, new technologies, or best practices in cybersecurity, please follow the InTechnology podcast on your favorite podcast platform: Apple Podcasts or Spotify.

Follow our host Camille @morhardt.

Learn more about Intel Cybersecurity and Intel Compute Lifecycle Assurance (CLA).