In this episode of InTechnology, Camille gets into confidential computing and Intel® Trust Authority with Mark Russinovich, Technical Fellow and CTO of Microsoft Azure, and Anil Rao, VP and GM of Systems Architecture and Engineering in the Office of the CTO at Intel. The conversation covers the definition, uses, and benefits of confidential computing. They also explore topics in artificial intelligence like confidential AI, the democratization of AI, and the potential future risks of AI.
To find the transcription of this podcast, scroll to the bottom of the page.
The views and opinions expressed are those of the guests and author and do not necessarily reflect the official policy or position of Intel Corporation.
Making Confidential Computing Just Computing
Mark and Anil give Camille their definitions of confidential computing, which essentially boil down to using hardware to create enclaves where code and data are protected while in use, and where it is possible to attest to what is running inside the enclave. From everyday workloads to more sophisticated ones like large AI models, confidential computing has many applications for protecting IP and data processing. As processing moves from the cloud to the edge with hybrid AI, confidential computing adds protection to edge environments that may not be as physically secure as large data centers. That’s where SaaS offerings like Intel® Trust Authority come in: a neutral third-party attestation service that takes complex attestation reports and simplifies the verification process, as in its use with Microsoft Azure. Mark and Anil also discuss further benefits of confidential computing, such as its applications to data sovereignty and code transparency.
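The attestation flow described above can be sketched at a high level: hardware measures the code loaded into an enclave, the enclave produces signed evidence, and a third-party service verifies that evidence so the relying party doesn’t have to parse raw attestation reports itself. This is a minimal conceptual sketch only; none of the function names below correspond to a real Intel® Trust Authority or SGX/TDX API, and the "signature" is a stand-in for real hardware-rooted cryptography.

```python
import hashlib

# Conceptual sketch of remote attestation -- all names are hypothetical,
# not a real Intel Trust Authority API.

def measure_enclave(code: bytes) -> str:
    """Hardware records a cryptographic measurement of the loaded code."""
    return hashlib.sha256(code).hexdigest()

def generate_evidence(measurement: str) -> dict:
    """The enclave emits signed evidence (a 'quote') containing its measurement.
    A real quote is signed by a hardware-protected key; this string is a stand-in."""
    return {"measurement": measurement, "signature": "hw-signed:" + measurement}

def verify_evidence(evidence: dict, expected_measurement: str) -> bool:
    """A third-party attestation service checks the signature and compares the
    measurement to a known-good value, simplifying things for the relying party."""
    signed_ok = evidence["signature"] == "hw-signed:" + evidence["measurement"]
    return signed_ok and evidence["measurement"] == expected_measurement

# A relying party releases secrets to the enclave only if verification passes.
workload = b"model-inference-code"
expected = hashlib.sha256(workload).hexdigest()
evidence = generate_evidence(measure_enclave(workload))
print(verify_evidence(evidence, expected))  # True
```

The design point the sketch illustrates is the separation of roles: the hardware produces evidence, an independent service judges it, and the workload owner only has to trust the verdict.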
The Future of Artificial Intelligence: Confidential AI and the Democratization of AI
With confidential computing comes confidential AI, and both are soon expected to be everywhere. Anil defines confidential AI as any model running inside a trusted and encrypted execution environment. At the same time, the democratization of AI is making both the models themselves and the large-scale infrastructure needed to train them more accessible. Mark points to models such as OpenAI’s GPT-4 and the open-source Llama 2 that Microsoft is making available to customers. Because so much of AI is relatively new, there is a rising need for regulation to ensure it is handled responsibly, such as safety controls and removing bias from models. The key will be putting basic regulations in place without stifling innovation.
Mark Russinovich, Microsoft Technical Fellow and CTO of Microsoft Azure
Mark Russinovich has been the Chief Technology Officer of Microsoft Azure since 2014 and a Technical Fellow at Microsoft since 2006. Prior to Microsoft, he was Co-Founder and Chief Software Architect at Winternals Software, a Research Staff Member at IBM, and a software developer. He holds a Ph.D. and a bachelor’s degree in computer engineering from Carnegie Mellon University and a master’s degree in computer and systems engineering from Rensselaer Polytechnic Institute. Mark is also the author of the sci-fi novels Zero Day, Trojan Horse, and Rogue Code.
Anil Rao, VP and GM of Systems Architecture and Engineering, Office of the CTO at Intel
Anil Rao has been Vice President and General Manager of Systems Architecture and Engineering in the Office of the CTO at Intel since 2016. Anil co-founded SeaMicro in 2007, and after its 2012 acquisition by AMD, served as VP of Products in AMD’s Data Center Group for three years. Prior to Intel, he consulted for Qualcomm’s CTO Office. Anil holds a bachelor’s degree in electrical and communications engineering from Bangalore University, a master’s degree in computer science from Arizona State University, and an MBA from the University of California, Berkeley. He has additionally co-authored Optical Internetworking Forum (OIF) specifications and holds many patents in networking and data center technologies.