InTechnology Podcast

The Hottest Cybersecurity Topics of 2023 (183)

In this episode of InTechnology, Camille explores the most popular cybersecurity topics among our listeners in 2023. Camille kicks things off with her and Tom’s conversation with Jorge Myszne, Co-Founder of Kameleon, about Root of Trust and firmware attacks. This is followed by a discussion on confidential computing with guest Mark Russinovich, Technical Fellow and CTO of Microsoft Azure, and episode co-host Anil Rao, a VP and GM at Intel. Then, Camille wraps things up with a look at AI and deep fakes with Ilke Demir, Senior Staff Research Scientist at Intel Labs and a creator of FakeCatcher.

To find the transcription of this podcast, scroll to the bottom of the page.

To find more episodes of InTechnology, visit our homepage. To read more about cybersecurity, sustainability, and technology topics, visit our blog.

The views and opinions expressed are those of the guests and author and do not necessarily reflect the official policy or position of Intel Corporation.

Follow our hosts Tom Garrison @tommgarrison and Camille @morhardt.

Learn more about Intel Cybersecurity and the Intel Compute Lifecycle Assurance (CLA) initiative.

Root of Trust and the Rise of Firmware Attacks (136)

Jorge shares in these clips how prevalent and potentially damaging firmware attacks have become, as well as how to prevent them with Root of Trust practices. Firmware attacks are different from a virus on your operating system: they are embedded in the device itself and load before the operating system, and they can be very expensive to fix. Jorge explains how Root of Trust works to protect against these attacks by verifying that everything is correct and authentic before operating. He says this process starts in the supply chain, so that any tampering can be detected even before the system turns on, with a manifest used to verify the information.

Listen to the full episode here.

Azure CTO Talks Confidential Computing and Confidential AI (171)

Mark defines confidential computing for listeners as the use of hardware to create enclaves or containers where code and data can be protected while in use. This is different from how data has previously been protected when only at rest. He also notes that confidential computing allows for attestation of what’s inside the container. Mark and Anil then touch on some of the latest developments in confidential computing, including confidential computing elements in Azure, confidential Databricks, Intel TDX, and Intel Trust Authority.

Listen to the full episode here.

What That Means with Camille: Deep Fake (135)

Ilke explains how researchers are developing tools to differentiate between deep fakes and real content, including FakeCatcher. This process requires training a powerful network on deep fakes and reals so that it can accurately identify the difference. However, what makes FakeCatcher so unique is that Ilke and her team asked what makes humans unique rather than what makes a video fake. One authenticity signature in humans is PPG signals, which are the computationally visible color changes in human veins due to the heart pumping blood. Another signature is eye gaze. Ilke also gets into the current work on media provenance to know how and when a piece of media was created.

Listen to the full episode here.

Jorge Myszne, Co-Founder of Kameleon

Jorge Myszne co-founded Kameleon in 2018. He earned an M.S. in Electrical, Electronics, and Communications Engineering at Universidad de la República in Uruguay and has gone on to have a successful entrepreneurial career in the semiconductor, communications, and security industries. Jorge also has a history with Intel, having served as a Senior Systems Engineer and Manager in Israel from 2000 to 2007.

Mark Russinovich, Microsoft Technical Fellow and CTO of Microsoft Azure

Mark Russinovich has been the Chief Technology Officer of Microsoft Azure since 2014 and a Technical Fellow at Microsoft since 2006. Prior to Microsoft, he was Co-Founder and Chief Software Architect at Winternals Software, a Research Staff Member at IBM, and a software developer. He holds a Ph.D. and a Bachelor’s degree in Computer Engineering from Carnegie Mellon University and a Master’s Degree in Computer and Systems Engineering from Rensselaer Polytechnic Institute. Mark is also the author of the sci-fi novels Zero Day, Trojan Horse, and Rogue Code.

Anil Rao, VP and GM at Intel

Anil Rao is a Vice President and General Manager at Intel. Previously, he served as Vice President and General Manager of Systems Architecture and Engineering in the Office of the CTO at Intel. Anil co-founded SeaMicro in 2007, and after its 2012 acquisition by AMD, served as VP of Products in AMD’s Data Center Group for three years. Prior to Intel, he consulted for Qualcomm’s CTO Office. Anil holds a bachelor’s degree in electrical and communications engineering from Bangalore University, a master’s degree in computer science from Arizona State University, and an MBA from the University of California, Berkeley. He has additionally co-authored Optical Internetworking Forum (OIF) specifications and holds many patents in networking and data center technologies.

Ilke Demir, Senior Staff Research Scientist at Intel Labs

Ilke Demir is a rising leader in the world of deep fake research. Beyond deep fakes, Ilke’s research focuses on 3D vision, computational geometry, generative models, remote sensing, and deep learning. She earned a Ph.D. in Computer Science from Purdue University in 2016, where she also completed a master’s in the same discipline. Before joining Intel Labs in 2020, Ilke worked with Pixar, Facebook, and the startup DeepScale, which was acquired by Tesla. Ilke’s research at Intel Labs has involved the creation of FakeCatcher with fellow researcher Umur Aybars Ciftci.


Camille Morhardt  00:13

Hi, I’m Camille Morhardt, host of the InTechnology podcast.  Thanks for joining me as we take a listen and look back at some listener favorite topics from this year.  For this episode we’re gonna focus on cybersecurity.  And our listeners were curious this year about several facets of cybersecurity, from firmware attacks to deep fakes.

Our conversation back in January with Jorge Myszne drew a lot of interest.  Jorge is the co-founder of Kameleon, which is a hardware security start-up. Former co-host Tom Garrison and I invited him to the podcast to talk about firmware attacks.  And Jorge told us that in the last three-to-four years, there’s been a five-fold increase in these attacks.

Jorge Myszne  00:53

It’s very difficult, one, to detect it, but two, it doesn’t disappear. It’s persistent, right? You turn off your computer, you turn it on, and it will load the same malicious firmware again.  And if you are a smart attacker, you will disable the update so you can’t change it. So then physically you need to go and remove the flash or do something physically. So it’s very expensive. It’s not just having an antivirus and okay, let’s update the antivirus database. Right? It doesn’t work that way. And if you have it in a server, in a data center with 100,000 servers, you need to find that server and physically go and open it up and do something. That is very, very expensive.

The challenge is that the firmware is loaded before the operating system exists.  So all the security tools that we use, let’s call it, for example, an antivirus, are loaded after the operating system. It’s an application, or kind of an application, right? So it comes after the firmware is loaded. So if the firmware is malicious, it’s very difficult for an application to understand that that firmware is malicious. So that’s why we need the root of trust. We need to stop that when it’s happening. And when it’s happening, there is no software yet, so it has to be done in hardware. We need to have a device in the system that is the root of trust, that is in charge of verifying that the firmware is correct, that it is authentic.

Tom Garrison  02:32

How do companies protect themselves against these attacks?

Jorge Myszne  02:38

Before we get into the system, actually the protection starts in the supply chain. And there are things that we do when the root of trust is added onto the server motherboard to bind it to the memory, to be able to detect if someone changes things. Right? So we try to connect everything in a way that we can detect any changes that happen even before the system is turned on. And then when the system is turned on, we have a manifest, we have a database; we know what should be there, and we need to verify that what we have is there. And eventually we also can interrogate peripherals that are on the system and verify that those peripherals that boot by themselves are also original and authentic.
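
To make the manifest idea concrete, here is a minimal sketch in Python of the kind of check Jorge describes: measure each firmware component and compare it against a trusted manifest before allowing boot. The manifest layout and function names are illustrative assumptions, not Kameleon’s actual design, and a real root of trust performs this in hardware against a signature-protected manifest.

```python
# Illustrative sketch only: manifest-based firmware verification as a
# root of trust might perform it before releasing the system to boot.
# The manifest layout and names are hypothetical, not Kameleon's design.
import hashlib

# Trusted manifest: expected SHA-256 digest per firmware component,
# provisioned in the supply chain and signature-protected in a real design.
MANIFEST = {
    "bootloader.bin": "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
    "bmc_firmware.bin": "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae",
}

def measure(image: bytes) -> str:
    """Hash a firmware image exactly as stored in flash."""
    return hashlib.sha256(image).hexdigest()

def verify_boot_chain(images: dict) -> bool:
    """Allow boot only if every component matches the manifest."""
    for name, expected in MANIFEST.items():
        image = images.get(name)
        if image is None or measure(image) != expected:
            return False  # missing or tampered component: halt boot
    return True

# Example: these payloads hash to the manifest digests above.
print(verify_boot_chain({"bootloader.bin": b"test", "bmc_firmware.bin": b"foo"}))
```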

Camille Morhardt  03:29

Jorge Myszne is co-founder of Kameleon, a hardware security startup.

Our next listener favorite also touched on root of trust, but in this case, trust that your data is secure when you’re using it in the cloud.  This is known as confidential computing.  And we had the perfect expert, Mark Russinovich, Technical Fellow as well as Chief Technology Officer at Microsoft Azure, to explain.

Mark Russinovich  03:58

So confidential computing is the use of hardware to create enclaves, or computational containers, where code and data can be protected while in use. And that’s in contrast to the kinds of protections we’ve had up to now, which are protecting data at rest with encryption, and protecting data on the wire with, for example, TLS.

And there’s another important aspect to this definition of confidential computing, which is not just protecting that code and data from external access and tampering, but also being able to attest to what’s inside of the container.

Camille Morhardt   04:36

Mark Russinovich spoke with me and guest co-host Anil Rao, an Intel Vice President and General Manager, about some of the leaps made this year toward confidential computing in the cloud.

Mark Russinovich  04:47

We are on the verge of really removing the last kind of caveats on confidential computing to make it ubiquitous. And so Microsoft’s goal, with support from Intel, is to aim for a confidential cloud. And that means that our PaaS services will all be confidential and have that extra layer of defense in depth, so that customers can protect their own workloads with very high degrees of policy controls, and assurances that their data is being protected end to end, regardless of what kind of computations they’re going to perform: AI, ML, data analytics, or their own data processing. We’ve got confidential virtual machines that allow us to, for example, have confidential virtual desktops in Azure or confidential Kubernetes nodes in Azure. And we’re moving to flesh out the rest of that environment of confidential containers and confidential PaaS services. And in fact, we’ve announced confidential Databricks in partnership with Databricks.

So the pieces of this foundation are landing into place, and the barriers to adopting confidential computing are falling by the wayside. We’ve got confidential GPUs now, working with you; we’ve got TDX Connect to allow complete protection between a CPU and an accelerated device like a GPU. Things are landing in place, and we’re about to enter the phase where the question won’t be “why can’t I do confidential computing?” It will be “why am I not doing confidential computing on the edge?”

Anil Rao  06:13

So Mark, one of the things that we know is that, with AI getting so pervasive and data kind of flowing in so many different areas, we see that training is going to be done most often in a cloud kind of environment, not to say that inference is not happening there. But then the models get distributed, data gets distributed, and inference might happen at the edge, or even incremental training might happen in something like an edge environment. So given this holistic scenario, what are your thoughts on a SaaS service like Intel Trust Authority? And what role does it play in order to provide assurance of security for those AI models that may float anywhere from cloud to edge to potentially even devices?

Mark Russinovich  07:05

Yeah, well, so a key part of confidential computing, like I mentioned, is attestation, and the verification of the claims that come from the hardware about what’s inside of the enclave. For somebody that is saying, “Can I trust this thing to release data to it? Or do I trust its answers coming back? And basically, do I trust that it is being protected by confidential computing hardware, like TDX, for example, or a confidential GPU?” That attestation report carries a lot of information that’s complex to reason over to come up with a valid “Yes, this is something I trust.” Not only that, but there can be configuration that is part of the attestation report that needs to also be looked at.

And then typically, there’s some policy of “I’ll trust things that are these versions and that have this configuration, and I won’t trust anything else.” And so for something that is going to establish trust in the enclave or the GPU, it actually simplifies things tremendously if you can offload the complexity of that policy evaluation, and of verifying that the hardware claims are actually valid and signed by Intel, for example, to an attestation service that does that complex processing and reasoning and policy evaluation. And so that’s exactly what Intel Trust Authority is: an attestation service at its core, which takes those claims. The relying party, somebody that wants to see if they can trust something, can rely on the Trust Authority to say, “Yeah, this meets the policies that you’ve got, and it is valid confidential computing hardware that is protecting this piece of code, so you can trust releasing secrets to it.”
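
As a rough illustration of the relying-party side Mark describes, here is a short Python sketch of policy evaluation over attestation claims. The claim names and policy shape are invented for this example; this is not the Intel Trust Authority API.

```python
# Hypothetical claim names and policy shape, for illustration only;
# this is not the Intel Trust Authority API.
POLICY = {
    "allowed_tee_types": {"TDX", "SGX"},
    "min_tcb_version": 5,
    "allowed_measurements": {"a1b2c3", "d4e5f6"},  # trusted code identities
}

def evaluate(claims: dict, policy: dict) -> bool:
    """Decide whether to release secrets to the attested environment."""
    if not claims.get("signature_valid"):           # hardware claims must verify
        return False
    if claims.get("tee_type") not in policy["allowed_tee_types"]:
        return False
    if claims.get("tcb_version", 0) < policy["min_tcb_version"]:
        return False
    if claims.get("debug_enabled"):                 # no debug-mode enclaves
        return False
    return claims.get("measurement") in policy["allowed_measurements"]

claims = {"signature_valid": True, "tee_type": "TDX", "tcb_version": 7,
          "debug_enabled": False, "measurement": "a1b2c3"}
print(evaluate(claims, POLICY))  # True: safe to release secrets
```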

Camille Morhardt  09:11

That was Mark Russinovich, Technical Fellow and CTO of Microsoft Azure. Our wide-ranging conversation with Mark also covered data sovereignty, confidential AI, and much more.  To listen to the entire episode, which I recommend, click on the link in the show notes.

In the last of our listener favorites on cybersecurity, we move away from data protection to identity protection.  With the advent of more and more powerful AI tools, there have been growing concerns about fake images and fake videos, commonly known as “deep fakes.”  They look and sound authentic, but they aren’t.

In January, I spoke with Ilke Demir, a Senior Staff Research Scientist at Intel Labs, about advances in detecting deep fakes.  Ilke is actually one of the creators of a tool designed to pick out the real from the fake in video, now called Deep Fake Detection as a Service using FakeCatcher.  We refer to it simply as FakeCatcher during this conversation.

Ilke Demir  09:52

Researchers first introduced methods that are looking at artifacts of fakery in a blind way. So the idea is, if we train a powerful network on enough data of fakes and reals, it will at some point learn to distinguish between fakes and reals, because there are boundary artifacts, symmetry artifacts, et cetera. Well, that is a way, of course, and it’s working for some cases. But mostly those are very open to adversarial attacks, they have a tendency to overfit to the datasets that they are generated on, and they’re not really open for domain transfer, which limits the generalization capability of those detectors.

We twisted that question. Instead of asking what are the artifacts of fakery, or what is wrong with the video, we ask what is unique in humans. Are there any authenticity signatures in humans, as a watermark of being human? Following that line of thought, we have many different detectors that are looking at authenticity signatures.

FakeCatcher is the first one. We are looking at your heart rate, basically. So when your heart pumps blood, it goes to your veins, and the veins change color based on the oxygen they are containing. That color change is of course not visible to us humans. We don’t look at the video and say, “Oh yeah, she’s changing color.” We don’t do that. But computationally it is visible, and those are called photoplethysmography, or PPG, signals. So we take those PPG signals from many places on your face, create PPG maps from their temporal, spectral, and spatial correlations, and then train a neural network on top of the PPG maps to enable deep fake detection.
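
As a very rough sketch of the shape of this idea, the following Python builds a toy PPG map from per-region color signals. FakeCatcher’s actual features are richer; the function and array layout here are assumptions for illustration.

```python
# Toy PPG-map construction, illustrative only; FakeCatcher's real
# pipeline uses richer temporal, spectral, and spatial correlations.
import numpy as np

def ppg_map(region_signals: np.ndarray) -> np.ndarray:
    """region_signals: (n_regions, n_frames) mean green-channel values,
    one row per tracked facial region. Returns an (n_regions, n_freqs)
    map of per-region power spectra for a classifier to consume."""
    # Remove slow illumination drift per region.
    detrended = region_signals - region_signals.mean(axis=1, keepdims=True)
    # The heartbeat shows up as a peak in the frequency domain.
    spectra = np.abs(np.fft.rfft(detrended, axis=1))
    # Normalize each region so maps are comparable across videos.
    return spectra / (spectra.max(axis=1, keepdims=True) + 1e-8)

# Example: 32 facial regions tracked over 300 frames.
signals = np.random.rand(32, 300)
print(ppg_map(signals).shape)  # maps like this feed a neural network
```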

We also have other approaches, like eye gaze-based detection. Normally for humans, when we look at a point, our gazes converge on that point; but for deep fakes, it’s like googly eyes. It’s of course not as visible, but the gazes are less correlated, et cetera. So we collect the size, area, color, gaze direction, 3D gaze points, all that information from the eyes and gazes, and train a deep neural network on those gaze signatures to detect whether a video is fake or not.
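
In the same spirit, here is a tiny sketch of one possible gaze feature: how consistently the two eyes’ 3D gaze directions agree across a clip. The feature and its name are illustrative assumptions; the actual detector trains a deep network over many gaze signatures, not a single score.

```python
# Illustrative gaze-consistency feature; the real detector combines
# many gaze signatures (size, area, color, 3D gaze points, ...).
import numpy as np

def gaze_consistency(left: np.ndarray, right: np.ndarray) -> float:
    """left, right: (n_frames, 3) unit gaze-direction vectors per eye.
    Returns mean cosine similarity across frames; weakly correlated
    ('googly') gazes score lower than genuinely converging ones."""
    return float(np.sum(left * right, axis=1).mean())

# Example: identical gaze tracks score 1.0.
frames = np.random.randn(100, 3)
frames /= np.linalg.norm(frames, axis=1, keepdims=True)
print(gaze_consistency(frames, frames))
```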

Camille Morhardt  11:55

So will there ultimately be some kind of movement toward establishing provenance when videos are made? By provenance, I mean that the origin can somehow be proved or tested as the true source.

Ilke Demir  12:10

Exactly. You’re just on that point; I was about to say that. So of course there’s detection in the short term, but for the long term there’s media provenance research going on. Media provenance is knowing how a piece of media was created, who created it, why it was created, was it created with consent? Then throughout the life of the media: were there any edits? Who made the edits? Were the edits allowed? All of the life of the media and what happened to it will be stored in that provenance information. And because of that provenance information, we will be able to believe what we see, saying, “Okay, we know the source, we know the edit history, et cetera, so this is a legit piece of media, whether it is original or fake.” Because there are so many creative people, like visual artists and studios, who have been creating synthetic media and synthetic data throughout their lives, we also want to enable that.
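
To sketch what such provenance information could look like, here is a toy hash-chained edit history in Python. Real provenance systems add cryptographic signatures and identity binding; the record fields and function names here are invented for illustration.

```python
# Toy provenance chain: an append-only, hash-chained history of how a
# piece of media was created and edited. Illustrative only.
import hashlib, json, time

def add_record(history: list, actor: str, action: str) -> list:
    """Append a record that commits to the entire prior history."""
    prev = history[-1]["digest"] if history else "genesis"
    record = {"actor": actor, "action": action,
              "time": time.time(), "prev": prev}
    payload = json.dumps(record, sort_keys=True).encode()
    record["digest"] = hashlib.sha256(payload).hexdigest()
    return history + [record]

def verify(history: list) -> bool:
    """Any tampering with an earlier record breaks every later digest."""
    prev = "genesis"
    for rec in history:
        body = {k: v for k, v in rec.items() if k != "digest"}
        if body["prev"] != prev:
            return False
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != rec["digest"]:
            return False
        prev = rec["digest"]
    return True

h = add_record([], "studio", "created with consent")
h = add_record(h, "editor", "color grade")
print(verify(h))  # True while the chain is intact
```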

Camille Morhardt  13:06

Ilke Demir, Senior Staff Research Scientist at Intel Labs, talking with me back in January about detecting deep fakes.  If you want to hear more of the episodes we highlighted today, you’ll find links to each in the show notes for this episode.  And as always, you can find the InTechnology podcast on your favorite listening platform or at our website intechnology.intel.com.

In a few weeks we’ll have the last of our 2023 listener favorites episodes for you, and that one will go in-depth on AI topics from this year.  Think generative AI, synthetic data, large language models, and more.  Don’t miss it!
