
Can You Spot a Deep Fake? Detection, Generation, and Authentication | Intel Technology


Could you confidently tell the difference between a deep fake and a real video?

In this What That Means video, Camille talks with Ilke Demir, Senior Staff Researcher at Intel Labs. They get into deep fake detection methods, responsible deep fake creation, and new content authentication systems.

Detecting Deep Fakes

Deep fake generation and deep fake detection have both advanced rapidly since their origins with GANs (generative adversarial networks) in 2014. Ilke shares how her deep fake detection system FakeCatcher uses PPG (photoplethysmography) signals, the subtle color changes in skin caused by blood flow with each heartbeat, to determine whether a video shows a real person or a deep fake. This isn’t the only method out there, however. Another example Ilke provides is eye gaze-based detection, which analyzes the differences between natural human eye movement and the eye movement of synthetic humans in deep fake content.
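To make the PPG idea concrete, here is a minimal sketch of how a detector might extract a heart-rate-like signal from a sequence of face crops. This is an illustration of the general photoplethysmography principle, not FakeCatcher's actual pipeline; the function name and the heart-rate band limits are assumptions for the example.

```python
import numpy as np

def ppg_dominant_frequency(frames, fps):
    """Estimate the dominant heart-rate-band frequency from face crops.

    frames: uint8 RGB array of shape (T, H, W, 3), one face crop per frame.
    fps: frame rate of the video.
    Returns the strongest frequency (Hz) in a plausible heart-rate band.
    """
    # Spatially average the green channel, which is most sensitive to
    # blood-volume changes in skin.
    signal = frames[:, :, :, 1].reshape(len(frames), -1).mean(axis=1)
    signal = signal - signal.mean()  # remove the DC component

    # Move to the frequency domain and look for a heartbeat-like peak.
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)

    # Restrict to roughly 42-240 bpm; synthetic faces tend to lack a
    # coherent peak in this band.
    band = (freqs >= 0.7) & (freqs <= 4.0)
    return freqs[band][np.argmax(spectrum[band])]
```

A real detector would go further, comparing the spatial and temporal consistency of these signals across face regions rather than relying on a single global peak.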

Responsibly Creating Deep Fakes

Deep fakes are notorious for spreading misinformation, but they can also be used for creative and well-intentioned purposes. Ilke shares how it’s possible to responsibly create deep fakes, with a story of a successful deep fake video created for an internal Intel video project. The main differences between responsibly and maliciously generated deep fakes are consent and intent.

How to Authenticate Digital Content

A major question surrounding deep fakes and digital content is how we can know with certainty what’s real or fake without relying on advanced deep fake detection programs. Ilke outlines a few approaches to this problem, ranging from tracking at the hardware and software level to crypto-based and blockchain systems.

These solutions are founded on the idea of media provenance: tracking a piece of media’s origin and creation process. C2PA (Coalition for Content Provenance and Authenticity) is a group actively working to create standards and policies for this purpose. With the concern of deep fakes being used to spread misinformation in emergency or political situations, it’s more important now than ever to streamline these types of authentication.
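The core of provenance-based authentication can be sketched in a few lines: hash the media at creation time, bind the hash and creator information into a manifest, and sign the manifest so any later edit is detectable. This toy example uses an HMAC with a shared secret for brevity; real C2PA manifests use standardized assertions and certificate-based signatures, and all names here are illustrative.

```python
import hashlib
import hmac
import json

def make_manifest(media_bytes, creator, secret_key):
    """Build a simplified provenance manifest for a piece of media."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    manifest = {"creator": creator, "sha256": digest}
    # Sign the manifest contents so neither the hash nor the creator
    # claim can be altered without detection.
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(secret_key, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(media_bytes, manifest, secret_key):
    """Return True only if the media is unmodified and the manifest is authentic."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    sig_ok = hmac.compare_digest(
        manifest["signature"],
        hmac.new(secret_key, payload, hashlib.sha256).hexdigest(),
    )
    hash_ok = hashlib.sha256(media_bytes).hexdigest() == manifest["sha256"]
    return sig_ok and hash_ok
```

Any tampering, with either the media bytes or the manifest itself, makes verification fail, which is the property provenance systems depend on.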

Ilke Demir, Intel Labs Senior Staff Researcher


A leader in deep fake research, Ilke Demir also works on 3D vision, computational geometry, generative models, remote sensing, and deep learning. She holds a Ph.D. in Computer Science from Purdue University. Prior to Intel Labs, Ilke worked with Pixar Animation Studios, Facebook, and the Tesla-acquired startup DeepScale. She developed FakeCatcher alongside Umur Aybars Ciftci.

Check it out. For more information, previous podcasts, and full versions, visit our homepage.

To read more about cybersecurity topics, visit our blog.

#deepfake #fakecatcher #mediaprovenance

The views and opinions expressed are those of the guests and author and do not necessarily reflect the official policy or position of Intel Corporation.

-----

If you are interested in emerging threats, new technologies, or best practices in cybersecurity, please follow the InTechnology podcast on your favorite podcast platform: Apple Podcasts and Spotify.

Follow our hosts Tom Garrison @tommgarrison and Camille @morhardt.

Learn more about Intel Cybersecurity and Intel Compute Life Cycle (CLA).