InTechnology Podcast

#67 – Thanksgiving Special: From AI to Data Security, Some Essential Lessons

In the spirit of Thanksgiving, Tom and Camille highlight the work their guests are doing that they are thankful for — including ethical considerations of AI, why the race for AI is one of the most important for humankind, and how academia and the cyber security industry can work together.

 

The conversation covers:

  • Leading thoughts on AI
  • Ethical considerations of AI
  • Cyber security and digital manufacturing technologies
  • Why the relationship between academics and the cyber security industry matters

…and more. Don’t miss it!

 

The views and opinions expressed are those of the guests and author and do not necessarily reflect the official policy or position of Intel Corporation.

 

Here are some key takeaways:

  • AI is one of the most important races for humankind right now; coming in second is not an option.
  • Human ethics must be taken into consideration when developing AI. AI is built on systems and structures in society, some of which have racist foundations, so we need to be careful that AI doesn’t perpetuate inequality.
  • Digital manufacturers are working on the ability to detect data hacks as they transmit data all over the world.
  • Academics and the data security industry need to make sure they are engaging with each other to understand future trends.

 

Some interesting quotes from today’s episode:

“When it comes to certain technologies like Artificial Intelligence, coming in second place can’t happen. You know, there’s such a first mover’s advantage. This is one of the reasons why Vladimir Putin said ‘whoever masters AI’s gonna master the world.’ So that race, yes, brings out the best in us, but in some cases, if we don’t win, it’s going to have an impact on our economy.” Will Hurd, former Congressman and undercover CIA officer

 

“When we ask or think about, you know, who is this responsible to, I think the first question is really where is the greatest impact going to be felt? And to figure that out, I always start by asking or thinking about, you know, in which context will this technology be used or deployed? And who are the communities and users who might be impacted?” Chloe Autio, Intel alumna and advisor and senior manager at the Cantellus Group

 

“The data security issues, the ability to sort of get in there and hack any of that and modify any of that... you just sort of stop and step back and think about that and you’re like, ‘Holy cow! There’s so many places this could go wrong now. Right? And how do I secure all of this?’” Tim Simpson, Paul Morrow Professor of Engineering Design and Manufacturing at Penn State

 

Links to full episodes with each guest:

Will Hurd: A Former CIA Officer and Congressman’s Thoughts on Cybersecurity, AI and More (Part 1)

Chloe Autio: What That Means with Camille: Responsible AI 

Tim Simpson: Ensuring Security in 3D Printing and Additive Manufacturing 

Jason Fung: What That Means with Camille: Offensive Security Research, aka Hacking


[00:00:01] Announcer: You’re listening to Cyber Security Inside, a podcast focused on getting you up to speed on issues in cyber security, with engaging experts and stimulating conversations. To learn more, visit us at intel.com/cybersecurityinside.
[00:00:19] Teaser clips
[00:00:41] Tom Garrison: Thanks for joining us for this special edition of Cyber Security Inside. I’m Tom Garrison here with my co-host Camille Morhardt. And Camille, since this episode is coming out close to Thanksgiving time–at least here in the US–we thought it might be a nice opportunity to listen back to some of the conversations we’ve had this year on the podcast and highlight the work some of our guests are doing that we’re thankful for.

[00:01:04] Camille Morhardt: I like the idea. I was just thinking about how this podcast really started just a little over a year ago. And I’ve learned a tremendous amount listening to people, people really from, like, all different “comes froms,” you know–everything from politicians to analysts, to architects, to product designers, to CISOs, privacy experts. Like we’ve heard it all. So I love the kind of multiple perspectives that we’ve been able to listen to over the year.

[00:01:37] Tom Garrison: You know, you and I have talked about that actually. We’re in this really interesting position of listening to folks, just like analysts do in the industry. And so, just like analysts go out and talk to customers and then kind of form their opinion around security, we’ve been able to do the same thing. And like you pointed out, we’ll get some really interesting topics. And in fact, some of the topics, I mean, I probably shouldn’t admit this, but sometimes when I hear who we’re going to talk to that week, I think, “oh boy, you know, I wonder how interesting this is going to be?” I have been wildly surprised at some of our topics. Like they just catch me completely off guard.

[00:02:17] Camille Morhardt: Are you saying sometimes security can seem boring, but it’s actually not? That if you’re talking to the right people, it’s always interesting?

[00:02:23] Tom Garrison: That’s right. At first glance, you think, “oh my God, this is going to be really, really boring. There’s nothing interesting here.” And then you start talking about it and you’re like, “Well, that is really cool. I had never thought of it that way.” And I think that’s the beauty of this podcast. What we dreamed up when we first started was that we wanted to take cyber security topics and bring them out in hopefully interesting ways. And I, for one, even though I wasn’t the intended audience, feel like a year into it, I’m a whole lot smarter on security, and I hope the listeners are as well.

[00:02:58] Camille Morhardt: And one of the things that I love is we’ll have equally qualified and erudite guests who will actually take really opposing perspectives on the same kind of topic. So, for example, we talked with Alex Ionescu, formerly with CrowdStrike, about Artificial Intelligence, and his take is, well, that’s a two-word phrase: if you hear somebody trying to pitch, you know, security management or endpoint management to you, and they use the words “artificial intelligence,” run screaming, because for the most part, people are over-hyping it. They don’t really know what they’re doing. They’re trying to sell you something that has not actually been proven out at this point. He does give examples of where it can be and where it does make sense.
But that’s in stark contrast with, uh, former Congressman Will Hurd, who talks about how this is one of the most important races in humankind right now, and how it’s critical for American national security to quote-unquote win in Artificial Intelligence.

[00:04:04] Tom Garrison: Yeah. I think that if I could try to square the two off, what Alex was saying was anybody who’s claiming that AI is here now is overstating things; they’re selling something that’s really not ready for prime time yet. And what Will, I think, is trying to say is not that it’s here yet, but boy, this is a race, and it’s a race that we can’t afford to lose. I believe that, actually, to be true, that it’s super important. And, you know, you and I have been talking about where AI is going, um, which is kind of the billion dollar question.
And really, I think there are so many exciting, positive examples of where AI can help. But on the negative side, AI for bad, you know, things that people could use AI for in malicious ways, the pain there could be extraordinarily high. People can train AI to do things that humans would really struggle to do, and at a scale that humans could never match. And so protecting yourself against an AI attacker requires a level of skill and capability that we don’t have today as an industry. And I think it’s an interesting way to think about it, that it is a race. It’s a race we can’t afford to lose. Um, and that’s certainly Will’s view.

[00:05:34] Will Hurd: I would say in this race with China on global leadership, the US can win because we can out-innovate. Um, so yeah, uh, a race is a good thing, but when it comes to certain technologies like Artificial Intelligence, coming in second place can’t happen. You know, there’s such a first mover’s advantage. This is one of the reasons why Vladimir Putin said “whoever masters AI’s gonna master the world.” So that race, yes, it brings out the best in us, but in some cases, if we don’t win, it’s going to have an impact on our economy. It’s going to have an impact on the dollar. Our savings are not going to be worth as much when we actually retire, or we’re not going to be able to purchase as many goods and services. You know, it’s going to have an impact on our way of life.
And then all the things that lead to that: why does the Chinese government care about 5G? Because 5G is going to really empower widespread use of Artificial Intelligence. Why are they trying to double down on semiconductor manufacturing? Because the compute power that’s necessary in order to achieve AI requires a whole lot of, uh, semiconductors. Um, so these issues are interrelated in this broader race.

[00:06:46] Camille Morhardt: So what areas of Artificial Intelligence do you think are going to benefit or have the biggest effect on the American people?

[00:06:54] Will Hurd: You know, the real answer is, I don’t know. Right? And that’s what makes this exciting. Uh, we’re already seeing artificial intelligence being used in a medical environment to diagnose cancers that the human eye hasn’t been able to. You can look at the iris and detect a certain kind of cancer, and you catch it, you know, months, if not years, in advance, uh, which prolongs life. We’re seeing it being used in agriculture, so you’re able to, uh, use less land, use less water, but you’re increasing your yields, right? You’re saving energy. It’s pretty fantastic. There are a lot of upsides. Right. Um, but then there are also going to be downsides. Like with any kind of technology, we have to make sure of the ethics and ethical use of these tools. It starts with making sure AI follows the law, and so the issues around that are something you have to deal with. Facial recognition is always a topic that people have concerns with. But we need to take advantage of technology before it takes advantage of us, and the only way we’re going to be able to do that is if we have the public and the private sector working together on these technologies and recognizing we’re in this race, because the Chinese government is pushing all of their factors of production in one direction in order to get there before we do.

[00:08:15] Camille Morhardt: That was former Congressman Will Hurd, a guest on one of our episodes earlier this year. And, you know, Tom, that debate in AI reminded me of a What That Means conversation I had on Responsible AI. That’s the idea that emphasizes the need to think through the choices of what data we feed into an AI algorithm, because we may inadvertently be using biased data and, as a result, getting biased conclusions.

[00:08:45] Tom Garrison: That’s a very good point. The use of AI is so broad. It really touches all of our lives, or it could touch all aspects of our lives. But that said, AI is only as good as the data used to train it. And so when you think about machine learning, it’s important to, first of all, look at the algorithms that are used, to make sure that the algorithms are holistic in nature, but also at the data that we’re using to feed the algorithm. Think of it like an animal, right? You’ve got to feed it. Um, and you want to make sure that you give it a balanced diet of data so that it comes to a balanced conclusion.
You know, if you’re not careful, the AI will just come back with an answer that is based on the data you gave it, which could be biased in any number of ways. Right? It could be biased against females, as an example, for however you’re using it, or against minorities; or it could be biased in terms of the numbers that it uses. It really doesn’t know what it’s doing. It just knows the algorithms, and the algorithm’s output will be heavily predicated on the data you give it.
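A quick way to make Tom’s “balanced diet of data” point concrete is to audit a training set before any model is fit: check how well each group is represented, and whether the labels themselves already encode a skewed outcome. Here is a minimal sketch in Python; the dataset, column names, and values are hypothetical, invented purely for illustration:

```python
import pandas as pd

# Hypothetical hiring data, made up for illustration only.
df = pd.DataFrame({
    "gender": ["F", "M", "M", "M", "M", "F", "M", "M"],
    "hired":  [0,   1,   1,   0,   1,   0,   1,   1],
})

# Representation: is any group badly under-sampled in the "diet"?
print(df["gender"].value_counts(normalize=True))

# Label balance per group: a large gap here will be faithfully
# learned and reproduced by whatever model is trained on this data.
print(df.groupby("gender")["hired"].mean())
```

Checks like these don’t make a model ethical on their own, but they can surface the kind of inadvertent bias Tom describes before it is baked into a trained system.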

[00:10:08] Camille Morhardt: It will optimize, but, uh, it may not take into account ethical considerations as it does so.

[00:10:14] Tom Garrison: Sure. It, it doesn’t know ethics from an elephant. I mean, it just does what it’s told. And that’s where humans come into play. Right? Humans need to think about this in advance and make sure that they haven’t inadvertently created room for bias. Right. That’s really where the human beings come in.
[00:10:35] Camille Morhardt: And that’s exactly what Chloe Autio talked about on the show. She’s an Intel alumna, and she’s now an advisor and senior manager at the Cantellus Group.

[00:10:46] Chloe Autio: If we’re only thinking about one group of people who will feel the impacts of the technology, I think it really does a disservice to everyone else who may feel an impact along the AI lifecycle. So when we ask or think about, you know, who is this responsible to, I think the first question is really where is the greatest impact going to be felt? And to figure that out, I always start by asking or thinking about, you know, in which context will this technology be used or deployed? And who are the communities and users who might be impacted? So I start there. But at the same time, if more people are involved in the design and development of these systems, thinking about the context in which they’re deployed and who might be impacted, I think more and more pathways or routes of impact can come to light. And, um, it’s really important, I think, to explore all of those.

[00:11:38] Camille Morhardt: One question is: AI is doing a lot of work kind of organizing and categorizing data for us right now, and its big migration is going to be to really interpreting that data on our behalf and providing us with some kind of view of it that maybe we hadn’t thought of before. What do you think we’re going to need to think about, kind of foremost, in the responsibility space?

[00:12:08] Chloe Autio: All of the assumptions that we are relying on, and that this AI will be relying on, the AI making these decisions, all of these assumptions are built upon systems and structures in society that are totally not technology-related whatsoever: institutions, power, systems that have sort of racist or white supremacist structures or an underlying structure. And making sure that we’re thinking about those structures and those systems as we are applying AI, or allowing it to categorize and, you know, interpret data, will be so important so as not to preserve some of that historical, institutional, structural bias that we’ve seen, you know, do things like perpetuate inequity and create imbalances in opportunity for certain people and communities throughout the world. So I think, to answer your question more succinctly, when we’re thinking about responsibility in this space as we move forward, we really need to think about and understand the past, and how to make interventions and corrections to some of the structures and systems that have foundations that we, as a society, aren’t very proud of. Right?
So, um, I think that’s the most important thing. And then I would say the second thing, as we’re enabling AI to make these decisions, is having a way to make sure that all of these diverse perspectives and knowledge streams are included. So, you know, researchers, people who work in civil society, you know, think tanks, policy professionals, business folks, data scientists, ethicists, ethnographers, right? Social scientists. Making sure that people who understand these structures and systems of the past are there, and part of the decision-making, part of the teams informing this AI and guiding it as it’s making these decisions as we move forward.

[00:14:07] Tom Garrison: That was Chloe Autio from a conversation earlier this year about Responsible AI.
Camille, as we continue on this special look back on the year of Cyber Security Inside, I gotta say, I’m thankful that we got a chance to talk with one of our guests, Tim Simpson.

[00:14:32] Camille Morhardt: Yeah. Tim Simpson is the Paul Morrow Professor of Engineering Design and Manufacturing at Penn State.

[00:14:39] Tom Garrison: I personally have always been intrigued with 3D printing, but for me, 3D printing was basically printing plastics, uh, where you could, you know, do a bobble head doll or something like that. And when we met with Tim, he completely changed my perspective of what 3D printing really was all about and then, almost like a gift, he really opened my eyes to the security challenges associated with 3D printing–something that I would have never thought about in this context.

[00:15:15] Camille Morhardt: And he has so much energy when he talks about it; it’s really fun to listen to him. You know, he is specifically doing research in additive manufacturing, but we talked about different kinds of digital manufacturing with him, including 3D printing. We also actually talked briefly about distributed manufacturing. But yeah, he was talking about printing helicopter parts and all kinds of things. Right. Real uses, not just bobbleheads.

[00:15:44] Tom Garrison: Yeah, to build, on the spot, really high-tech devices. And in fact, not just high-tech devices like helicopter parts, but devices that you cannot make any other way. And when you think about that, you realize very quickly that the recipes and the techniques that you use to build those parts are really important: you’ve got to make sure they haven’t been tampered with, and your own livelihood depends on keeping them safe so other people don’t copy them. I think there’s a lot there that Tim covers. So let’s just listen to Tim in his own words.

[00:16:22] Tim Simpson: Additive is the first of a wave of sort of digital manufacturing technologies, and that’s everything from the part that we’re designing to the tooling that we’re using, to the process plan, post-processing, and inspection. Now that everything is digital, it creates all sorts of new challenges. How do you make sure all of those steps are secure? Right? How would you know if somebody hacked your part file? If you were transmitting instructions to another distribution center around the world to make your part, how would you know if those got hacked? Uh, it’s challenging. I think it’s quite an open issue, in fact.
And we’ve seen some studies with consumer desktop 3D printers, studies where you could take a cell phone and just record the motors sort of whirring and whizzing around and, from that, you know, recreate what it printed with about 80 to 90% accuracy. So now talk about espionage and counterfeiting of a part: I could just be sitting there holding my phone next to a printer, getting the beeps and boops and whirrs, and turn around and do that.
In our lab, for instance, our machines are not on the network. You know, you’re transferring a file, but then when the machine is running, you cannot get to it or access it. Right. But take a machine shop or a job shop: if they’re plugging a 3D printer into the Internet now, you’ve got a new entry point for cyber attacks.
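Tim’s question, how would a remote site know that a transmitted part file got hacked, is at heart a file-integrity problem, and one textbook answer is an authentication tag that the sender computes and the print site re-checks before running a build. Below is a minimal sketch of that idea in Python; the shared key, file names, and workflow are hypothetical assumptions for illustration, not anything Tim describes:

```python
import hashlib
import hmac

# Hypothetical shared secret, provisioned out of band between the
# design owner and the remote print shop.
SECRET_KEY = b"replace-with-a-provisioned-secret"

def tag_part_file(path: str) -> str:
    """Compute an HMAC-SHA256 tag over a build file (G-code, STL, etc.)."""
    with open(path, "rb") as f:
        return hmac.new(SECRET_KEY, f.read(), hashlib.sha256).hexdigest()

def verify_part_file(path: str, expected_tag: str) -> bool:
    """Recompute the tag at the receiving site; any modified byte fails."""
    return hmac.compare_digest(tag_part_file(path), expected_tag)

# Hypothetical usage: the sender ships (file, tag) to the print site,
# and the printer refuses to run the job on a mismatch.
# tag = tag_part_file("bracket.gcode")
# assert verify_part_file("bracket.gcode", tag), "file modified in transit"
```

This only detects tampering between two parties who already share a key; a cross-organization supply chain would lean on digital signatures instead, the same idea with asymmetric keys.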

[00:17:48] Tom Garrison: Are there specific, you know, security approaches that we’re doing for additive in this industry? Is there something that’s more kind of unique or does this fit more into data security in general?

[00:18:02] Tim Simpson: I think there are certainly elements of data security in general there, but I think people are now just realizing additive has sort of grown up on its own island, so to speak. We’re recognizing, “Hey, we’ve got all these issues,” and we’re just now at the point of looking around and saying, “Hey, what can I learn from the data security experts? From the cyber security experts? How do I use blockchain to do this? How do I secure and encrypt my files? How do I do that for my part? How do I do it for my process? And how do I do it for all the data that’s coming off of that, that I want to use for quality control or eventually qualification and certification if I’ve got a medical device or an aerospace component?”
We tend to think about it from an engineering standpoint, you know, materials and process. But all of that now has data security issues, the ability to sort of get in there and hack any of that and modify any of that. You just sort of stop and step back and think about that and you’re like, “Holy cow! There’s so many places this could go wrong now. Right? And how do I secure all of this?”
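On Tim’s “how do I secure and encrypt my files?” question, one common building block is authenticated symmetric encryption of the build files at rest and in transit. Here is a minimal sketch using the third-party cryptography package; the file names are hypothetical, and a real deployment would add key management (a KMS or HSM), which this sketch deliberately omits:

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Hypothetical: in practice the key would live in a KMS or HSM,
# never alongside the part files it protects.
key = Fernet.generate_key()
f = Fernet(key)

# Encrypt a build file (e.g. sliced G-code) before it leaves the lab.
with open("bracket.gcode", "rb") as src:
    token = f.encrypt(src.read())
with open("bracket.gcode.enc", "wb") as dst:
    dst.write(token)

# At the print site: decryption fails loudly if the bytes were altered,
# because Fernet tokens are authenticated as well as encrypted.
with open("bracket.gcode.enc", "rb") as src:
    plaintext = f.decrypt(src.read())
```

Because Fernet authenticates as well as encrypts, this covers both the confidentiality worry (protecting the recipe) and the tampering worry Tim raises, for the file at rest.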

[00:19:11] Tom Garrison: I love that. “Holy cow” is right. Again, that was Tim Simpson, and 3D printing is an area we’ll keep an eye on, because as additive manufacturing continues to find more and more uses, securing those designs, the process, and everything else involved is going to become even more important.

Camille Morhardt: Definitely.

Tom Garrison: You know, our conversation with Tim also bridges over into a lot of the cyber security work we do in general, and that’s really about how we engage with researchers at universities. We’ve talked in several different episodes about Bug Bounties, which is how we incentivize researchers to do research on our platforms, in a kind of crowd-sourced security model. In these cases, people, often in academia, let us know when they find vulnerabilities in products, and then they work with us to make improvements in those products. And then together we go out to the industry, talk about those vulnerabilities, and make sure that we educate people on updating their systems.

[00:20:29] Camille Morhardt: Yeah. And I really did enjoy the conversation that we had with Jason Fung. He’s Director of Offensive Security Research at Intel, as well as Academic Research Engagement. So basically we’re looking at Offensive Security Research: “offensive security” being the hackers, and “research” being what they’re looking at and going out and figuring out. And Jason has been, like you said, engaging with academics in all kinds of innovative ways. I mean, you talked about the bug bounty program, but you know, we also do capture the flag, so really interesting stuff that came together when we, as industry, sit down with academia, or run into each other and realize we’re not talking enough. For either academics or industry to really understand the future trends, where things are moving, the kind of technology that we’re going to need to have, and how to look at vulnerabilities, we have to be working together closely and exploring research topics together.

[00:21:38] Tom Garrison: That’s true. I mean, when you think about, uh, security research in general, actually any research, not just security research, you are by definition on the cutting edge of knowledge. And that can be a pretty lonely place. If you’re not careful, you can also kind of spin off into areas that may not be as fruitful. And so having tight coordination with researchers, to gather the best of their ideas coupled with where we see things going from a business standpoint, um, that’s really the magic: when the two sides talk closely and we can engage together.

[00:22:19] Camille Morhardt: Yeah, it’s really interesting. And the motivations are, you know, often different for people in industry and people in academia, so that also plays into it. You know, you’ve got to create relationships that are mutually beneficial, and they may sometimes, at first glance, be diametrically opposed in objective.
So, you know, Jason talks about that too: how do you sit down together when, at first glance, you may actually be coming at something from opposing viewpoints?

[00:22:40] Jason Fung: So I remember one time we actually submitted a paper to a really great conference, and we thought, “Oh yeah, we are solving a big problem, and they should like our paper.” In the end, when we received feedback, they basically asked, “What makes you feel that this issue you’re solving is the top issue to them?” They hadn’t heard about the issue, because we had failed as an industry to tell them what the biggest problems are.

Camille Morhardt: In academia, and actually also in industry, we end up in silos, and then you have big gaps; whereas if you end up working together, you know, it might just take a lunch together to figure out, “Oh my gosh, you know, I hadn’t thought of that angle.”

Jason Fung: Yeah. So one thing that we also do is we try to create these opportunities in a more intentional manner.

Camille Morhardt: Well, let me ask you one thing. I think you started a capture the flag in hardware. Can you explain what that is?

Jason Fung: So at that time we thought about, “Hey, how can we make this awareness-building journey even more fun and more hands-on?” So we partnered with our research partners in academia, from Germany and also from the US, and we pulled together this, uh, hardware capture the flag competition. We showcased all the common weaknesses we are aware of in an open-source SoC, but we embedded these vulnerabilities in a big pool of RTL code. And then we opened it up for the competitors to find them, and we gave them 48 hours of straight, non-stop action in the conference setting, inviting teams of, uh, maybe three or four to join together and find as many issues as possible.
One thing that they walked away with is, first, “Oh, this is what you mean by having these issues,” because they got to see them, they got to play around with them. Second is that they also understand the challenges encountered by our verification team, because you have a very short period of time to verify your RTL before it gets released as a product. We created that 48-hour window with so many bugs inside the RTL that you need to find, the more the better. They understand: “Now I really need tooling. I really need that fantastic methodology to help me understand the RTL, find all the issues, and be able to report back.” So that also brings that awareness to the attendees, so that hopefully they will be inspired to work on research disciplines related to hardware security in a more intentional manner, relevant to the hardware industry’s problems.

[00:25:26] Camille Morhardt: I’m so glad that you’re working on this and focused on it. And I think it’s really interesting that deep partnerships occur actually between industry and academia and what we, you know, kind of traditionally think of as hackers, but putting a different spin on it, thinking of the good side of that and how it’s helping evolve technology.

[00:25:48] Tom Garrison: Camille spoke with Jason Fung, Director of Offensive Security Research and Academic Research Engagement at Intel.

[00:26:02] Camille Morhardt: If you want to listen to the conversations we featured today in their entirety, we’ll have links in the show notes, so you can find them easily.

[00:26:10] Tom Garrison: Camille and I are always looking out for great topics for the podcast. So if you have some topics that you would love to see us dive deeper into, I encourage you to reach out to us on LinkedIn.

[00:26:26] Camille Morhardt: Yeah, and I’ll add to that, too: if there are definitions that you want to know more about, and you want them from top technical experts, uh, also reach out on LinkedIn. We’ve already got a lineup for What That Means in 2022: Metaverse, Trans-human, Cybernetics, Robotics, Digital Identity. So anything else like that that you want definitions for, let us know, and we’ll try to get them in the queue.

[00:26:52] Tom Garrison: And stay tuned, because at the end of December we’ll have another look back over the year at Cyber Security Inside. In that episode, we’ll be focusing on strategies we learned from some of our guests on how to prevent cyber attacks on your devices, computers, and networks. So, it should be fun.

[00:27:16] Announcer: Stay tuned for the next episode of Cyber Security Inside. Follow @TomMGarrison and Camille @Morhardt on Twitter to continue the conversation. Thanks for listening.
The views and opinions expressed are those of the guests and author, and do not necessarily reflect the official policy or position of Intel Corporation.

Will Hurd (Part 1): https://cybersecurityinside.libsyn.com/37-a-former-cia-officer-and-congressmans-thoughts-on-cyber-security-ai-and-more-part-1

Chloe Autio: https://cybersecurityinside.libsyn.com/51-what-that-means-with-camille-responsible-ai

Tim Simpson: https://cybersecurityinside.libsyn.com/17-ensuring-security-in-3d-printing-and-additive-manufacturing

Jason Fung: https://cybersecurityinside.libsyn.com/56-what-that-means-with-camille-offensive-security-research-aka-hacking
