Ep57- WTM Threat Modeling
Note: Camille uses “Johnny,” but I went with “Jonathan” in the transcript, because that reflects what’s in Microsoft Teams.
[00:00:38] Camille Morhardt: Hi. On today’s show we’ll be discussing threat modeling. Joining me today are Jonathan Valamehr and Dina Treves. Jonathan, or Johnny, is a Principal Engineer in the Security Architecture and Engineering Group at Intel. He’s a seasoned hardware security researcher who has worked both in academia and in industry. He holds several patents and has authored dozens of publications in the areas of hardware security and computer architecture. He holds Bachelor’s, Master’s, and PhD degrees in electrical and computer engineering, all from UC Santa Barbara. Ole!
Dina is an Architect and Silicon Design Engineer for WiFi Client Solutions at Intel. She’s an expert in control flow and memory system architecture and design. She is also a patent holder and a published author in those areas. She has a BS in electrical engineering from the Technion, the Israel Institute of Technology. And she’s developed a hardware threat modeling workshop that she delivers at Intel security conferences and trainings worldwide.
Welcome both of you to the show.
Jonathan Valamehr: Thanks, Camille.
Dina Treves: Great to be here.
[00:01:43] Camille Morhardt: Dina, let’s start with you. Can you define threat modeling in under three minutes?
[00:01:47] Dina Treves: Yes. In order to understand threat modeling, we need to understand three basic terms. One of them is an asset, which is something that we care about and want to protect. Let’s consider medical information, for example. We want to keep it confidential, since this is private information and we don’t want it to be available for everybody to read. We also want to make sure that no one can change it unless they’re authorized to do so, because tampering with information, such as deleting allergy information or modifying medication dosages, may result in life-threatening scenarios. And we want to have it available. If you remember the WannaCry ransomware in 2017, Britain’s hospitals were severely impacted by it: patients were redirected to other hospitals for medical care, and diagnostic equipment like MRIs wasn’t available. So it resulted in threats to patient safety.
Now we need to consider the adversary. An adversary is someone who has the skills and motive to get to our asset, either to find out information, damage it, or deny us the service. And the attack surface, which is some sort of opening that the attacker will use to get into our system. We can visualize it as a door or a window to our system.
Threat modeling is basically taking those three elements and starting to play “what if?” And when we’ve come up with all kinds of what-ifs, we start thinking of mitigations: how to protect our system.
[00:03:25] Camille Morhardt: So can you say one more time? What are the three things just to kind of close on the definition?
[00:03:30] Dina Treves: The three things are assets, the things that we care about and want to protect; adversaries, someone who has the skills and motives and wants to get to our assets; and the attack surface, which is the way into our assets, the opening, the hole that helps the adversary get to the asset.
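Editor’s note: Dina’s three elements can be sketched as a minimal data model. All class names, field names, and example values below are illustrative, not from the episode:

```python
from dataclasses import dataclass

@dataclass
class Asset:
    """Something we care about and want to protect."""
    name: str
    # Which security properties we care about for this asset.
    confidentiality: bool = False
    integrity: bool = False
    availability: bool = False

@dataclass
class Adversary:
    """Someone with the skills and motive to get to our assets."""
    name: str
    skill: str
    motive: str

@dataclass
class AttackSurface:
    """The 'door or window' into the system."""
    name: str

@dataclass
class Threat:
    """One 'what if?' combining the three elements."""
    asset: Asset
    adversary: Adversary
    surface: AttackSurface

    def describe(self) -> str:
        return (f"What if {self.adversary.name} uses {self.surface.name} "
                f"to get to {self.asset.name}?")

# Dina's medical-records example, expressed in this model.
records = Asset("medical records", confidentiality=True, integrity=True, availability=True)
crook = Adversary("ransomware operator", skill="high", motive="extortion")
smb = AttackSurface("exposed SMB port")

print(Threat(records, crook, smb).describe())
```

Enumerating threats then becomes iterating over combinations of assets, adversaries, and surfaces, and asking each “what if?” in turn.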
[00:03:50] Camille Morhardt: And are adversaries sometimes inadvertent? or are we talking about ill intentioned people, specifically?
[00:03:57] Dina Treves: Well, usually when we talk about adversaries, the word itself implies ill intention. But sometimes things can happen by accident: a database can leak because of some sort of problem, a vulnerability, a weakness in the security, and it’s available out there. Nobody meant for it to happen, but it’s just there. And then people can access it, get to the information, and use it.
[00:04:24] Camille Morhardt: Okay. That’s great. Thank you for the definition. Let’s dive a little deeper.
One of the things that you were alluding to, I think, is that threats can evolve along with technology. So sometimes threats might exist when you’re designing a product initially, and then once the product is shipped, the threat landscape evolves. And then, either because the product was used in a different way than it was originally intended, or because nobody had figured out that kind of attack when it was initially designed, it’s not protected anymore.
One thing I want to ask is, Johnny, does threat modeling differ depending on use case or depending on product?
[00:05:12] Jonathan Valamehr: Yes, absolutely, Camille, it does. That’s really the entire purpose of threat modeling; that’s really the art of it. The threat modeling activity is very dependent on a system’s use case, the system’s assets as Dina mentioned, where the system will be deployed, who will be using it, what it will be connected to, and so on and so forth. Using this information allows the individuals who are developing the threat model to place threats as in scope or out of scope. When a threat is in scope, that means it’s something we want to protect against, something we think may actually happen out in the wild. When something is out of scope, we’re typically not going to devote any attention to it or try to fix it, whether that’s because it’s unlikely to happen or because it’s not part of the use case. So that’s really the first thing you do: you place threats as in-scope or out-of-scope, and then you prioritize those threats accordingly.
And every system is different and used for a different purpose, so every threat model is unique and deserves its own diligence and attention. That’s really, I would say, the art of threat modeling. One thing a lot of people may think is, “why don’t we just be aggressive and add all threats as in scope for a threat model?” But that would create unnecessary work and distract from protecting against the most relevant attacks.
So really, the process of threat modeling is figuring out which threats or attacks are in scope that you want to protect against, and then creating a priority among those attacks. That’s the main meat of it.
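Editor’s note: the scoping-then-prioritizing step Jonathan describes can be sketched as a filter and a sort. The likelihood-times-impact risk score and all example threats and numbers are illustrative assumptions, not from the episode:

```python
from dataclasses import dataclass

@dataclass
class Threat:
    name: str
    likelihood: int   # 1 (rare) .. 5 (expected)
    impact: int       # 1 (nuisance) .. 5 (catastrophic)
    in_scope: bool    # decided from use case, deployment, and assets

    @property
    def risk(self) -> int:
        # One common (simplified) scoring: likelihood x impact.
        return self.likelihood * self.impact

threats = [
    Threat("remote code execution via network stack", likelihood=4, impact=5, in_scope=True),
    Threat("phishing for admin credentials", likelihood=5, impact=3, in_scope=True),
    # Out of scope for, say, a guarded data-center server.
    Threat("laser fault injection on the die", likelihood=1, impact=5, in_scope=False),
]

# Keep only in-scope threats, highest risk first.
worklist = sorted((t for t in threats if t.in_scope),
                  key=lambda t: t.risk, reverse=True)
for t in worklist:
    print(f"{t.risk:>2}  {t.name}")
```

The same threat list would be scoped differently for a different deployment; as Jonathan notes, that judgment is the art of the exercise.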
[00:06:36] Camille Morhardt: I mean, that makes sense to me because, you know, a lightning storm could constitute some sort of a threat, but maybe that’s something pretty far out of your control, or maybe it’s something you absolutely want to account for, just depending.
Jonathan Valamehr: That’s right. That’s right.
[00:06:48] Camille Morhardt: I mean, that sounds difficult then in application, because I would imagine a lot of institutions or organizations would like to have some sort of standards across their threat modeling. Is there a way to do that, if you’re saying every situation is different and unique?
[00:07:04] Jonathan Valamehr: So there are, I would say, groups of attacks or groups of threats that are relevant for different kinds of systems: say, a server that’s in a warehouse protected by armed guards, versus an IoT device on the edge that’s maybe sitting on a light pole that someone could get access to, or a laptop that’s sitting in somebody’s home. These are all different places a system can be, and so the threat model has to depend on what the use cases are and where it’s going to be deployed.
[00:07:36] Camille Morhardt: So how do you know, just to follow on to that, how do you know what threats are out there? I mean, if you’re talking about everything from political upheaval to weather to earthquakes, including individual adversaries, how are you supposed to even scope that?
[00:07:53] Jonathan Valamehr: I think to start, we have organizations that have internal research teams that identify vulnerabilities and publish their work for the different design teams to review. And this can be great because, for any organization, it helps the design teams quickly learn and update their systems, potentially without the risk of exploitation.
If we’re talking about the chip world, this can be pre-silicon, during the design stages, where an internal research team can find something; or it can be post-silicon, when the chips are made and the research teams find something. Post-silicon it can also be mitigated, but of course it’s easier pre-silicon.
So we start with the internal research teams, and then next we have the academic or otherwise external researchers who find different vulnerabilities or weaknesses and publish their findings online, in forums, or at conferences; that’s pretty common. A lot of university students and professors and a lot of different security companies will look for these vulnerabilities, either to gain some notoriety or just to publish some papers. So that’s another way we find out about threats.
And we may also find out about threats once a malicious actor has launched an exploit and it’s detected in the wild. So this entire ecosystem of internal researchers, external researchers, malicious actors, and different software that can help us generates our knowledge about the latest threats and specific vulnerabilities.
[00:09:16] Camille Morhardt: Okay. So I’m also sensing a little overlap. I did another episode on PSIRT, the product security incident response teams, which is kind of all about vulnerability management. It feels like there’s a little bit of crossover here. I had thought threat modeling was all early, early-stage design, and then we moved into discovering vulnerabilities, but it sounds like threat modeling continues throughout the life cycle of the product.
[00:09:49] Jonathan Valamehr: Yeah, that’s absolutely right. It is typically, like you said, Camille, more focused on the beginning stages and the design, but threat models evolve because of things that the PSIRT team or others would find. So it’s sort of a big ecosystem that feeds itself in a loop.
[00:10:05] Dina Treves: When we start a threat model, as you said, it should be in the early stages of the project, but it does evolve, and you continue doing it even after you’ve released the product; you revise your threat model. It never ends, actually. But when you start, you’re describing your system: you need to understand what components you have and the use cases of the system, because if you don’t understand the use cases, it’s hard to understand how they can be abused.
I say that when we threat model, we need to go through some sort of a mindset switch. So when we have a product, we design a product, we think with a functional mindset; we want to make sure that our product does what it’s supposed to do. But when we think with a security mindset, we want to make sure that it doesn’t do what it’s not supposed to.
Also, when we think with a functional mindset, we consider the use cases. And when we think with a security mindset, we think about the abuse cases. So if you don’t know your use cases, you cannot really come up with the abuse cases. This is the first stage: you need to draw a diagram that shows all the components that you have, the use cases, the dynamic flow in it.
[00:11:26] Camille Morhardt: I’m going to interrupt you for a second, because I think maybe I was already trying to hit on this, and it’s very interesting to me. Especially in hardware, the use case that you designed something for may not be what it’s used for five or ten years later, because the hardware architecture-through-release cycle can take years, and the world changes. Especially in the last couple of decades, it seems like the compute world has changed pretty dramatically, and there can be entire new classes of usages, and people using systems for things that we could never have anticipated. So how are you dealing with that?
[00:12:09] Dina Treves: You want to think not just about the use cases but about what things you don’t want to happen in your system, what you can’t accept. For example, when you say, “well, we didn’t work according to the flow, so a certain thing happened,” we can agree that the functionality will not be good because we did not work according to the flow, but we cannot allow a security problem because of that.
So when we threat model, we think of all the ways to use our system other than the way it’s intended. And we can do it with two approaches. One approach that is commonly used in threat modeling says: think like an attacker. You come up with all kinds of attacks and try to attack your system.
But I think there’s another way that we should think about, and that’s: think like a potential victim. What are you most afraid of? You have your system, you have your use cases, and you know how you want to use it. What is the worst thing that can happen to the assets in your system? How are you going to protect against it happening?
You think about all those who will not use the system as they should. You don’t have to know the way, the method, how to attack it, but you know what you’re really, really worried about.
[00:13:42] Camille Morhardt: So that would imply that the architects or the threat modelers are very close with the customers or the end users, to really understand what’s the worst thing that could happen. Is it a data leak? Is it a personal information leak? Is it physical theft of a system? Like, what are you desperately trying to avoid?
[00:14:05] Dina Treves: Yes. When we define our threat model, after all the analysis, we usually define the security objectives: what do we actually want, what are the objectives that we have regarding security? And some of these objectives come from the customers. They say what’s important to them, and we incorporate that into our threat model.
[00:14:27] Camille Morhardt: That’s very interesting. I wonder if we can talk just a little bit about the different kinds of threats that are out there, maybe classes of threats.
[00:14:35] Dina Treves: I like to say that I work for the CIA because, as we define threats, we think about confidentiality, integrity, and availability. These are the security properties that we want to protect, and there are all kinds of ways to attack them. And we do need to consider where we’re at. Like Johnny mentioned earlier: how is the product deployed, will there be physical access, is it susceptible to remote hacking? We need to consider all of that.
[00:15:11] Jonathan Valamehr: In general, the broad classes that I like to think about are remotely exploitable attacks, physical attacks, and attacks that can happen during the design and manufacture of a system. Remotely exploitable attacks are the kinds of attacks that can be launched remotely. So they chose the name well.
[00:15:25] Camille Morhardt: So like at an airport or a coffee shop or something, or you mean just totally different physical location from the computer itself or, um, or server?
[00:15:35] Jonathan Valamehr: Yeah, totally different physical location. Like, I can be a thousand miles away running some code on a server that performs some attack. So that’s something that’s remote. And then the next class is the physical attacks; these exploits require physical access or proximity to a device to launch. And this can range from very simple, like sticking a USB stick into a device, to very advanced, where you’re actually taking the part and shooting lasers at the chip in order to induce a fault or error in some way. So that’s the class of physical attacks, I would say.
And then the last one is the attacks that can be done during the design and manufacturing stages. This is typically by a trusted or authorized insider who could, say, add malicious code to a design, or, during manufacturing, actually add hardware that does something malicious after six months of being plugged in, these sorts of things. And so all these different attack types are ways to perform exploits.
[00:16:40] Camille Morhardt: Some of them obviously seem a lot easier than others. You mentioned pointing a laser; that seems like it would require a pretty sophisticated attacker. Other things might be simpler. You didn’t mention phishing, but I guess if you’re just getting somebody to click on something and enter a password, then that’s a pretty easy attack.
Jonathan Valamehr: Yeah, exactly (laughs).
[00:17:02] Dina Treves: Also think about, for example, these charging stations that have a USB port where you just plug in. I never use them. How do you know what they’re going to load onto your phone or laptop when you’re trying to charge?
[00:17:22] Camille Morhardt: Well, I had never thought about that. I’m really glad you mentioned it. What about biometric authentication versus just regular multi-factor authentication, which could include that? Do you guys have an opinion on biometrics, like eyeballs or fingerprints?
[00:17:38] Jonathan Valamehr: Yeah, I think all of these different authentication methods work, and they each have some error rate. So that’s something that needs to be factored into the calculation of how secure something is. But I think all of them are very valid. And when you couple different authentication methods together, then you get something really robust, like password plus fingerprint, or a password plus a text to your phone plus an eyeball scan. That’s where the real magic happens, when you have all these different pieces going together, because that raises the bar for an attacker. They would have to get all that information, which is hard. Whereas a password they may be able to get, because, you know, you reuse a password and then some website leaks all its passwords to the internet.
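Editor’s note: Jonathan’s point about error rates can be made concrete. If each factor has an independent false-accept rate (an idealizing assumption, since real factors are rarely fully independent), an attacker must defeat all of them, so the combined false-accept probability is the product. The rates below are made-up illustrative numbers, not vendor data:

```python
# Illustrative false-accept rates per factor (made-up numbers, not vendor data).
factors = {
    "password":    1e-2,   # guessed or leaked
    "fingerprint": 1e-4,   # sensor false-accept rate
    "sms_code":    1e-3,   # intercepted or guessed one-time code
}

combined = 1.0
for rate in factors.values():
    combined *= rate  # attacker must defeat every factor

print(f"combined false-accept probability: {combined:.0e}")
```

Even with these rough numbers, stacking three mediocre factors yields a combined rate far lower than any single factor, which is the “raises the bar” effect Jonathan describes.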
[00:18:25] Camille Morhardt: Right. Do we ever consider people to be a threat?
[00:18:27] Dina Treves: Well, I think when we talk about adversaries, first we talk about people. Adversaries are people. And we need to think not just about their skills, but also about their motives. Like I mentioned before, when you think like an attacker, this is also modeling people, because you need to think about what a person would be interested in, what they would go for, how they would do it.
And also think like your customers: what is important to them? Sometimes they may not even be aware of things that may be problematic in terms of security, so they did not give them to you as security requirements. But you should think about what is important to them, and then you may come up with more security issues that matter to them.
[00:19:25] Camille Morhardt: Dina, in all the trainings you do, if people could walk out with one takeaway, what would that be?
[00:19:32] Dina Treves: Security should be part of everybody’s job, and in everything that you do, you need to think: is there a security impact to what I’m doing, even if it doesn’t seem like it at the beginning?
[00:19:47] Camille Morhardt: Okay. And Johnny, to have a little bit of fun here, let’s say you got in an elevator and it was full of CEOs from Fortune 500 companies, and they just looked at you and said, “what should we know about threat modeling?” What would you tell them?
[00:20:03] Jonathan Valamehr: I would say that threat modeling needs to be done early and it needs to be iterative. And, as Dina said, everyone should know about the threat model, so that every design decision and every trade-off is made with an understanding of its security implications.
[00:20:21] Camille Morhardt: Well, thank you both, Johnny and Dina, for joining me today on What That Means. And for our listeners, you can take a listen to a couple of other episodes that intersect with some of the themes of this episode; I’m thinking of the Human Factors episode, and also the PSIRT episode. Good to have everybody here today.
Jonathan Valamehr: Yeah, thank you so much, Camille.
Dina Treves: Thank you. It was a pleasure.