[00:00:00] Announcer: Welcome to the Cyber Security Inside Podcast. Cyber security is not just for coders and hackers, CISOs and their executive peers need to think about cyber security differently. In the Cyber Security Inside podcast, we discuss relevant topics in clear, easy to understand language. Now, here is Tom Garrison.
[00:00:30] Tom G: Hello, and welcome to the Cyber Security Inside podcast. I’d like to introduce my co-host, Camille Morhardt. So, hi, Camille, how are you doing today?
Camille: I’m doing well today, Tom, where are you these days?
Tom G: I am today- I’m in central Oregon. I brought the family, we can all work anywhere now, so I’m actually working from here. But, uh, yeah, the rest of the family was having a good time while I was sitting in the dining room of a house we rented, doing work.
Camille: And now the tables are [00:01:00] turned because you’re having more fun than they are, right? (laughs)
Tom G: Exactly. (laughs) Who could have more fun than me, doing a podcast? So what, um, what kind of interesting topics do you have for us today?
Camille: Well, I have a rather simple topic, but as with many simple topics, they start that way and you realize they’re very, very complicated. I want to think about bringing a product to market and everything that, that entails from the perspective of security.
Tom G: Okay. [00:01:30] Yeah. There’s certainly a lot involved there. Interesting. So, yeah, we definitely can. I can’t tell you how many times this has come up for me, where people come ask about security specifically. And, you know, they see the end product and I get the question of, you know, “What went into this? How did you come up with this?” And so, yeah, that’s interesting.
A lot of people don’t think about that. What does it take? Not just to get a product out the first time, but to actually support it and [00:02:00] keep it, keep it alive and keep it safe over its life.
Camille: Yeah, and it really balloons- the topic balloons as you start to think about it. So maybe we just take that path and follow it along and see all of the different areas that are touched, and all the different kinds of people, again, from the security perspective, that it takes to do.
Tom G: Yeah. I’m envisioning when I was a kid, Saturday morning cartoons. And there was Schoolhouse [00:02:30] Rock. And there was that cartoon about how a bill becomes a law. And maybe that’s the approach we take here. We start at the very beginning and we go from the very beginning, all the way through to a product’s natural end of life.
Camille: Oddly, Tom, I actually think that would be fun. (laughs)
Tom G: Yeah, actually, that’s it. So that’s the podcast for today. Let’s do that. And I think, interestingly enough, for this one, I don’t think we need a guest. I think you and I can probably do this, [00:03:00] and hopefully the listeners here learn something, not just about how Intel does something; it’s really more about the types of things you need to consider when developing a product and supporting a product.
Camille: Yeah. And I think a lot of the things that come up as we walk through this exactly tie to the conversations that we have with guests, or are piece parts of that entire whole.
Tom G: Yeah. So we’ll stitch it together. All right. That’s our podcast for today. Let’s take it away.
[00:03:30] Camille: So what’s the first thing that happens? Actually, when I thought about this a little bit, I realized this is actually a pretty hard question. What is the very first thing that happens before there is a product?
Tom G: Well, I mean, I think at that point, let’s just start by saying, architecturally, we have to know what is the product that we’re trying to build and what is it supposed to do? That sounds relatively easy, you know, sort of [00:04:00] straightforward, but it becomes much more complicated when you start thinking about, well, how is it going to be used? And that may be different than how it was designed to be used. And that opens up a whole new can of, uh, or really a whole new opportunity for a set of unintended consequences from a security standpoint.
Camille: Right. I think that’s a good point. And so from the security perspective, as we come into this, [00:04:30] one of the first things that needs to happen is threat modeling, where you essentially decide what’s in and what’s out of scope. Um, you could threat model everything from environmental factors and natural disasters to global geopolitics; or you could narrow it down to something that could directly affect, um, your network or access to systems.
Tom: That’s right. And so when we [00:05:00] say “threat modeling,” what, what we mean is really thinking about, uh, what do we want to try to guard against. A simple example of that might be that you design a widget and you want to make sure that that widget works so that no bad guy can get access to the widget through the network. Right. And so that’s, that’s the threat model that you are worried about. You’re worried about network security, but [00:05:30] you don’t design the product, maybe, for physical access. Meaning if the person can go grab your widget and bring it to their lab and start hooking it up with probes and everything else, well, that’s called physical access and maybe you designed it to be safe against physical access, or maybe you didn’t, but those are threat models and there are lots and lots of them.
Camille: Okay. So companies can have teams of people who specialize in threat modeling, right? This is not just something [00:06:00] that any architect in the business might be aware of. These are pretty highly specialized, trained people, I think?
Tom G: They are. Absolutely. And in fact, there are whole areas of research that are focused on new threat models. They come up all the time–new ways to sort of attack something. I mentioned a very, very simple one, like network security versus physical access. Uh, you know, there are other threat models around things like side channels.
Can you [00:06:30] find a new way that hasn’t been thought of before to attack devices? And that takes real brain power, you know, real people that focus on those kinds of areas of research. And when they’re successful, it opens up sort of a whole new realm that now people have to figure out. “Okay, do I care about that threat? Or, you know, maybe I don’t, but in the cases where I do, now I’ve got all of these new things I have to guard against.”
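The scoping exercise described here, deciding which threats are in and out of scope, can be sketched in miniature. This is just an illustrative sketch; the threat names, vectors, and scope decisions below are invented for the example and are not any company's actual threat model.

```python
# Illustrative sketch of threat-model scoping: list candidate threats,
# record the in/out-of-scope decision, and derive the list the product
# must be designed to resist. All entries are invented examples.
from dataclasses import dataclass

@dataclass
class Threat:
    name: str
    vector: str      # e.g. "network", "physical", "side-channel"
    in_scope: bool   # the scoping decision made during threat modeling

def threats_to_guard(threats):
    """Return only the threats the product is designed to resist."""
    return [t.name for t in threats if t.in_scope]

model = [
    Threat("remote code execution over the network", "network", True),
    Threat("probing the device in an attacker's lab", "physical", False),
    Threat("timing side channel on a shared core", "side-channel", True),
]

print(threats_to_guard(model))
```

The point of writing it down this way is that the out-of-scope decisions (here, physical access) are explicit and reviewable, rather than implicit omissions.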
Camille: Um, the next thing that happens [00:07:00] is the design and the actual development–which can be coding, or it can be physical or mechanical putting together of things. In that context, what are we looking at from a security perspective?
Tom G: Well, there, what we want to make sure is that we have, um, processes that will catch known security threats that exist. So we call that SDL, [00:07:30] um, and that stands for Secure Development Lifecycle. The idea for SDL is you want to make sure that your processes are robust, so that each individual designer or validator or whatever doesn’t need to know everything about every threat that has ever existed in the past or currently; that your processes are robust enough to where every time you learn about something new, a new kind of attack, a [00:08:00] new sort of vulnerability, whatever, you bake those checks into your SDL process. That way you get the benefit of standing on the shoulders of everyone that has gone before you and learning what they’ve learned. That’s built into your process so that you don’t repeat the mistakes you’ve made before.
Camille: So you’re providing a set of processes that maybe fall under this umbrella term called Secure Development Lifecycle. But as a product is being [00:08:30] designed, and then code is being written for it, let’s say, uh, there’s a breakdown or subsets of that process throughout. And that might include tools that people can use to run checks against their code, to make sure that they’ve not somehow done something inadvertently that leaves a security risk? And other kinds of services, I guess, within your company.
Tom G: That’s right. So the process of SDL is super important, but you can imagine, over [00:09:00] time, the number of checks becomes a massive undertaking, right? You want to make sure that you haven’t inadvertently opened up an opportunity that you already learned about before from a security standpoint.
And so automation becomes key. So we want to have tools that have all of these checks embedded in them. And, uh, you want that to be available to your designers so that they can just run a simple check. It comes back and says, “Oh, there are these three things you need to [00:09:30] check,” and they fix those.
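The kind of automated check being described could look, in miniature, like a scanner where each rule encodes a lesson learned from a past vulnerability class. The rules below are toy examples for illustration only, not a real SDL rule set.

```python
import re

# Toy SDL-style checker: each rule encodes a lesson learned from a past
# vulnerability class. Real SDL tooling is far richer; these patterns
# are illustrative only.
RULES = [
    (re.compile(r"\bstrcpy\s*\("), "use a bounded copy such as strncpy"),
    (re.compile(r"\bgets\s*\("), "gets() cannot be used safely; use fgets()"),
    (re.compile(r"password\s*=\s*[\"']"), "possible hard-coded credential"),
]

def run_checks(source: str):
    """Return (line_number, advice) for every rule a line trips."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, advice in RULES:
            if pattern.search(line):
                findings.append((lineno, advice))
    return findings

sample = 'strcpy(dst, src);\npassword = "hunter2"\n'
for lineno, advice in run_checks(sample):
    print(f"line {lineno}: {advice}")
```

The value of this shape is exactly what Tom describes: every newly learned vulnerability class becomes one more entry in the rule table, and every designer gets that accumulated knowledge by running the tool.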
Camille: So is it problematic if your company is using tools A, B and C, and then, uh, the bad guys out there realize those are the tools you’re using and those are the things you’re checking for? In other words, is a standardization of some of these tools and processes actually a problem?
Tom G: I wouldn’t say it’s a problem. It’s a fact, it’s just a fact in the industry that this is sort of a spy-versus-spy or, you know, cat-and-mouse thing where, uh, the job is [00:10:00] never done. When the hole gets filled by the industry, then the attackers go find the next hole. And our job, by the way, within the industry is that we want to actually be ahead of the attackers. So that’s not to say that it’s all built into our tools. We want our researchers–that are out on the forefront even before somebody figured out how to, quote unquote, attack a machine–we want either internal researchers or the external community of researchers that are working with us [00:10:30] to be out there on the forefront, to be able to find these issues and vulnerabilities and come work with us. And then we fix those before an attacker actually makes use of them.
Camille: Okay. So now we’re talking about maybe even the application of things like red teams, where we’ve got a product and we’ve designed it. And by we, I mean the industry. You know, prior to release, we want to go and attack that and see if we can find something ourselves or through a partnership with [00:11:00] somebody who’s going to let us know what it is. So we’ve got time to fix it before it actually goes live.
Tom: That’s right. And that’s a real investment. And I think more companies are now becoming aware that this is not something where you can just rely on external researchers to do the work for you. You know, one example for us at Intel: we have over 200 researchers internally within Intel. So they are our own [00:11:30] Red Team of researchers. They’re finding issues, and working hard every day to do so.
When we look at, uh, all of the issues that we are made aware of, and we’re towards the end of 2020 right now, the percent of issues we find internally versus the issues we find externally: we’re finding more than 95% of all the issues internally. Things that we want to fix, security vulnerabilities that we need to [00:12:00] fix or want to fix. And the reason that’s important is, imagine if you are a company in whatever industry and you’re thinking, “Well, I can just rely on external researchers.”
That means, if we had done the same thing, we would have been made aware of less than 5% of the issues that we know of today that were found this year. And that opens up the opportunity to not be ahead [00:12:30] of the bad guys, and the bad guys could actually get ahead of the research and actually attack your products. So that’s why it’s so important to invest internally.
Camille: So the other thing, though, that, uh, some companies do is they try to partner externally as well. Um, and that I guess would be post-release. But in that case, you would have what’s known as a Bug Bounty program, where you create an incentive for people who discover a vulnerability [00:13:00] to actually tell you about it and work with you. And you actually compensate them for that. How are those amounts decided?
Tom G: Uh, each company has their own. You know, certain companies pay a tremendous amount of money because they have really harvested all of the sort of obvious, uh, bugs. And so the harder the bugs are to find, the more you have to pay people to work on them; because there are so few of them, you’d better make it financially lucrative for them.
You know, I know [00:13:30] our company has a Bug Bounty program. It’s a great tool to have with regards to engaging with the external research community, which is hugely valuable to do. Because there are some really, really smart people out there, and you want them working on your platform because they are making your platforms better.
And so for us, it’s a tremendous gift. It’s a very straightforward investment for us to make. But I think, really, for any company, if you look within the tech [00:14:00] industry in particular, there are a lot of us that have a bounty program, though not all. Um, and that sort of begs the question, “Why not everybody?” But certainly, we’re pretty big believers in the Bug Bounty program and we’ve had tremendous results as a result of it.
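Tom's point that payouts scale with how hard the remaining bugs are to find can be sketched as a severity-based payout table with a premium for novel bug classes. The tiers, dollar amounts, and multiplier below are entirely invented for illustration; they are not Intel's, or any company's, actual bounty schedule.

```python
# Hypothetical bug-bounty payout sketch: reward grows with severity, and
# a multiplier rewards novel bug classes, which are the hardest to find.
# All tiers and amounts are invented for illustration.
BASE_PAYOUT = {"low": 500, "medium": 2_500, "high": 10_000, "critical": 50_000}

def payout(severity: str, novel_class: bool = False) -> int:
    """Return a bounty in dollars for a validated, in-scope report."""
    amount = BASE_PAYOUT[severity]
    if novel_class:
        # A brand-new attack technique pays a premium, since almost all
        # of the "obvious" bugs in known classes have been harvested.
        amount *= 2
    return amount

print(payout("high"))                        # routine high-severity bug
print(payout("critical", novel_class=True))  # new class of critical bug
```

The shape mirrors the economics described in the conversation: as the easy bugs get harvested, only the expensive tiers still produce reports, so the program's cost naturally tracks the difficulty of what's left.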
Camille: So I want to think next about, let’s say, the product gets launched. We launch it, and again, when I say we, I don’t necessarily mean Intel, but we’ve launched a product. And I do want to just hit on this a little bit, because I know you’ve done a [00:14:30] significant amount of work with and in sales before. And so I’m curious about that perspective.
I think one of the guests we had on recently pointed out that sometimes security, or an update, isn’t necessarily a feature, not really a selling point. It’s a “need to have, not a nice to have.” And how do you help sales help you when it comes to talking about not just a security feature, which may be something that you [00:15:00] sell, but kind of this whole product assurance process and all of these things that a company may be doing?
Tom G: This one is something we don’t have completely figured out, but you are exactly right. And we did have an earlier guest where we talked a little bit about this. Security is one of those topics that is very intimidating for people, especially people that aren’t deep in security. And so as a feature, what ends up happening is it sort of [00:15:30] gets lumped in with just a bunch of other stuff in a product that people don’t really understand. And they think, “Oh, okay, well, it’s good. You know, I’m glad that there’s a security feature. I don’t really understand how it works, but I’m happy that I have a feature.”
Basic security, they just assume. Of course, if you’re building a product and you’re a reputable company, of course your product is secure. So there’s a sort of base-level assumption as well. What we are embarking on as an industry is to point out that [00:16:00] not every company does security, even the basics of how they support their products. There’s no real unanimity across the industry in terms of how to do that. And those are things that customers really care about.
Like, how does a product get supported if there is an issue? Does the company take it seriously? Do they fix the issue in a timely fashion? How long does it take? Do they, you know, really manage that product in a way that customers [00:16:30] would expect? And that is something that, certainly, what I’m trying to drive across the industry is: let’s be front-footed about it. Let’s figure out ways to be able to communicate that to a customer, because that’s something the customer actually cares about.
Camille: Right. They may or may not value, or be willing to pay for, in the sort of MBA term, the security feature that you’ve added. Or it might not be relevant for all of the audience; it may be less relevant for [00:17:00] a consumer than, say, for an IT shop. Um, but making sure that there are internal red teams, and a Secure Development Lifecycle, a robust process there, and an ability to, uh, threat model with leading researchers, those kinds of basics underlying the entire thing, um, are critical to everybody. Even when they’re not thinking about security at all.
Tom G: No, that’s right. And I think, you know, just a simple explanation here: if you think [00:17:30] about a hypothetical vulnerability where a researcher comes to us, there is now known to be a vulnerability with our product. So what happens? First thing is we have to figure out, is it really a problem? A lot of times researchers just make mistakes, uh, or they trip onto an already known vulnerability. And so what we need to do is figure out: is this new? Is it actually an issue or not?
And once we figure out that it is, so [00:18:00] we put a team on that right away to figure out, is it really an issue? Then we work with the finder of the issue to figure out how we’re going to communicate this. Are they going to be, for example, a part of our Bug Bounty Program, and are they going to adhere to coordinated vulnerability disclosure? Meaning you don’t go talk about this until the fix exists, because you don’t want to create a weapon to use against the industry.
And, you know, in almost all cases, the researcher will absolutely work with us in an [00:18:30] ethical way. And we’ll then begin the work of understanding the issue, and then put a whole team across the company, with all different disciplines, to go tackle that issue.
Now, at our company, for the most serious issues, we call the team that gets assembled a PRT–a Product Response Team. It’s the highest level of response that we have within the company, and we only do that for the most [00:19:00] significant issues. And those are things that would pose a threat to our company or our customers.
Camille: So, well, let me just back up because the vulnerability that’s being disclosed, what team evaluates that or does the triage for that?
Tom G: So initially it’s the PSIRT team–Product Security Incident Response Team. That team will be the one that first makes the initial contact with the [00:19:30] researcher, and then the initial evaluation. And then very quickly we will triage the issue with experts from whatever part of the platform we’re talking about. And I don’t want to be too specific to Intel, but just in general, you’ve got to figure out, well, what part of the device is in question now for this vulnerability? And whoever that is, let’s get the experts for that part of the platform, get them in a room, and start working through this.
And that takes priority over every other [00:20:00] type of activity internally. When this siren goes off, if you will, people instantly drop whatever they’re working on, and they go straight to work on fixing this issue and figuring out what the problem is.
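The triage flow described here, PSIRT takes first contact, screens out duplicates and mistakes, then routes genuine issues by severity, could be sketched as a small decision function. The score scale loosely mimics CVSS (0.0 to 10.0), but the thresholds and routing labels below are assumptions made for illustration, not any company's actual policy.

```python
from dataclasses import dataclass

# Sketch of a PSIRT-style triage decision. The 0.0-10.0 score loosely
# mimics CVSS, but the thresholds and routing rules are invented.
@dataclass
class Report:
    summary: str
    score: float         # CVSS-like severity score
    already_known: bool  # duplicate of an already-tracked vulnerability?

def triage(report: Report) -> str:
    if report.already_known:
        return "close-duplicate"        # researcher tripped onto a known issue
    if report.score >= 9.0:
        return "assemble-PRT"           # highest level of response
    if report.score >= 4.0:
        return "route-to-product-team"  # experts for that part of the platform
    return "track-low-severity"

print(triage(Report("hypothetical privilege escalation", 9.3, False)))
```

The ordering matters: the duplicate check comes first because, as Tom notes, many incoming reports are mistakes or already-known issues, and screening those out is cheaper than standing up a response team.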
Camille: So you won’t have a dedicated team that does PRTs, because it depends what comes in. And so people have to be assembled to create one. You might have a dedicated sort of program-manager type of person, but aside from that, it really [00:20:30] depends on what the product is.
Tom G: That’s right. And, you know, at Intel, uh, for client, those people that sort of run the process and know how to do it are in my group, as are the researchers. But when we do PRTs, you know, we will assemble a team of, you know, designers and architects and whatever for our products, and they come from all over the company. And, uh, like I said, they drop whatever they’re working on and this [00:21:00] becomes priority #1.
Um, but then we work to find the issue, or sorry, find a mitigation or a fix to whatever the discovered security issue is. In some cases that’s relatively straightforward; it takes maybe a few days or a week. And in other cases it can take months. Some of these are very, very difficult issues to resolve.
Now we jump to, okay, now [00:21:30] we have a resolution; now what do you do? Well, it’s not good enough just to have a resolution. We have to make sure that the fix for this issue doesn’t cause a problem somewhere else.
Camille: Internally and externally, right? Because you’re going to roll it out in some customer environment and, you know, you could take down somebody’s infrastructure by mistake if you don’t check that carefully.
Tom G: You got it. So we start first, internally, and we do a whole series of what we call “no harm testing.” And then later [00:22:00] we do a more robust set of testing with our partners, with our OEM partners. And so we have to coordinate all that activity. Up until that point, we haven’t even told the OEMs. We have to keep things secret, because the more you start talking about any of these vulnerabilities, when they get out, if there isn’t a fix for them, they are actually weapons for bad guys.
And so we have to coordinate our communication to our OEMs at the right time to [00:22:30] let them know about this new issue. And then we engage with our teams, and usually we then have a mitigation that we ask them to start validating with us.
Let’s say, you know, we find issues; whatever, we go back and forth until finally we’ve sorted everything out. Then we roll out that fix for this particular issue–the hypothetical issue that we’re talking through here. We don’t want to just dribble that out, and then a week later have another one, and two weeks later have another one, because it [00:23:00] becomes unconsumable by the industry, by our OEM partners, but also end customers.
And so instead, what we did was we created a way to pool a lot of these vulnerabilities and functional updates and security updates all together. And we go out either two or three times a year, and that becomes something that makes it easier for the OEMs to test. And it also makes it easier for end customers to know, “Two or three times a [00:23:30] year, Intel’s going to come out with these updates. I just know that I need to run a test in my environment and get those pushed out to, uh, platforms that are out in the wild.”
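Pooling fixes into a predictable cadence, as described here, amounts to a simple policy: non-urgent fixes wait for the next scheduled drop, while urgent ones ship immediately. The dates and the policy below are illustrative assumptions, not Intel's actual release calendar.

```python
from datetime import date

# Illustrative release-pooling policy: non-urgent fixes wait for the
# next scheduled update (here, three invented dates in a year);
# urgent ones break the cycle and ship at once.
SCHEDULED_DROPS = [date(2021, 2, 9), date(2021, 6, 8), date(2021, 11, 9)]

def release_date(ready: date, urgent: bool) -> date:
    """Return when a fix ships: immediately if urgent, else the next drop."""
    if urgent:
        return ready
    for drop in SCHEDULED_DROPS:
        if drop >= ready:
            return drop
    return ready  # past the last scheduled drop: ship out of band

print(release_date(date(2021, 3, 15), urgent=False))  # waits for the June drop
print(release_date(date(2021, 3, 15), urgent=True))   # ships immediately
```

The design choice is the one Tom articulates next: predictability. Customers can plan validation and deployment around known dates instead of reacting to a dribble of individual patches.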
Camille: And then if something’s really extreme or urgent, obviously you have to break the cycle, but otherwise we try to package it? Okay. That makes sense.
Tom G: And just think about it this way: there’s fatigue in the world, right? There’s fatigue about, “Oh my gosh, you know, I’ve got another set of these things to do.” And so what we need to do, and try to do, and aim [00:24:00] to do, is to be predictable. And I think for any company in any industry thinking through this, you’ve got to think about your end customers and make sure that you bring them through a process where they know what they’re signing up to. They know that, “Okay, two or three times a year, I need to update the product,” and, you know, get those fixes out. Because it’s the companies that have the fatigue and don’t, uh, install patches, those are the companies that the bad guys look to exploit.
Because in almost every case, companies that are hacked are hit with very, very old attacks, uh, that are very well-known. The industry knows how to solve those, but for whatever reason, the end customer never updated their machines. And so that’s what we as an industry, certainly me within our client group at Intel, this is one of the bigger problems that we’re trying to wrestle with, which is: “How do we make this consumable and [00:25:00] easy, if that’s the right word, for customers to know what to expect and to plan for it, so it just operates like clockwork?” That’s what we’re striving for.
Camille: So the other two things that have to happen are kind of internal. Once you’ve coordinated the disclosure and released an update for people to adopt, you’ve got to make sure that those learnings you had internally get recycled back into that Secure Development Lifecycle, so that [00:25:30] people who are starting new products are now aware of this new discovery that you’ve made and are incorporating those learnings into the build.
Tom G: That’s right. And so that’s factored into the SDL process. And it’s also, you know, we didn’t mention it before, but when there is a new discovery, one of the things that the Product Response Team does is it asks, “Hey, what are some of the upcoming products that are right around the corner, that are just about to be launched? Is there a way for us to [00:26:00] intercept those and make whatever changes we need to right away in those products, so that we’re not creating even more platforms that go out into the wild that we have to fix later?”
Camille: I feel like we’ve just gotten started. This is just barely scratching the surface here. (laughs)
Tom G: Yeah. We’re just sort of talking at a high level, obviously. But, um, I would say, you know, the takeaway is you have to think, as a company in any industry, holistically about your product. And you have to think through the security implications [00:26:30] all the way from when you’re initially designing a product to when it’s out in the wild and customers are actually using it. And it takes real investment along the way to do that.
Tom: I know in, uh, you know, the last several of our podcasts, we’ve wanted to end on something fun, like, what did you learn? And, uh, I’ve got one that I’m just dying to share.
Camille: It’s going to be weird. Isn’t it?
Tom G: It is kind of weird. [00:27:00] So this is in the world of canines. Did you know that dogs normally start sniffing with their right nostril, then keep it there if a smell could signal danger, but they’ll shift to the left side for something pleasant, like food or a mating partner? Did you know that?
Camille: I did not know that.
Tom G: I did not either. I came across this and I’m like, that is the coolest little factoid I think I’ve ever heard about dogs. I know [00:27:30] that their sense of smell is crazy good compared to humans’, but I didn’t realize that they could, like, use one side of the nose versus the other, and I thought it was fascinating.
Camille: So my fun fact that you’ve triggered about animals and nostrils is there’s a set of camels at the end of my street. And any kind of organic matter from the vegetable garden, you can bring to the camels if you want. And they’ll eat it.
I had picked a [00:28:00] bunch of stuff when I was clearing out the garden, and I put it in a bucket overnight and brought it the next day. I knew it wasn’t quite as fresh as what the camels were used to. And I dumped the bucket out, and the camel, like, started lowering his head toward the ground. And then he immediately shut both of his nostrils, like zipped them up into two little lines, and then he got really low and kind of nuzzled his mouth and nose in amongst the material.
And then he slowly opened [00:28:30] one of the nostrils and took this inhale, and then he zipped it up again, and then he moved his nose around and slowly opened the other nostril and inhaled. And then he did eat it, but I thought that was funny. I’m sure there’s a sandstorm use-case there somewhere too, but
Tom G: Yeah, but it shows how finicky they actually are. It’s a defense mechanism, but, uh, cool stuff.
Camille: Okay, I have to add, I’ve got to add one thing, sorry, to this fun fact. Okay, that wasn’t a fact, that [00:29:00] was an experience. I remembered that you can’t trigger a yawn inter-species.
Camille: If you’re sitting around a group of people and somebody yawns, everybody has to yawn. But if your dog yawns, you won’t get triggered.
Tom G: That is cool. All right. You know, you learned something; that’s what this podcast is all about. You’ve got to learn interesting stuff. (Camille laughs) All right. Well, hey, thanks, Camille. I think today was certainly a departure from what we normally do, but, uh, I thought it was fun. Hopefully the listeners found it useful as [00:29:30] well, and they can apply it to their companies and their industries as well.
Subscribe and stay tuned for the next episode of Cyber Security Inside. Follow @TomMGarrison on Twitter to continue the conversation. Thank you for listening.