InTechnology Podcast

#64 – What That Means with Camille: Risk Mitigation and Vulnerability Disclosures

There are an infinite number of vulnerabilities out there that leave us susceptible to cyberattack, and in 2021 alone we’re on track to identify around 20,000 of them. While there’s a whole risk mitigation ecosystem in place, CVE (formerly known as the Common Vulnerabilities and Exposures Program) has played a huge role in establishing a dictionary-esque database with an ID and definition for each known vulnerability.

On this episode of What That Means, Camille is joined by returning guest Katie Trimble-Noble (Intel – Director, PSIRT & Bug Bounty) to describe the critical nature of CVE in greater detail.

 

They cover:

  • The origins and evolution of CVE (formerly known as the Common Vulnerabilities and Exposures Program)
  • Why CVE matters, and what it does and doesn’t do
  • How NVD (the National Vulnerability Database) and CVSS (the Common Vulnerability Scoring System) differ from and apply to CVE
  • How risk severity is actually scored
  • Who and what CVE Naming Authorities (CNAs) are, why they’re important, and the process of becoming one

… and more.  Really interesting stuff, so tune in!

 

*And if you like what you hear, catch an earlier conversation Camille had with Katie in WTM Episode 26: Bug Bounty and Crowdsourced Security; Alexander (RoRo) Romero joins them for a great discussion, and you don’t want to miss it: https://bit.ly/3mv9yVr

 

The views and opinions expressed are those of the guests and author and do not necessarily reflect the official policy or position of Intel Corporation.

 

Here are some key takeaways:

  • CVE makes up an important part of the mitigation ecosystem, and its main mission is to catalog and identify known vulnerabilities; we can think of it as a sort of dictionary in that it tells you the definitions of vulnerabilities.
  • Although CVE does not score the severity of vulnerabilities, scanners that pull the CVE list can flag which ones exist in your network; NVD and CVSS help to paint a clearer picture of risk level.
  • While ideally everything would be patched, there has to be a hierarchy of priority; that’s what makes CVE so crucial, because it enables system admins to differentiate and decide what to patch first based on risk analysis.
  • CVE also helps to identify vulnerabilities in a universally recognizable way.
  • Some vulnerabilities can intersect to form an attack chain, which is a common phenomenon that’s often referred to as a “daisy chain.”
  • CNAs are vendors, government agencies and research organizations that have a deep knowledge of vulnerabilities because they own a product or have done extensive research on it; these CNAs can publish directly to the CVE Master List.
  • There are currently 161 CNAs around the world, one of which is Intel.
  • In 2021, roughly 20,000 vulnerabilities are on track to be identified.
  • There is no cookie cutter response to risk, because the things that get fixed and in what order are dependent upon implementation.
  • It’s important for consumers to put pressure on manufacturers to be transparent about vulnerabilities, because in the end, it strengthens the entire ecosystem.

 

Some interesting quotes from today’s episode:

“Everyone uses CVE. And the reason that you use CVE is when you’re doing your risk analysis to patch management, your system admins need to know what are we vulnerable to so that they can make that risk-based decision of what gets patched first.”

 

“Really risk is in the eye of the beholder. I can’t say what’s more important for you to patch because you have certain mitigating compensating controls on your end, the implementation end of the user. The implementation really dictates how things get fixed in what order they get fixed.”

 

“It’s not the mission of the CVE program to really get into some of those kind of theoretical details. It’s more sticking to the mission of the CVE program to identify and catalog those vulnerabilities so that you can enable the user end with the best risk-based program that can be available. It’s all about transparency and truth.”

 

“There was a lot of back and forth about what exactly is an exposure. So ultimately it was decided that in the best interest of the community, it was better to focus on CVEs in the form of vulnerability identification.”

 

“The CVE Master List is really just a reflection of the known vulnerabilities; there are an infinite number of vulnerabilities out there.”

 

“I mean, my Fitbit could have vulnerabilities and that’s not something you saw 10 years ago.”

 

“I think that we’re going to continue to see a rapid increase in the quantity of vulnerabilities that have been identified. And that’s why it’s so important to have that community based approach, those CNAs, those people who are sitting there cataloging vulnerabilities in their systems.”

 

“As the consumer, you want to put pressure on your product manufacturer to build a secure product.”

 

“If you can attack that insulin pump and you can cause an insulin pump to dump all the insulin in one minute, you can kill a person. That is a frightening vulnerability and those kinds of real-world sort of impacts they’re not theoretical anymore. They’re very real today.”

 

“When you disclose vulnerabilities, you make the overall ecosystem stronger and better and smarter.”


WTM: Risk Mitigation and Vulnerability Disclosures

[00:00:00] Camille Morhardt: Hi, and welcome to this episode of What That Means. Today we’re going to be talking about risk management and specifically vulnerability disclosures. We’ve got on with us Katie Noble, who we had previously on an audio-only podcast speaking with another colleague RoRo about crowdsourced security and bug bounty. She runs Bug Bounty programs.
Welcome to the show today, Katie. It’s nice to see you this time.

[00:00:27] Katie Noble: Yeah, it’s really great. Thanks for having me.

[00:00:30] Camille Morhardt: I love your t-shirt. That’s fantastic.

[00:00:30] Katie Noble: Thank you. It’s an “I love hackers” t-shirt. I feel like I should show some love to the community.

[00:00:37] Camille Morhardt: Can I ask, are you a hacker? Is this like a narcissist t-shirt or do you just love hackers and you’re not one?

[00:00:43] Katie Noble: I am not one. My background is actually in human behavior analysis and religions. I work with hackers and I am a big fan, but I would say that I am a champion versus a doer (laughs).

[00:00:57] Camille Morhardt: So human behavior, that’s interesting because I know that in a past life, prior to Intel, you were an analyst in the Air Force for about 15 years. Is that right?

[00:01:08] Katie Noble: Yeah, that’s right. So I was active duty Air Force for many years. And then I went into the Air Force Reserves. I was an intelligence analyst–Operational Intelligence Analyst is what it’s called. I did that for 12 years. And then I transitioned over to the U.S. government, where I was an intelligence analyst in various government agencies, and then ultimately at Homeland Security, where I ran the vulnerability disclosure programs for several years before coming to Intel.

[00:01:33] Camille Morhardt: We wanted to talk today about Common vulnerability enumerations, but you just corrected me and said, now we only use the acronym. So tell me about what this is.

[00:01:45] Katie Noble: Full disclosure, I sit on the CVE Board of Directors, but I’ve been very involved in this program for several years, starting when I was at Homeland Security, and it is very near and dear to my heart. And so I absolutely love all things CVE. So the CVE program started in around 1999, and it was called the Common Vulnerabilities and Exposures Program. With Common Vulnerabilities and Exposures, the idea was to catalog vulnerabilities, but also this thing called exposures. We stopped doing that many, many years ago. We on the board made the decision that we would simply call it the CVE program going forward. And that’s just what it is: CVE.

[00:02:58] Camille Morhardt: I don’t know that many people have heard of CVE. How did it originate? And I think we just want to clarify: when you say “we,” you mean the industry as a whole; this is all of tech.
What happens when anybody finds out that there’s some kind of a vulnerability or some kind of an exposure? How does the community make sure that everybody else knows what it is and that you’re all talking about the same thing? And it’s transparent so that, I assume, people can incorporate that into design as they’re moving forward, or into mitigations.

[00:03:40] Katie Noble: So, yeah, it’s absolutely the entire ecosystem. We have technology vendors in that, we have technology users; there are lots of different players in the cybersecurity defense space–all the way from a mom-and-pop bakery to very advanced financial organizations. Everyone uses CVE. And the reason that you use CVE is when you’re doing your risk analysis to patch management, your system admins need to know what are we vulnerable to so that they can make that risk-based decision of what gets patched first. While we always say, “we’d love everything to be patched,” unfortunately a lot of times there has to be some level of differentiation. And so there are a lot of automated scanning tools that organizations will use to say, “this is where my vulnerabilities are within my own assets.” And then they use that information to enable their risk management system, ultimately deciding what gets patched and when it gets patched.
And so CVEs really help us differentiate; the output of a lot of these scanners is a list of CVEs. It’s not telling you the severity of the vulnerability. It’s just telling you that the vulnerability exists in your network. So that’s the end user using a CVE.
Now, on the other side, you have technology vendors who make products, and we identify the vulnerabilities in our products through CVE. So a CVE is an identifier number, and what it does is it lets you and I know that we’re talking about the same vulnerability. Imagine there’s a block of code in a product and there are three different vulnerabilities in it. We need to know that I’m talking about this one and not that one or this one. The CVE ID usually comes in the form of CVE, the four-digit year, and then another set of numbers–so CVE-2019-0708, that’s a CVE number. If you find that in the CVE database, it gives you a big description of what that vulnerability actually is. So that way we all know that we’re talking about the same thing.
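To make that ID format concrete, here is a minimal sketch (illustrative only, not from the episode) that checks whether a string is a well-formed CVE ID and splits out the year and sequence number. It assumes the published CVE ID syntax: the literal “CVE”, a four-digit year, and a sequence of four or more digits; the sample IDs are only examples.

```python
import re

# CVE IDs look like CVE-<4-digit year>-<sequence of 4+ digits>, e.g. CVE-2019-0708.
CVE_ID_PATTERN = re.compile(r"^CVE-(\d{4})-(\d{4,})$")

def parse_cve_id(candidate: str):
    """Return (year, sequence) if the string is a well-formed CVE ID, else None."""
    match = CVE_ID_PATTERN.match(candidate.strip().upper())
    if not match:
        return None
    year, sequence = match.groups()
    return int(year), int(sequence)

print(parse_cve_id("CVE-2019-0708"))  # (2019, 708) -- well-formed
print(parse_cve_id("2019-0708"))      # None -- missing the CVE prefix
```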

[00:05:33] Camille Morhardt: So say I find I have a hundred now within some subset of my systems. Am I doing one update per CVE? Are they packaged together? Are they from different vendors, so I have to go different places to get the updates? How do I know which ones even have updates? Do they all have updates?

[00:05:50] Katie Noble: You have hit the nail on the head with, I think, the entire crux of every patch management program, every mitigation program that exists. There is this huge ecosystem of trying to figure out what needs to be done, and CVE is one part of it. So CVE, like I said, does not provide you severity. It simply says, “this is a list of all of the things.” And there are these off-the-shelf scanners that pull in the CVE database on a daily, weekly, or monthly basis, and they say, “these are all of the vulnerabilities that have been identified in this database, and I’m going to run that against your assets and see what hits.” And then from there it outputs a list, and usually a sysadmin will go through that and determine risk and determine the patch management cycle.
So really, risk is in the eye of the beholder. I can’t say what’s more important for you to patch because you have certain mitigating compensating controls on your end, the implementation end of the user. The implementation really dictates how things get fixed and in what order they get fixed.
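As a toy illustration of the workflow described above (a scanner emits a plain list of CVE IDs, and the sysadmin layers severity and local context on top to decide patch order), here is a hypothetical sketch. The CVE IDs, scores, asset names, and criticality weights are all made up for the example.

```python
# Hypothetical scanner output: CVE IDs observed on each asset (made-up data).
scan_results = {
    "web-server-01": ["CVE-2021-0001", "CVE-2021-0002"],
    "db-server-01": ["CVE-2021-0003"],
}

# Severity enrichment a sysadmin might pull from NVD/CVSS (made-up base scores).
cvss_base_scores = {
    "CVE-2021-0001": 9.8,  # critical
    "CVE-2021-0002": 4.3,  # medium
    "CVE-2021-0003": 7.5,  # high
}

# Local, implementation-specific weighting: "risk is in the eye of the beholder."
asset_criticality = {"web-server-01": 1.0, "db-server-01": 1.5}

def patch_priority(results, scores, criticality):
    """Rank (asset, CVE) pairs by CVSS base score weighted by local asset criticality."""
    ranked = [
        (asset, cve, scores.get(cve, 0.0) * criticality.get(asset, 1.0))
        for asset, cves in results.items()
        for cve in cves
    ]
    return sorted(ranked, key=lambda row: row[2], reverse=True)

for asset, cve, weight in patch_priority(scan_results, cvss_base_scores, asset_criticality):
    print(f"{weight:5.2f}  {asset:15}  {cve}")
```

A real program would also weigh exploitability, exposure, and compensating controls, which is exactly why no single patch ordering fits every environment.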

[00:06:54] Camille Morhardt: In the description of the vulnerability, I assume the CVE catalogs or describes what the vulnerability is?

[00:07:02] Katie Noble: Well, we say CVE is the identification. It’s kind of like a dictionary: it tells you the definition of the vulnerability. What it doesn’t provide you is any other enriching information. Now, there’s a complementary database that exists called the NVD, the National Vulnerability Database, and the NVD is where you get the next fun acronym we’re going to throw in there, called CVSS. So CVSS is the Common Vulnerability Scoring System. That is where you get that “critical,” “high,” “medium,” “low” score for the vulnerability itself. So the CVE database ultimately flows into the NVD, and the NVD is what we would consider the encyclopedia. So that’s where you get your extra enrichment information. There are several analysts who sit in Gaithersburg, Maryland at NIST, and they run the NVD, the National Vulnerability Database. And so they apply the CVSS score and any enriching information, and that, again, also flows into those scanners that would be used at an end-user or enterprise level to decide, you know, your asset management and some of that risk activity.
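For readers who want to see that enrichment step in practice, here is a small sketch that asks the NVD for a single CVE record and prints its description and CVSS base score. It assumes the public NVD REST API 2.0 endpoint and the JSON field names it is believed to use (vulnerabilities, descriptions, metrics, cvssData); check the NVD developer documentation before relying on them, and note that unauthenticated requests are rate-limited.

```python
import json
import urllib.request

# Public NVD REST API 2.0 endpoint (assumed; see the NVD developer docs for details).
NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0?cveId={cve_id}"

def lookup_cve(cve_id: str) -> None:
    """Fetch one CVE record from the NVD and print its description and CVSS scores."""
    with urllib.request.urlopen(NVD_URL.format(cve_id=cve_id), timeout=30) as resp:
        data = json.load(resp)

    for item in data.get("vulnerabilities", []):
        cve = item["cve"]
        description = next(
            (d["value"] for d in cve.get("descriptions", []) if d.get("lang") == "en"),
            "(no English description)",
        )
        print(cve["id"], "-", description[:120])

        # CVSS metrics live under keys such as "cvssMetricV31" or "cvssMetricV2",
        # depending on which scoring versions NIST analysts have applied.
        for key, metrics in cve.get("metrics", {}).items():
            for metric in metrics:
                cvss = metric.get("cvssData", {})
                print(f"  {key}: base score {cvss.get('baseScore')} "
                      f"({cvss.get('baseSeverity', 'n/a')})")

lookup_cve("CVE-2019-0708")
```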

[00:08:06] Camille Morhardt: Now whose opinion is it, whether it’s a “high” or a “low” criticality? Is that the two people at NIST in Maryland or is that the manufacturer whose system originated the vulnerability?

[00:08:19] Katie Noble: Yeah, so it’s a little bit of both. Up until fairly recently, it used to be almost entirely the team sitting at NIST. So the official scores come from NIST, and as the non-biased player in this ecosystem, it’s kind of appropriate that the official score would come from NIST. But recently, a CVSS score can also be suggested whenever a CVE is created by the vendor, which is called a CNA, a CVE Naming Authority.
So when a CVE Naming Authority–there are 161 of them right now–when they create the record and push it into the CVE Master List, there is a little field there where they can add the severity, and they can say, “based on my experience, because I own this product, I built this product, I lived this product, I think that this is this level of severity.” So that flows into the CVE Master List, which then goes on into NIST. And then when NIST sees that record and they go through to do their analysis to say what level it falls in, they take into consideration what the CNA said the overall CVSS score is.

[00:09:22] Camille Morhardt: Are some of these things linked together? Like, do you get a vulnerability that kind of has a cascading effect, or intersects with other vulnerabilities that you need to make people aware of?

[00:09:33] Katie Noble: That’s a very regular thing. We call those daisy-chained or chained-together vulnerabilities, and that’s really how you get an attack chain. So that does happen. And it is very common in exploits to use kind of multiple escalating levels of vulnerabilities in order to execute an exploit. But the CVE program tries to keep things as close to the mission as possible. So the mission of the CVE program, remember, is to catalog and identify known vulnerabilities.
So getting into, like, how this vulnerability is used, or does this say something about the quality of the product–they try to stay away from all of that. It’s not the mission of the CVE program to really get into some of those kind of theoretical details. It’s more sticking to the mission of the CVE program, which is to identify and catalog those vulnerabilities so that you can enable the user end with the best risk-based program that can be available. It’s all about transparency and truth.

[00:10:27] Camille Morhardt: You’re trying not to be too subjective about it. You’re trying to provide the data and then allow the end user, depending on what they’re worried about or what their use cases are, what their issues are, to make the final call.

[00:10:41] Katie Noble: Yeah, exactly. So risk is really in the eye of the beholder. It’s hard for me as a product vendor to tell you as a user, what your severity is or what your impact is. I can tell you the severity of the vulnerability in a vacuum, but I don’t know what’s been done to that asset. So I can’t tell you how it may impact you.

[00:11:03] Camille Morhardt: You had said earlier in this conversation that we, as in the industry, had stopped doing this for a while. And I think I have some kind of a memory that you yourself were kind of a part of bringing this back in a major way. Can you talk about that gap and your role in resurrecting it?
[00:11:21] Katie Noble: So we’ve always done vulnerability identification. The CVE program has always done vulnerability identification, but very soon after the creation of the CVE program–so the CVE program was launched in 1999, so we’re going on 22 years; my program can now drink, which is just delightful–very soon after the launch of the program, we realized that exposures were something that was just a little bit weird, and it was really difficult to pin down what exactly is an exposure. So for instance, if your webcam is not secured and I can find your webcam on Shodan, is that an exposure? Does that get a unique tag? And then how does that live in a master list? If you were to add some protection to your webcam, do I need to now go take that exposure out because it’s closed?
So there was a lot of back and forth about what exactly is an exposure. So ultimately it was decided that in the best interest of the community, it was better to focus on CVEs in the form of vulnerability identification. So that was the decision that was made. Now, I think my role in some of this is–for a while, I worked at the Department of Homeland Security, where I ran the vulnerability management portfolios, and MITRE CVE was one of them.
So it was the MITRE CVE program, the NIST NVD program, and then a couple of other portfolios. So I took that over in 2018, and right around that time, starting in 2016, there was a huge change in the way that the CVE program runs. Prior to 2016, all vulnerabilities had to be approved through the CVE Board. So you can imagine all of these vulnerabilities flowing in every year and a group of 10 people sitting there and saying “yes” or “no” to each one. And it got very overwhelming. And so the idea was, “we’re going to change this and make this more community based.” And so they brought on CNAs–CVE Naming Authorities. CVE Naming Authorities are vendors, research organizations, and government agencies. They’re people who really know about that vulnerability, either because they own the product or because they’ve done extensive research on the product. And so, as of 2017, CNAs could publish directly to the Master List. So that eliminated that choke point. It was just eliminated, because now the CVEs could be created directly by the CNA.
So Intel is a CNA, there’s 161 CNAs all around the world. It’s a multinational program, all time zones and languages. And if that vulnerability falls within the scope of your product and you are a CNA, you can publish directly to the master list. So I’ll give you a statistic on that. So in 2016, there were 6,457 vulnerabilities identified. In 2017, there were 14,644, more than double. And that change is reflected because the CVE program said, “we’re not keeping up with the times. We need to grow and evolve, and we need to be able to identify more vulnerabilities faster so that our end users can have a better baseline for where their security sits.”
And so they made some changes, and those changes are reflected. I mean, in 2020 there were 18,375 vulnerabilities identified. And this year we’re right on track to be around 20,000 vulnerabilities identified.

[00:14:37] Camille Morhardt: Is that going to continue in perpetuity, or are we going to plateau at some point?

[00:14:49] Katie Noble: No, I think it’s going to continue. So the thing to keep in mind is that the CVE database, right, the CVE Master List, is really just a reflection of the known vulnerabilities. So there are an infinite number of vulnerabilities out there. And when I say that, I really want to drive home that that is not someone’s fault. That is not because there is a bad product. That’s not because there’s not a maturity level that’s involved in that product. There are so many ways that CVEs are taken out of context. CVEs are just an identification; they’re just there to let people know that there may be a problem and that they should take a look at their systems to make sure that there isn’t a problem.
So there’s an infinite number of vulnerabilities out there. And as the ecosystem changes from being traditional software and hardware to IoT, medical devices, industrial control systems, the integrated nature of the world we live in today is changing the way we look at cybersecurity. I mean, my Fitbit could have vulnerabilities, and that’s not something you saw 10 years ago. So there’s an explosion of surface where there could be vulnerabilities. And so I think that we’re going to continue to see a rapid increase in the quantity of vulnerabilities that have been identified. And that’s why it’s so important to have that community-based approach, those CNAs, those people who are sitting there cataloging vulnerabilities in their systems.
The CVE program wants to have more vulnerabilities identified faster, and better coverage for the customers at the end of the day. And they do that by onboarding CNAs, by getting more people, more organizations involved.

[00:16:21] Camille Morhardt: Right. You’re crowdsourcing the ability to post, but with some kind of guardrails as to who can join the community. If I did have a problem with a sports health tracker on my wrist, I have never heard of CVE as a consumer; I’m not going to go look at this. So am I expecting that manufacturer of my home product to check into this kind of thing for me?

[00:16:44] Katie Noble: Ideally, yeah. As the consumer, you want to put pressure on your product manufacturer to build a secure product. In the community, the CVE program is fairly well-known. So if they build a product, technology vendors are very aware of the CVE program. It’s written into several international standards that vulnerabilities have to be cataloged this way, and the United States standards for vulnerability mitigation and risk management talk about CVEs as well.
Not every company that builds a device that is connected to the internet is a CNA. We saw this very, very heavily in medical devices in about 2017. You’d have all of these medical devices–they’re heart rate monitors or insulin pumps, or devices that are not traditional cybersecurity devices. We see it in IoT all the time: they strap a Wi-Fi module on it, and now all of a sudden it is part of the internet, and that makes it vulnerable to attack. If you can attack that insulin pump and you can cause an insulin pump to dump all the insulin in one minute, you can kill a person. That is a frightening vulnerability, and those kinds of real-world sort of impacts, they’re not theoretical anymore. They’re very real today.
I mean, think about things like the Colonial Pipeline or the water plant in Florida that was recently attacked. These things have been going on for a long time, but now we’re seeing a lot more cyberattacks; they’ve increased rapidly. We’re seeing a lot more vulnerabilities be leveraged. It’s really, really important that on the defensive side, we have a way to communicate and a way to defend those networks against a bad actor. And CVE is not it; it’s not the end-all. It’s one part of an overall well-balanced risk management program. It’s just one little part, but it’s the definition. If you don’t have the definition, how do you do anything else?

[00:18:41] Camille Morhardt: I can see if you’re a responsible company, and maybe in particular you’re a medical device manufacturer like you just described–you know, how scary would that be? And also just your own personal ethics of having people at risk. But what’s the incentive? Maybe I’m not kind of a functional safety type of company. Why do I really want to let the world know that there’s a vulnerability? Maybe not that many people know about it. Am I inadvertently causing a problem for my customers by detailing it?

[00:19:11] Katie Noble: So that is a thought process that I have seen a lot. It is really uncomfortable to tell people where your problems are; it’s uncomfortable and it’s off-putting, but really, sunlight is the best medicine. When you disclose vulnerabilities, you make the overall ecosystem stronger and better and smarter. And the customers at the end of the day are getting a lot smarter about this. When my mom knows what a vulnerability is, and she’s 70 years old–like, we’re getting there, to where this is a common sort of conversation that’s happening across the ecosystem.
And what customers are looking for is they’re looking for their technology vendor to have a mature vulnerability disclosure process, to be transparent, to be upfront, and to really understand that it’s not necessarily anybody’s fault, but we want to make the product stronger and better. And we do that by being in the ecosystem and being honest and upfront and saying, “this is where the vulnerability is, and these are the steps that we think you should take in order to mitigate that vulnerability.” And that kind of level of responsiveness is being demanded by customers, as it should be. So I think most companies understand that and are driving towards how do we make our products more transparent. And this is one of the drivers behind becoming a CNA: if there is a product that is connected to the internet and a vulnerability is found in that product and you are not a CNA, that means that if a researcher finds it, they can report that to another CNA or ultimately to MITRE, because MITRE runs the secretariat function for the CVE program. So they are a CNA and they cover all the vulnerabilities that are not covered in an individual CNA’s scope. So for instance, if a researcher, a hacker, finds a vulnerability in a product that does not have a CNA, they can take that vulnerability to MITRE and MITRE will issue a CVE for it.

[00:21:04] Camille Morhardt: Or if I’m a small company, maybe, and I don’t really want to tell my competitor that I have a problem for them to write it down, because they’re bigger and have us in their CNA scope–I could go to MITRE directly and say, “I’ve got this problem and I’d like to catalog it through you”?

[00:21:20] Katie Noble: So you could, but it’s probably in your best interest to just become a CNA. I mean, ultimately the vulnerability is going to be identified. It’s going to be identified through your program and if you don’t disclose it to begin with and a researcher discloses it before you, how does that look?

[00:21:37] Camille Morhardt: That’s not restricted then, you’re saying? You just–what’s the process to become a CNA?

[00:21:42] Katie Noble: In order to become a CNA, you contact the CVE program and say that you would like to be a CNA. Now, there are some rules about who can become a CNA. It does have to be an organization that has the resources to be a CNA. And so an individual person can’t be a CNA, but a product vendor who has a well-defined scope and a program can become a CNA. And all you do is contact the CVE program. So Google “CVE program,” and there it is, it should be right on the top bar, “become a CNA.”

[00:22:09] Camille Morhardt: Because this is moving in a model toward this kind of crowdsourced security, do you have a perspective on where that’s headed? How do you see keeping up with it? Is crowdsourced security the answer, period, or is there another evolution or another wave that needs to come?

[00:22:58] Katie Noble: I don’t think that there’s any one answer. I think as the ecosystem continues to evolve, it’s everybody’s responsibility. Security is everybody’s responsibility, and I’m saddened by the fact that, overwhelmingly, when security and convenience come together, convenience often wins. But I do think it’s everyone’s responsibility–security is everyone’s responsibility–all the way down to the end user setting that Wi-Fi password on their router and making sure that they have strong passwords, all the way up to the product vendor, who should enable strong passwords by default.
I think that it’s everyone’s responsibility, from the user to regulators, to businesses, to clients and consumers, to demand that the ecosystem be as strong as possible. It’s not something we’re going to get to overnight. The world is changing in a very rapid way–and we’ve been saying the world is changing in a very rapid way for 30 years–but I do think that there has been a huge change: COVID changed a lot of things for a lot of people, from work from home to the integration of smart houses and smart systems. I just think the ecosystem is exploding, and we all need to take a stand on it, and we all need to accept that we ourselves have responsibility in this ecosystem to strengthen ourselves and strengthen our neighbors.

[00:24:09] Camille Morhardt: Well, Katie, thank you so much for coming on today. I learned a lot. I think it’s very interesting and kind of just fascinating from this more end-user consumer perspective, all the things that are going on behind the scenes to track and define and post and become transparent and make sure that the community is moving in a more secure direction.
Thank you for participating in that and running it. Really appreciate the conversation today.

Katie Noble: Yeah. Thanks. It’s wonderful.
