Camille M. 0:01
Joining me today is Lisa Bradley to help define Product Security Incident Response Teams, commonly referred to as PSIRT. Lisa is Director of Product and Application Security at Dell Technologies, where she focuses on vulnerability response and customer trust. She oversees the Product Security Incident Response Team at Dell. She’s got a PhD in applied mathematics from North Carolina State University and has led PSIRT programs for Nvidia and IBM, where, incidentally, she was showcased as one of IBM’s 26 Most Innovative Women. She’s also part of the FIRST (I’ll say that it’s an acronym, F-I-R-S-T) PSIRT special interest group, and as a part of that she contributed to FIRST’s PSIRT Services Framework training and also the maturity document.
So Lisa, welcome. And can you describe incident response and PSIRT for us in under three minutes?
Lisa Bradley: 0:59
Sure. So as you said, PSIRT, commonly known as Product Security Incident Response Team, is sort of a sister to CSIRT. Basically, when there are vulnerabilities already in the field, in the customers’ hands and products, that’s where we come into play. Our team helps come in and address those vulnerabilities to make our customers safer and to protect against brand reputational damage.
Camille M. 1:23
So you mentioned CSIRT as well. And can you tell us what that is?
Lisa Bradley: 1:28
Yeah. So PSIRT has the P for product, where it’s typically focused on product security. If an incident happens with a customer, or some kind of infrastructure type of issue within the company, the CSIRT team tends to be the one to lead that investigation, and just pulls in PSIRT if the incident happened because of a vulnerability in a product line.
Camille M. 1:50
Okay, so CSIRT is run generally out of an IT department at a company and focuses on a problem that happens, whereas PSIRT may be run out of a product team or some sort of central security group?
Lisa Bradley: 2:06
Well, it depends on how your environment is. In Dell, all of the security assets are under the Security and Resilience organization. Within that we have product and application security; then we have cyber security, which CSIRT typically falls under; we have physical security, governance, and so on. And I really like that environment, because a lot of things have broad impact. Let’s say it’s an OpenSSL issue, which a lot of people are familiar with because of Heartbleed: it would affect product lines, but it also would affect your infrastructure, or your cloud services and things like that. So all across the company, you actually have to make sure that you update your open source because of that.
Camille M. 2:50
Okay, that makes sense to me. So, let’s dive a little deeper. Can you back up and describe what a vulnerability actually is? You said that’s really what PSIRT addresses.
Lisa Bradley: 3:08
Yeah, so a lot of times people get confused between a weakness and a vulnerability, or they’ll say it’s a security defect or a security finding. Let’s say you’re going through your security development lifecycle, and you’re going through some processes to do scanning and things like that. There are so many types of tools out there nowadays to help you address issues before you even release. But sometimes you run a scan, and it’s code that’s already out in the field, and you say, “Hey, I found this finding.” The difference between a finding or a weakness and a vulnerability really comes down to whether it’s exploitable. If it’s exploitable by a bad actor to do something beyond what they should have the capability to do, that’s when it falls into the vulnerability category, as opposed to just being a security-hardening opportunity.
And the reason why PSIRT is so important is because if that code exists already out in the field, you have to make sure that any of your supported versions that have that code get the security update, because your customers are using it. The other part of why PSIRTs are so important is the actual disclosure part and the communication. So disclosing to your customers to say, “Hey, we have a security vulnerability, we did a security update to address these issues, and now please go get that.” It’s not just a normal feature update; it actually has security implications and risks, and the customers should take more immediate action on it.
Camille M. 4:41
So in general in the industry, do PSIRT teams measure themselves on having disclosed a vulnerability, or are they measuring themselves on whether customers have adopted the update?
Lisa Bradley: 4:56
There’s no way to really know if the customers have adopted the update. So typically PSIRTs do the measurement on actually getting the security update out, and the time it takes from knowing about the issue. You could know about the issue because of your own SDL practices. Or you could know about an issue because a customer reports it, or vendors — for instance, Intel reports their vulnerabilities to us so that we can coordinate and be prepared for them. And then also security researchers, people in academia, or people trying to make their claim to fame who say, “Hey, I found something,” and then they work with the PSIRT team on Coordinated Vulnerability Disclosure — CVD is the common term — so that we come in, work with them to address the issue, and then put out an update before that researcher does a public disclosure themselves. You know, we don’t want to be zero-dayed; we really want to work with those researchers well.
Camille M. 5:55
So two different questions about that. One is, is there kind of a standard timeline, across the industry?
Lisa Bradley 6:03
I would say on the whole, I think the industry is sort of at a 90-day mark. But it’s really hard. We often get customers saying, “Hey, we want things addressed in this amount of time,” but there are so many factors in it. Where are you on the release schedule? How long does it take you to actually reproduce the issue and find it in all the different code bases where it exists? Then coordinate, get the build out — I mean, some of our product lines take a lot longer to build and test because there are so many different pieces to them, especially when you think of an appliance with all its different parts.
So I think that timeframe is a very difficult one to completely pinpoint. I’d say each company strives to meet certain timeframes, to meet our customers’ needs, to keep up with the industry, and also to keep up with the researchers — when they’re reporting issues and how quickly we can resolve them.
Camille M. 6:56
About those timelines — Dell obviously makes software and hardware. I’m wondering if the timelines for this sort of 90-day standard were initially created for software patches specifically. Now we’ve got, you know, hardware companies who are able to do patches, but hardware obviously has a huge development period; it takes way longer, basically, to make hardware.
Lisa Bradley: 7:22
Yeah, so I mean, I didn’t set the 90 days. Like I said, it’s just sort of been a trend that we’re starting to see in the industry as a whole, but you could definitely see having to react faster to things, depending on the situation. So I think when companies decide how quickly they’re going to address something, they take into account factors like whether there’s proof-of-concept code, or whether there are active exploits. You know, there are just so many factors that come into play in how quickly you want to address something.
I would say the majority of hardware issues we try to solve through software updates. But, you know, if we take the more famous one — the one that I’m sure you guys knew would come up eventually in a conversation with me — Spectre and Meltdown, that obviously was going to take longer than 90 days. So even when Google Project Zero had their 90-day disclosure timeline, I mean, it just wasn’t going to work.
And there are always going to be situations where it’s not going to work, where we potentially take longer to address the issue because there are just so many complications. That’s certainly more common in the hardware space than in the software space, but it’s not unforeseen in the software space, either.
Camille M. 8:33
I normally ask in these conversations for, you know, advice to CEOs or similar C-Suite people at kind of Fortune 100 companies, but in this case I would want your advice, since you do work with the PSIRT special interest group and kind of for the industry: what advice would you have for smaller companies — you know, maybe thousands of employees, maybe not super small — who are trying to set up incident response or PSIRT or CSIRT teams without maybe the resources of a Fortune 500 company? Where do they start?
Lisa Bradley: 9:10
Well, I worked for Nvidia for three years, so I definitely had that smaller-company type of feel. So I completely understand. I would say, start with the basics. You know, have an email address — something like secure@, security@, or even PSIRT@ — so that researchers and customers and other people can get ahold of you if they find an issue. Utilize the FIRST PSIRT Services Framework; look through it. There’s also a maturity document that says, “hey, if you’re going to start somewhere, start with these.” And figure out how you’re going to announce your security updates. I think that’s important.
Establish, you know, a simple web page with “this is how you get ahold of us; these are the latest updates that we’ve done in regards to security.” You don’t have to solve everything overnight. But start to build your model, strategize how you’re going to do maybe your one-year plan, your two-year plan, your three-year plan to be able to mature.
I think the biggest struggle that a lot of teams have is just a change in culture — getting the company to really know the importance of security, why it’s important to do it, and that security at times can be more important than doing feature updates. That’s very difficult for teams to hear, because they think that profit comes from features. But lately, we’re seeing a lot of customers that won’t sign contracts or buy our product lines unless we have the right security practices in place. So security is somewhat a selling point, even though we don’t make any money addressing vulnerabilities.
Camille M.: 10:49
Right, that makes sense. Do you think that, in general, more updates mean more problems for a company?
Lisa Bradley: 10:59
No, no, not at all. Actually, I would say if you were looking at a company, and they weren’t putting any security updates out at all, that’s where there’s a problem. I mean, if you look at Linux, which has a lot of eyes on it, you’ll still, every once in a while, see an issue that was there for 10 years and no one saw it.
The key is that we keep getting smarter, we keep getting better at our development lifecycle, our testing, and our skills. We’re learning more, and our new generation is actually learning about security and design, and how to hack and how to find issues. So I think what we’re going to see is this continued trend of actually more vulnerabilities, until we get to more of a flatline, but they’ll never go away.
If you’re an old company, you have older code that you need to go through. And if you’re a new company, your goal is just to get out there as quickly as possible. You know, like Zoom — all of a sudden, they were trying to just get features out and get out there into the business, and then they got called out for their security practices not being as strong. That’s sort of what happens when you do startups and start somewhere. Do I want to say to every company that’s starting: have security in the back of your mind, know that it’s going to be there, know that you need to start early? Yes. But is that the reality when you’re financially not set up to do more? I get it.
Camille M. 12:31
Yeah. And that’s probably what makes your advice practical, I think. So another question. We talked about metrics, essentially, for a PSIRT team being how quickly you can address a problem or put out an update. You talked about getting the disclosure right. Are there other metrics that are used in the industry? I would think of something like maybe internal finds versus external finds, or findings from products pre-release versus post-release. Or do we try to stay away from that kind of metric?
Lisa Bradley: 13:06
No, no, I think that there are metrics like that. I would say the goal is that you as a company want to find them. The reality, though, is that others are going to find them too, so establishing a good working relationship with those researchers and having a good reputation for working well with researchers is really important. Because, like I said, they’re always going to find things too.
I would say we — “we” meaning I know that we do at Dell, but also the other players in the industry — definitely look at those things: how well are we doing with our own practices? You guys yourselves did the Intel blog, and I actually utilize that a lot when I’m talking about industry trends: what percentage of your vulnerabilities were internally found? And I think that’s a very mature practice that you guys are doing, to be able to say, “Hey, we’re finding these internally, and we’re not afraid to tell you about them.” A lot of other companies might potentially try to hide the internally found issues, but those are just as much of a risk to the customers as externally found issues.
Camille M. 14:12
So just in general, could you talk a little bit about transparency? And you’re probably going to say that it’s important to be transparent, based on what you just said. But aside from that, you know, how do you be transparent and mature about it? Because obviously, if you disclose something before there’s a fix, that could be an issue. How do you deal with all of that? How do you set your parameters?
Lisa Bradley: 14:38
Yeah, so I’m really big into customer transparency, not only for things like our security advisories and security updates, but also in what our security practices are, so that our customers know what our practices are in SDL and vulnerability response. I would say, though, that you’re right: you have to make sure that you’re doing the right things. You don’t want to break embargoes, you don’t want to zero-day yourself or pre-disclose.
And what’s interesting is a lot of customers will ask and say, “we want to know about a vulnerability right away!” And it’s actually not a good industry practice. Because when you tell that one customer, you’re taking a risk. And what if you tell that one customer when you have other customers who are their competitors? As much as I’d like to say that everybody’s a good actor, it’s very much in the news lately how internal actors have been part of things like this. So every time you share about an issue with somebody outside of your small trusted group, it’s sort of a risk. Make sure that you’re smart about that and not disclosing it, especially if there’s nothing a customer, or anybody, could do about it. Now, if there’s a mitigation, that’s a different story — you definitely want to say, “hey, do this protection right away.”
I always say, let the company do the due diligence to look into the issue, to make sure that they understand it, to make sure that they’re addressing it in everything that’s supported, and then properly do the disclosure and communicate about the update.
Camille M. 16:11
You used the term zero day a couple of times. Could you just clarify what that is?
Lisa Bradley: 16:15
It’s when a vulnerability is out there in the public, and I didn’t know about it, and I don’t have anything to address it yet. That’s when the company really starts to go into panic mode, because the media starts picking it up. That’s why, like I said, it’s really important, if you’re somebody who’s starting off, to have an email address and to establish relationships with researchers, to make sure they know how to get ahold of you, so that you can work with people before they publicly disclose something on you without it having been patched.
Camille M. 16:46
Okay. And then my last question: you had referenced SDL, and I was just wondering if you could explain how PSIRT integrates with or runs parallel to the Secure Development Lifecycle, and give a high-level definition of what that is.
Lisa Bradley: 17:03
Yeah, so actually PSIRT and vulnerability response are the last part of SDL — they’re part of the maintenance phase. A lot of times you hear them treated as separate, but we’re really a child underneath the full SDL. Once you release your product line, or appliance, or code line, you get into the maintenance phase, and that maintenance phase — how to maintain it and keep up with the security updates — is where PSIRT falls. All the stuff before that is, you know, pre-GA code, or hopefully pre-GA code: when you find an issue in code that’s new, you scan through it and you try to fix the issues before you release. PSIRT really is the part that says: we released it, now we’re maintaining it, what do we need to do? So I don’t ever have to deal with vulnerabilities in code that’s not released to the field yet; that’s on the SDL side, and they drive to get those fixed before we even release the product.
Camille M. 17:59
Okay, so at that point, if you found an incident or a vulnerability, it would sort of by definition have to be internally discovered?
Lisa Bradley: 18:08
Internally, yep, internally discovered, because it’s all new code, right? Now, of course, sometimes when you run SDL and you do some of the testing on new code — you know, the scans are always getting better — you find issues that are already in code that exists in the field. And that’s when we sort of have a parallel track. The new code needs to make sure it gets addressed before it’s released, but then the old code that’s already out in the field needs to make sure it gets addressed, too.
Camille M. 18:34
Makes sense. Okay, I lied, I do have one more question for you. Um, I’m just curious: what is sort of the biggest argument going on in the PSIRT community today?
Lisa Bradley: 18:46
I would say that there are a lot. I mean, time to fix, which you brought up, is one of them. You know, software bill of materials is another one: do we tell the customers all of what exists in the product lines? I think there’s a maturity that comes with that. Yes, I totally get that I want to be transparent and clear, but, you know, I want the customers to have trust that I’m watching that and paying attention to it and doing the security updates.
I think the last one — because I’m giving you more than one, of course — is one that sort of hits me a lot: when researchers find a vulnerability and work with you, I’m really all for them publishing their work and taking pride in having discovered something. I just would prefer that they wait a little while to actually release the proof-of-concept or exploit code. Because oftentimes, researchers will disclose and produce their papers or blogs or articles at the same time that we’re actually releasing the fix to our customers. And I’d like to give the customers a little bit of time to actually address the vulnerabilities before that exploit code is out there.
Go ahead, release it — talk about the study, talk about what you found, and then say, “hey, in a few weeks, we’ll release the proof-of-concept code.” I’m totally good with that. I prefer that, and I keep pushing my plea. Every time I can get on my soapbox, I’ll take it.
Camille M. 20:12
This is your plea to the researchers. Okay. Okay. Well, thanks for joining us today, Lisa. I’ve really enjoyed the conversation.
Lisa Bradley: 20:20
Great. Thanks for having me.
Camille M. 20:24
Please stay tuned for the next episode of What That Means. I believe the next release we’re going to do is on Blockchain with Nic Bowmen.