Tom G: [00:00:40] Hi, and welcome to the Cyber Security Inside podcast. I’m Tom Garrison, and with me is my co-host Camille Morhardt. How are you doing, Camille?
Camille M: [00:00:52] I’m doing really well. I just experienced my first time on a SUP last weekend.
Tom G: [00:01:00] Uh, SUP? All right, you need to educate us.
Camille M: Stand Up Paddleboard.
Tom G: Oh yes. I own one.
Camille M: (laughs) But you don’t know what it is?
Tom G: Well, I call it a paddle board. I’m not in the lingo of it.
Camille M: [00:01:20] Oh yeah, no, I think it’s that. Is yours hard-shelled or inflatable?
Tom G: [00:01:23] It’s a hard shell one.
Camille M: [00:01:24] I feel like the inflatables might be called SUPs more.
Tom G: [00:01:26] Yeah, no, I haven’t tried one. I’m here to tell you, especially if you’re near the weight limit for the paddleboard, it is not that stable. You get off just a little bit left or right, and boy, let me tell you, it’s like a rodeo. So, today’s topic, we decided to focus on risk management. It’s a topic where, at the highest level, people really understand the idea of insurance. We all have insurance for our cars or our house or whatnot, but when it comes to cyber security, that’s an area I don’t think a lot of people know much about.
Camille M: [00:02:12] Yeah. In fact, the first time I heard of it, really, I was talking to somebody–let’s see, how can I make this vague enough to preserve confidentiality? I was talking to a person who is in charge of a utility, but not in the United States. And I was asking, what do you do with ransomware attacks in your industry, when it’s public dollars–you’ve got tax money, essentially?
So are you just supposed to pay a ransom, or do the best thing you can do, or never pay a ransom on principle? And he said, “Well, actually, there’s a lot of insurance now that kind of helps us address that in the public sector.” So I think it’s fascinating. There may be other reasons, which our guest gets into, why companies are looking at it differently than the public sector would. It’s not just do you pay or don’t you pay; it’s also do you report it or don’t you report it.
Tom G: [00:03:12] Right. And the other element, too, which I think is important to think about: as insurance becomes more and more mature within cyber security, what are the likely impacts to the rest of the industry in terms of what the insurance companies are going to mandate?
Like “if you want to be covered, you must do A, B, and C.” And I look forward to seeing how that progresses over time, because I guarantee you those insurance companies do not want to pay big payouts. So they’re going to require at least a minimum set of requirements on platforms and servers and best-known practices in order for you to be covered.
It’s a great discussion, so I think we should just let our listeners enjoy it and move on.
Tom G: [00:04:15] Our guest today on the podcast is Malcolm Harkins. He has over 30 years of experience in the tech industry, most of it focused on security. As announced at Black Hat, Malcolm is now the chief security and trust officer at Epiphany Systems. He currently sits on the board of the Cyber Risk Alliance, as well as Trust Map. He’s also an advisor to several security startups.
He was previously the Chief Security and Trust Officer at Cylance and the VP and Chief Security and Privacy Officer at Intel. He’s the host of Security Speaks, a podcast focused on having real and raw cyber risk dialogue with practitioners. And he’s a return guest to our podcast.
So welcome back, Malcolm.
Malcolm H: [00:05:02] Hey, thanks Tom. Happy to be here.
Tom G: [00:05:04] Our topic today focuses on cyber security insurance. And I don’t think a lot of people, myself included, are really deep on the topic. So I’d love to start with a little background at a high level: what is the idea behind cyber insurance?
Malcolm H: [00:05:25] Like any insurance–whether it be automobile, property and casualty, or business interruption insurance–there are real issues that organizations have been experiencing in cyber that have had financial impact. An insurance policy is a way to, in essence, pay premiums to mitigate some of the potential financial impact of a business interruption, a lawsuit, or expenditures related to responding to a cyber event.
Tom G: [00:05:55] So the basics are, whatever costs you incur as a result of a cyber attack, you would have insurance against–just like if your house burns down, you have fire insurance. But what are the kinds of expenses that are typically covered in cyber insurance?
Malcolm H: [00:06:15] A wide variety of them. In some cases, with the upsurge and increase we’ve seen in ransomware, it would be the equivalent of business interruption insurance–covering the downtime and financial implications because, say, your point-of-sale systems were ransomed and your revenue was interrupted. There might be clauses related to covering the ransomware payment itself. There would be clauses, if you had a privacy breach, related to any lawsuits, or to the cost and expenditures for credit coverage and media and those types of things–letting your customers know that you didn’t handle their data in a way that protected it.
So there’s a variety of different clauses with a lot of different conditions in these policies, depending upon what you’re trying to insure against.
Tom G: [00:07:13] Is this something that larger companies generally all have, or is it still kind of nascent, where only some companies have it?
Malcolm H: [00:07:20] I would imagine the smaller organizations probably don’t. But if you’re mid-sized and above, there’s a decent probability you do. Now, when you get into very large enterprises–even, you know, when I was at Intel–Intel, like many organizations that sit on billions of dollars in cash, has a level of self-insurance and uses that as a mitigation. I would say at least 30, maybe 50% have some type of cyber security insurance clause. Now, how much of that would cover ransomware or a data breach versus a disaster recovery issue–right, I had a fire in my data center and that affected my operations? So there’s a mixture when you widen it out to cyber insurance.
Tom G: [00:08:05] We’re all familiar with automobile insurance or home insurance, those kinds of things. And those insurance companies do play a role in terms of what they require: in order for you to have insurance, you must meet certain requirements. Does that sort of thing exist now?
Malcolm H: [00:08:25] It’s still a little bit of the wild, wild west, but it’s getting dialed in. There are a lot of exemptions and clauses in there–like if you didn’t patch the system, if you didn’t have appropriate identity and access management, if your antivirus wasn’t up to date–that start reducing the coverage. Or there are exclusion clauses that would mean you don’t get covered, because the way an insurance company makes money is to collect premiums and pay out as little as possible.
You’ve got to look at it from: are you doing what is in the terms of the contract of the coverage? If you are, then generally they’ll pay out. But there have also been a number of lawsuits back and forth from those who didn’t get covered and then sued their insurance company saying, “Hey, where’s my money?”
Camille M: [00:09:15] Some insurance policies actually cover payout of ransomware. So is there kind of a mutual evolution of, “Oh, well then let’s increase the ask, because companies that might otherwise have gone out of business if we made the ask now have insurance coverage?”
Malcolm H: [00:08:30] There are cases where the insurance companies have, in essence, been the response coach for certain events. Somebody has a cyber insurance policy, they’ve got a connection with a law firm, and the insurance company helps coordinate how they’re going to respond, because, again, the insurance company might actually want that control in order to minimize mis-steps and minimize the payout. It’s not only for a company that might be lacking in maturity–that hasn’t had similar events in the past and doesn’t know how to deal with it–it’s also useful for the insurance company to help guide their customer on the appropriate actions to take and when to take them.
Camille M: [00:10:15] Isn’t this a little bit weird, though? I mean, I would think the police would be involved, but for negotiating with a criminal, you now bring in an insurance company as opposed to the FBI or something?
Malcolm H: [00:10:28] I would venture to say 99.99999% of what I’ll call “system compromises”–because breach is a legal term that generally has an implication for privacy, so I’ll say compromises of systems and organizations–never get reported, and even more rarely get reported to law enforcement. For the simple fact that, in one case, it might not be material in terms of the impact; it might just be a nuisance. It’d be like reporting graffiti–you might not always do that because it’s a nuisance. But even if it is material, or potentially material, a company might want to maintain control over the investigation in order to limit their liability, versus having law enforcement come in with an unknown set of motivations and start doing things, or seizing systems, or collecting evidence in ways that could disrupt the business.
So it’s different than a physical product. There have been some proposals around mandatory reporting, though, including some stuff that’s come out recently for the federal government and critical infrastructure. So there’s a gray area as to whether or not you have an obligation to report, other than truly in the privacy space, or if you’re a public company and it’s material or significant to your investors.
Tom G: [00:11:50] It seems to me that of paramount importance to the company is a) the company’s reputation, but also b) wanting to make sure they understood what actually broke that allowed the attack to happen in the first place. You don’t want that publicized in any way until you’re certain you got that issue resolved. And you also mentioned that the vast majority–I think you said 99 and a lot of nines after the decimal point–don’t get reported. Can you speak a little more about the motivations for not reporting?
Malcolm H: [00:12:30] Scaring shareholders; alerting your customers to something too early or prematurely. It could also be, again, a timing perspective: you might be having a big product announcement or announcing an amazing quarter from a revenue perspective. So there are all these sorts of motivations within an organization–you’re trying to protect the brand, you’re trying to protect your shareholders, you’re trying to protect your bottom line.
If you look at it, privacy breaches have a specific legal requirement and legal definition. Intellectual property is different. I would venture to say there are a lot of companies where, even if it wasn’t demonstrably shown that intellectual property was exfiltrated and stolen, those systems were seen and touched by a bad actor–which might mean your intellectual property is compromised or a trade secret is compromised.
But then you start doing the wiggle room, perhaps with attorneys, to say, “Well, do we know that they actually got it?” You’d have to prove the negative–that they didn’t–before the legal team would allow you to report it. Those are all the barriers in place, which is why you very rarely–other than with a privacy breach or a ransomware attack that’s already obvious to the public–see an organization come out and say, “My systems were breached and data was stolen.” If it doesn’t meet a clean legal definition and you can play with the optics of materiality, an organization will do that and not report.
Camille M: [00:14:06] To pay or not to pay–that is not the question. The question is, what impact is this going to have, and is it preferable for me not to report it, potentially? Especially if I have insurance covering it.
Malcolm H: [00:14:18] In some cases, it would be preferable. I’ll go back to my Intel days–public knowledge, so I’m not saying anything that’s not already out in the public domain. When Google came out and mentioned the Aurora breach in January/February of 2010, that attack had started in 2009. Google went public with it. Intel was–to this day, I still think–the first company that appropriately reported it in our financial filings, a 10-K or 10-Q or something like that. We described at a general level that an attack had occurred, and we had appropriately ascertained that we had an obligation to report under Sarbanes-Oxley because of the potential materiality impact, since we were a public company.
When I went to RSA that year, a few weeks after that, half of my peers in the industry and the Fortune 500 were not happy with me that I’d done that; the other half said, “That’s absolutely the appropriate way to do it.” So even 10, 11 years ago, that was how the calibration was in terms of whether or not Intel had done the right thing. I believe we’d done the right thing. I know Paul Otellini, who was CEO at the time, believed it. And we still have that debate over what’s appropriate and what’s not appropriate to report–the SEC and others have provided this whole additional guidance around it. But I still think too many organizations are optically playing with the lens of risk in order to not report when they probably should be reporting the potential implications to their shareholders.
Tom G: [00:15:59] Do you think it’s likely, as coverage becomes more prevalent, that insurers are going to change what they mandate? Like, for example, requiring that companies have rigorous processes to update their platforms and do it in a timely fashion–that will be something that needs to be in place in order for you to be covered by insurance.
Malcolm H: [00:16:30] In some cases, there are things the insurance carrier will want to know. They’ll want to see a penetration test; they’ll want some routine updates, in essence, on the health of your cyber program. Now, on the one hand, you can say that’s going to raise the bar.
In some cases, it probably has. There were some articles a few weeks ago–one in SC Media–that said there was scant evidence that the cyber security insurance marketplace has had a positive effect or raised the bar on security. I think that’s probably just because of where we’re at in the lifecycle of insurance companies, in essence, mandating or making those requirements of organizations.
But I think we’re going to see more and more of that.
Tom G: [00:17:15] I think this insurance thing is an interesting topic because it raises a question from a different industry. So let me change the subject–it’s still insurance-related, but it’s different than cyber. It has to do with things like pet insurance. For those of us who are older school, who had dogs when we were growing up: we brought the dog to the vet, and for the most part it didn’t cost that much, relatively speaking. You spent, you know, 50 to 100 bucks, whatever.
Now people have pet insurance, and I’m practically passing out when I see how much it costs some people to have their dog go to the vet–it’s hundreds and in some cases thousands of dollars. I just wonder if there’s a dynamic that’s existed in other industries that we might be on the forefront of as well: inadvertently driving costs up as a result of insurance.
Malcolm H: [00:18:20] You know, it’s an interesting thing. On the one hand, again, insurance companies are there to make money. Most people have heard me say the cyber security industry profits from the insecurity of computing: the more risk that occurs, the more the security industry’s revenue grows at a macroeconomic level.
I think the potential danger, putting on my hat, is some of the security players who have dated, aged architectures and approaches to control, and their connection with the insurance industry–promoting the use of that dated architecture and then insuring away the potential financial implications. That is probably the biggest risk we have for adding costs and adding risk. Those two players would profit while shareholders, stakeholders, and customers would suffer.
Camille M: [00:19:12] So staying more nimble, and working with insurance companies and security officers to understand what to do to protect as best you can, would probably be your advice?
Malcolm H: [00:19:23] Yeah, it would be. And I think the other trap we’ve been stuck in for a couple of decades is the way in which we look at risk. Risk, the way we’ve been doing it for at least as long as I’ve been in this industry–a couple of decades, and probably longer–is a function of three things: threat, vulnerability, and consequence. And that combination is what drives the risk equation.
But the thing that is absolutely true is that being vulnerable doesn’t mean you’re exploitable. There’s a difference between initial compromise and catastrophe, and that is the attack path: the exploits through the daisy chain of connected devices, identities, and systems in your environment. What we need to start doing is focus on where we’re exploitable, not just where we’re vulnerable. That will allow us to turn the dial on risk more efficiently, as well as more effectively. If I were doing it and I was in a larger company with cyber insurance, that’s what I would focus everything on:
“Yeah, we might have these things, and there might be some level of vanilla, broad-based vulnerability, but if I can demonstrate the exploitability is low, then it shouldn’t count against us the same way.”
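Malcolm’s point about weighting risk by exploitability rather than raw vulnerability can be sketched as a toy calculation. This is purely illustrative–the function names, weights, and multiplicative scoring model here are invented for this example, not drawn from any real insurance or risk-assessment tool:

```python
# Toy sketch: the classic risk model (threat x vulnerability x consequence)
# versus one that also weights by exploitability. All numbers and names are
# hypothetical, chosen only to illustrate the idea.

def risk_score(threat: float, vulnerability: float, consequence: float) -> float:
    """Classic model: risk as a function of threat, vulnerability, consequence."""
    return threat * vulnerability * consequence

def exploitable_risk(threat: float, vulnerability: float,
                     consequence: float, exploitability: float) -> float:
    """Refined model: a vulnerability that is hard to actually exploit
    contributes proportionally less to the overall risk score."""
    return risk_score(threat, vulnerability, consequence) * exploitability

# Two findings with identical "vanilla" vulnerability profiles...
hard_to_reach = exploitable_risk(0.9, 0.8, 0.7, exploitability=0.05)
easy_pivot = exploitable_risk(0.9, 0.8, 0.7, exploitability=0.90)

# ...rank very differently once exploitability is factored in,
# which is the dial Malcolm suggests turning.
print(hard_to_reach, easy_pivot)
```

The same inputs produce a much lower priority for the hard-to-reach finding, which is the efficiency gain he describes.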
Camille M: [00:20:42] So you’re talking about: it would be really hard for somebody to take advantage of this vulnerability. But what about redundant computers or other alternative systems? Like, if somebody compromised any one of my systems, I have a second kidney. How do you look at that kind of approach?
Malcolm H: [00:21:02] Yeah, well, we obviously need redundant systems just for resiliency in our environments–for everything from fire, flood, and internet outages, whatever the case may be. But it depends upon the risk issue, right? If I’m worried about a compromise and data theft, a redundant system doesn’t stop data theft. It just provides redundancy against that system not being available. So that’s an availability risk issue.
Camille M: [00:21:30] It increases your risk. (laughs) Now you have two systems holding the data.
Malcolm H: [00:21:36] So you have to think about all these dynamics. Where do I need redundancy? Where do I have single points of failure? Where do I have single points of compromise? And how do those things connect along an attack path to the fulcrum point of a system compromise leading to, you know, the equivalent of a Colonial Pipeline or a SolarWinds, right?
Or look at the Equifax breach, right? That was an Apache Struts vulnerability on an external-facing website that led to one of the biggest data breaches in the U.S.
Tom G: [00:22:10] Do you think insurance is going to help with that?
Malcolm H: [00:22:15] I think they will help push some level of hygiene and corrective action across the broad line. But unless the insurance industry starts moving towards “we want to look at where you’re exploitable”–not just where you might be vulnerable and have a system compromised, but where the fulcrum point of risk is, the exploitability–they’ll miss it, because, again, the attack paths are these daisy chains of all these things. Was it one system, one relevant user clicking on a link, that made Colonial Pipeline happen–a breached password without MFA? No, there’s a daisy chain of all of these things that occurred, that then led to a massive ransomware attack. There’s a lot of connective tissue, and without understanding that connective tissue and that exploit path, you’re going to be focused on the wrong thing.
You’re going to say, “I’m going to patch all of these things and do all these things,” and you’re still going to have a connection and a pivot point, because you can’t eliminate risk. So: how do I break and reduce as many attack paths as possible? That’s the way to manage risk.
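The “break the attack paths” idea can be illustrated with a small graph sketch. The environment, system names, and single-mitigation scenario below are entirely hypothetical, and real attack-path tools model far more than simple reachability:

```python
# Model a hypothetical environment as a directed graph and count the distinct
# attack paths from an attacker's entry point to a critical asset. Breaking a
# single pivot point (e.g., requiring MFA on a jump host) removes whole paths.

def count_paths(graph: dict, start: str, goal: str, seen=frozenset()) -> int:
    """Count simple paths from start to goal via depth-first search."""
    if start == goal:
        return 1
    seen = seen | {start}
    return sum(count_paths(graph, nxt, goal, seen)
               for nxt in graph.get(start, []) if nxt not in seen)

# Hypothetical daisy chain: a phished laptop can pivot two different ways.
environment = {
    "phished_laptop": ["file_server", "jump_host"],
    "file_server": ["domain_controller"],
    "jump_host": ["domain_controller"],
    "domain_controller": ["crown_jewels"],
}
before = count_paths(environment, "phished_laptop", "crown_jewels")  # 2 paths

# Break one pivot point: MFA on the jump host cuts that edge.
environment["jump_host"] = []
after = count_paths(environment, "phished_laptop", "crown_jewels")   # 1 path
```

One mitigation placed at the right pivot eliminates an entire path, which is a different exercise from patching every vulnerable node individually.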
Tom G: [00:23:27] The good hygiene you’re describing is something every company should already be doing today; there’s nothing really unique there. I guess my question would be: with the addition now of cyber insurance, you have insurance companies that absolutely do not want big payouts. Do you think it’s likely you’re going to have people who are very, very sharp on what risk is and on identifying those critical elements–because it’s their business, they’re in the insurance business–who crawl through their potential customers when setting premiums, and actually raise the bar for a company that thought it was doing good due diligence but really wasn’t, to the level the insurance company requires, since it has big money riding on the insurance?
Malcolm H: [00:24:20] Broadly, I don’t think most companies are incentivized to do a lot, which is why we’re continuing to have these issues, and people say we’re moving slower on innovation in this space than we should. But I do think there are aspects of the insurance industry that will be that precise, because the payout, the premium, the implications of it are so significant that they want to make sure things are done right.
The challenge in cyberspace, and in the technology space in general, is that it’s different than insuring my building from an earthquake or a fire, because that’s a more static environment. And even in those events, there’s force majeure. If a terrorist attack happens, insurance backs off in some cases; it says, “Hey, we’re not insuring that.” If a nation state attacked and dropped a bomb–“I’m sorry.”
So there’s also this nuance: even in the United States, where there’s been some suggestion of prioritizing ransomware attacks to be investigated with the same priority as terrorism, there’s now some worry that if an attack is deemed to be orchestrated by a terrorist group or a nation state, insurance companies will use that clause and say, “Hey, there’s a clause that says this type of actor we’re not insuring against.”
Camille M: [00:25:50] You mentioned ransomware as a service, or cyber crime as a service, as something that has evolved over, I don’t know, the last decade. Would that be fair? Can you say more about what that is and what we should be on the lookout for?
Malcolm H: [00:26:10] There are studies that publish the price of renting a hack on a mobile phone, a hack on a device, buying an identity, or, in some cases, a distributed denial of service attack against somebody’s corporate domain or organization. Those have been going on–I mean, again, I’ve been in this space for over 20 years, and that stuff was starting to occur 20-plus years ago. “Crime as a service” really started being thrown about as a term when I was back at Intel, so circa 2006, 2007. But I do think we’ve seen an acceleration of it, because you can make money off of it, get away with it, and allow other actors to basically buy an attack as a service. You’re not the attacker; you’re just providing some capability.
So there might be some things where, depending upon the country of origin, it’s legal for whoever is selling them, and for the buyer who’s weaponizing them and going after folks, depending upon where they’re at, it also might be legal. So you have this weird geopolitical, nation-state question of location: Where did the attack occur? What infrastructure was used? And some of the attacks are leveraging other victims, right?
Tom G: [00:27:34] The idea of the business of attacking, I think, is a fascinating follow-on topic. We probably can’t get into it today, but I think the role of insurers in this marketplace is going to be interesting to watch play out. There’s certainly, I think, probably more benefit than downside.
Malcolm H: [00:27:53] The insurance industry has done a lot on building standards and safety–fire prevention, improving car safety standards. So if we look historically and broadly on the physical side, there’s evidence that the insurance industry has made a tremendous impact on improving safety. I’m hopeful that will occur here too, and that that’s really what the outcome of the insurance industry’s engagement in this area will be.
Tom G: [00:28:28] Before we leave, we have this segment about fun facts. I know that you’re a veteran of the podcast, so you knew we were going to ask you this. What kind of interesting fun facts do you have to share with our listeners?
Malcolm H: [00:28:40] Birds are the last surviving lineage of dinosaurs. And it makes me think: are we, in some cases, in the way we’re approaching cyber security, anchoring ourselves to the dinosaurs that haven’t innovated, holding ourselves back?
Tom G: [00:29:02] That is a powerful image. I’d like to think we’re more on the bird side, but I guess that depends on who you talk to. So that’s a good one. Uh, Camille, how about you?
Camille M: [00:29:16] I just discovered, a couple of weeks ago, a sport that did not exist before a couple of years ago, and it is called wing boarding. You stand on a little board with a hydrofoil under it, so the board actually comes up out of the water–you look down and the water is three feet below your board. You’re sailing on this tiny little, well, it’s not that tiny.
Tom G: [00:29:39] It’s a hydrofoil!
Camille M: [00:29:40] Right. And then you’re holding onto this inflatable wing, almost like a paragliding thing. There’s no harness; you just have handles. As you angle it, it catches the wind and zips you away. You can let go with one hand and hold it in a neutral position–which in windsurfing is kind of straight in front of you–and the sail luffs and nothing happens. Or you can pull it in, angle it correctly, and just go shooting off at a high rate of speed.
I got myself a lesson and did not achieve much speed or much time out of the water in the first couple of hours, but I’m determined.
Tom G: [00:30:26] I have seen pictures of it, and it does look amazing. My fun fact is on a completely different topic. I have to source it: it comes to me from my son, and it has to do with the origin of eyelash extensions. Like, why is that a thing?
Interestingly enough, there was a Roman author–whose name I’m sure I’m mispronouncing–Pliny, P-L-I-N-Y, the Elder. He claimed that long eyelashes were a representation of chastity, because he believed that women’s eyelashes fell out during sex. So women were always wanting to portray this wholesome image and whatnot, and they were doing all kinds of things to extend their eyelashes so they didn’t look like they were sleeping around, which I thought was pretty fascinating.
The article goes on to say that in the 1400s, for a period of time, the opposite was true. Back then, hair–and they were talking about the hair on the top of your head–was viewed as something sort of promiscuous and therefore not good. So women were instead accentuating their foreheads: they would do everything they could to make their foreheads look bigger, which meant they would actually remove their eyebrows as well as their eyelashes to make the forehead look as long as possible. In the 1800s–and this is like sadistic now–long eyelashes were back in vogue again, and women were actually sewing: they would numb the eyelid and then literally sew human hair into the lid to make the lashes long.
Camille M: [00:32:32] Now we have mascara.
Tom G: [00:32:35] Anyway, I thought it was kind of a fun little history lesson on eyelashes. Malcolm, with that we will bid you adieu. Thank you so much for coming on and sharing your background. Congratulations again on your new role–it’s exciting. We look forward to talking to you again in the future.