Tom G: [00:00:40] Hi, I’m Tom Garrison. Thanks for joining me today for Cyber Security Inside. As always I’m here with my co-host and colleague Camille Morhardt. Hi, Camille, how are you doing today?
Camille: [00:00:50] I’m doing well as usual.
Tom G: [00:00:51] I’m actively planning my weekend activities even though it’s not Friday yet, but it’s the fun thing to do this week.
Camille: [00:01:01] Yeah. It’s not even the end of the day, Thursday yet. (laughs)
Tom G: [00:01:06] (laughs) You know, nobody’s ever accused me of not being a planner. Um, so what, what sort of security topics are on top of mind for you today?
Camille: [00:01:16] Well, we spend a lot of time sort of thinking about attack vectors and cyber security, and I’m interested in diving a little bit more into insider threats.
Tom G: [00:01:31] Hmm. It’s a fascinating topic because you have two types of insiders. You have the bad guy, you know, the person who’s knowingly trying to do something to steal or to somehow hurt the company. And then you have the unwitting insider who doesn’t even realize that they’re doing something wrong.
Camille: [00:01:53] Yeah. And I’m interested in, you know, how do you track them down, be they witting or unwitting? And what do you do about them? And even just how much time should we really spend worrying about this?
Tom G: [00:02:07] Yeah, I certainly think it’s, it’s one of those really difficult problems because now you’re dealing with human behavior and whenever you introduce human behavior into the equation, uh, it gets very, very complicated, very fast because you can’t usually predict with any level of accuracy what’s going to happen.
So, yeah. Um, no, this is a good one. So I think we need to find somebody who comes from the security industry, who, who really knows this topic and can dive into it with us.
Camille: [00:02:39] Yeah. Somebody who’s even tracked down insider threats or looks for them.
Tom G: [00:02:44] Yeah, that sounds, that sounds fun. Let’s do it.
Our guest today is Dr. Eric Cole. Eric was a guest on a previous podcast where we talked about SolarWinds. But to remind you, he is the CEO and founder of Secure Anchor Consulting, helping businesses improve their cyber security. He started his career as a hacker in the CIA, and then moved to the defensive side of cyber security. Today, he works with a variety of clients, from Fortune 500 companies to top international banks. He develops and teaches his own cyber security courses and is the author of six books, with a new one being released in May of this year. So welcome back to the podcast, Eric.
Eric C: [00:03:29] Thank you for having me.
Tom G: [00:03:32] Yeah. The last podcast we did on SolarWinds was fascinating. And I think today’s topic around threat detection, and specifically insider threats, is going to be really, really interesting. So, you know, you have a background in the CIA, which I just think is fascinating. There’s probably all kinds of really interesting things that you can’t tell us, but, you know, in the context of things you can share with us, can you talk a little bit about threat detection–how it works and how it should work today? And then we’ll dive in later into the insider threat.
Eric C: [00:04:05] Absolutely. So when you’re looking at cyber security, I sort of break it down into two broad categories, prevention and detection. And of course, there’s going to be somebody listening on the show going, “what about forensics and incident response?” Right, but, but I sort of group that under the detection piece and most companies want to focus all their energy on prevention–on stopping the adversary. The problem is you can only prevent things that are 100% bad, 100% of the time. Which means if something is bad, 90% of the time, you can’t prevent it because that would be blocking 10% of legitimate traffic.
So your false negatives, the attacks you’re missing with preventive technology, are very, very high. That’s why you have to bring in detection, to be able to catch those things you’re missing. And then here is where a lot of people miss it. They put all their energy on incoming traffic. So anything coming into an organization, they’re going to try to prevent and detect attacks.
Well, the problem is the inbound traffic is noisy and it’s hard to see what the adversary is doing. So the secret to all of this is inbound prevention and outbound detection. You prevent what you can coming in, and then you outbound proxy everything leaving to detect the outbound traffic.
Tom G: [00:05:24] Yeah. You know, in all the years that I’ve been working on security, detection is one of these just beasts, because you end up with a tremendous amount of noise in the system. You either have, as you said, legitimate traffic that gets flagged as a problem, or, for certain classes, you want to be able to detect the things that shouldn’t be happening and yet, for whatever reason, you’re not detecting them. It is certainly a problem. But false positives in detection are a huge problem, because at least my experience says what ends up happening is it’s like you’re training the IT department to ignore the signal. When all of a sudden, you know, the flag comes up and says, “I have a problem”–if that flag gets thrown too many times and it turns out not to be true, pretty soon you train the people to ignore the flag.
Eric C: [00:06:22] I call it the car alarm issue. Well, when we used to be able to go to malls, right? If you were walking through a mall and somebody’s car alarm was going off, what did you do? You just kept walking. You didn’t call the police, because they go off with such high frequency that we become numb to it. We totally ignore it. And that’s the problem with detection.
So the way that I always do it is it’s gotta be driven by your resources because if your resources can only handle 200 alerts a day from your detection system, then you only should generate 200 alerts. The problem is our clients are generating 15,000 alerts yet their staff can only handle 200 a day. And I think if you run the math, you can see that’s going to get out of control very quickly.
So you need to tune down the false positives and focus on the big attacks. Now people always push back and go, “but Eric, if you’re tuning your system to only alert on 200 attacks cause that’s all you can handle, you’re going to miss attacks!” Yes. But today you’re missing all of them. So I would rather catch the 10% that are most significant than miss 100% because of the noise.
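As an illustration (not from the episode), the capacity-driven triage Eric describes could be sketched in a few lines of Python. The `Alert` type, the `source` and `severity` fields, and the scoring are all hypothetical; the point is simply that the alert budget, not the alert volume, sets the cutoff:

```python
import heapq
from dataclasses import dataclass

@dataclass(frozen=True)
class Alert:
    source: str    # hypothetical: which sensor raised the alert
    severity: int  # hypothetical score; higher = more significant

def triage(alerts, daily_capacity=200):
    """Keep only the most significant alerts the team can actually handle.

    Everything below the cut is consciously dropped, on the logic that a
    handled top slice beats an ignored 100%.
    """
    return heapq.nlargest(daily_capacity, alerts, key=lambda a: a.severity)
```

With 15,000 raw alerts a day and a 200-alert budget, `triage` would surface only the 200 highest-severity ones, which is exactly the trade Eric is arguing for.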
Camille: [00:07:32] I’m wondering just what are the different kinds of categories of threats and how do you structure that analysis? You talked about prevention versus detection. We referenced, you know, insider, uh, but how are you classifying the different kinds of threats and then looking for them?
Eric C: [00:07:53] Excellent question. So if we start at the top, you sort of have two branches. One is what I call operating system or host-based, so let’s call them server-based. And the other one is network-based. A server-based threat is then broken down into two categories: malicious code impacting the operating system, which runs every time you restart the system, or something that impacts the application running in memory, which goes away when you turn off the computer.
So just on that simple example, any malicious activity that’s impacting my operating system that’s going to run every time I reboot my computer, that is going to be a much higher priority than if somebody breaks in impacts an application and when I turn off my computer, it goes away. So it would be impact to the system.
Then on our network-based attacks, there’s internal lateral movement, where you’re moving within the network. And then there’s the command-and-control channel, where you’re leaving the network going back to the attacker. Once again, the command-and-control channel is more impactful than lateral movement. Now this is where world-class security engineers get themselves into trouble, because they can’t help themselves and say, “But they’re both important!” Yes, they’re both important. But if you can’t do both, go for the greatest good. You sometimes have to make sacrifices. So with our clients, I’m willing to let the low stuff go and focus on the high damage.
Tom G: [00:09:27] Eric, I’m kind of fascinated by this whole human element of threat detection. And the thing that fascinates me is that most employees, by far–you know, the vast, vast majority of employees–are doing the right thing, or they’re trying to do the right thing. And then you have, in some cases, a bad employee. So you sort of have two populations: the vast majority of the population is good employees trying to do the right thing. But even in that population, some of those employees may inadvertently be doing bad things. So their intent wasn’t bad, but their action is bad.
And then obviously in the small population of employees that are trying to purposely do harm, you have, you have that group. So when you think about the insider threat, how do you approach those two populations?
Eric C: [00:10:20] So I break it down into two categories. I call it the “malicious insider threat” and the “accidental insider threat,” ’cause both could cause damage. The malicious insider, that’s the one that’s going to deliberately cause harm. And there are usually two categories. One is the disgruntled employee. They’re just mad at you. You didn’t promote them. You didn’t hire their family member. They’re just angry at you. And they’re typically destructive. Once in a while they might give or sell information to a competitor, but they really just want to hurt and damage you.
And then the second type of malicious insider is what, from the agency days, we call a “recruit.” This is where a competitor or a foreign adversary would recruit them to deliberately steal from the organization. In those cases, when you’re talking about the deliberate malicious insider, because they know they’re causing harm, they’re going to cover their tracks. So in that case, you really have to focus a lot on prevention. Limit the access that they need to do their job. If you go in and look at Edward Snowden, when you do the analysis post-mortem, 80% of the data that he stole that harmed this country, he did not need access to, to do his job. So if there were better access control and preventative measures in place, those attacks could be minimized or reduced. So the deliberate insider is all about prevention, controlling, limiting access. Go with the principle that we call “least privilege”: only give people the absolute minimal access they need to do their job.
On the accidental insider–the one that is thinking they’re doing good, but inadvertently causing harm–that’s where detection is powerful, because they don’t know they’re doing harm, so they’re not going to try to cover or hide themselves. They’re not going to try to go in and delete log files or do things to make it hard. So in that case, that’s where clipping levels or anomaly detection come in, where you build patterns of normalcy from their behavior. And as soon as they start deviating from those normal patterns, you set off alerts to be able to detect the malicious activity.
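To make the clipping-level idea concrete, here is a minimal sketch (my illustration, not Eric’s tooling) of a behavioral baseline: summarize a user’s normal daily activity from a training window, then flag any day that deviates far from their own pattern. The three-standard-deviation threshold is an assumed default:

```python
import statistics

def build_baseline(daily_counts):
    """Summarize a user's normal behavior from a training window,
    e.g. files accessed per day. Returns (mean, population stdev)."""
    return statistics.mean(daily_counts), statistics.pstdev(daily_counts)

def is_anomalous(count, baseline, threshold=3.0):
    """Flag a day that deviates more than `threshold` standard
    deviations from the user's own normal pattern."""
    mean, stdev = baseline
    if stdev == 0:
        # A perfectly flat baseline: any change at all is a deviation.
        return count != mean
    return abs(count - mean) / stdev > threshold
```

The key property is that the alert is relative to each person’s own normalcy, so a spike that would be routine for one role still trips the wire for another.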
Camille: [00:12:34] We were chatting with, uh, like a human factors engineer who was talking about how sometimes we inadvertently set policies to increase security that then classes of people aren’t following within the organization. I think the example she gave was, if you prevent everybody from, say, using a USB key, that might be fine for most of the people in your organization, but chances are the entire marketing department, you know, is going to go around that because they have to do it for their job. How do you go about setting the boundary between convenience and security?
Eric C: [00:13:11] Okay. Excellent question. And that’s actually a great example. We see that all the time where organizations are banning USBs, and you’d be amazed at what people go through to bypass that. But, but to me, what it comes down to is understanding the environment, understanding the jobs and never say no. As a security professional when I was trained at the CIA, they always trained me, “Eric, whenever anybody asks you, whether this is secure, always say, no, always say no. And most of the time they’ll go away and they’ll find something else.”
What I find today is if somebody needs something to do their job–and let’s continue the USB example–and you just tell them no and block it, they’re going to do it anyway and just treat it as a covert mission. So what I would do in that situation is I would go to them and say, “Listen, what functionality do you need? Don’t tell me you need a USB drive. Tell me what are the actions that are needed. So, okay, you need to be able to take 500 megabytes and be able to access that information from home, because you have kids and you need to leave at five o’clock for daycare, but then you work in the evening and you’ve cleared that with your boss. Great. What if I can go in and give you a secure Dropbox? What if I can give you a way that it’s actually quicker, faster and easier than a USB–you don’t have to worry about losing it–where you can upload the files and access them from home. Will that work for you? Yes? Great. No? Why not?” And then you go into this exercise where, instead of just these rigid policies where you don’t understand the job, you ask them, “What do you actually need? And let me come up with a creative, secure way of providing it.”
Tom G: [00:14:48] When you meet with customers with your consulting business and so forth and you’re talking to them about threat detection and in particular insider threats, are there sort of tips and tricks that you use with these CIOs or CISOs that you can share with our listeners?
Eric C: [00:15:07] Yeah, the tips and tricks are: understand where the damage is caused and then use that information to build better security. What I’m referring to is, if you look at somebody’s account on their system, a normal non-insider-threat user at most companies is typically going to access 20 to 30 files a day. That’s just normal activity. If you look at how many files you’ve accessed today, or I have, it’s usually somewhere between 20 and 30; that’s just how a normal person operates. Now, your company might be higher, it might be lower, but that’s sort of the normal activity.
Now, whether you’re a deliberate insider or an accidental insider, you’re going to try to get access to as much information as you can. So now, instead of the 20 to 30, you’re going to walk the data share. You’re going to go in and access 20, 30, 40,000 files every single day. Huge difference. I mean, if I show you the difference between 20 to 30 and 50,000, you can see that.
Here’s the problem–logging that information, logging every single object request permanently, takes way too much space. It would use up all of your hard drive space in minutes. So here’s the creative way: I don’t need to store it long-term, I just need to tally it. So what we do with our clients is we said, “Let’s log all that valuable data, but all we’re going to do is keep track of the number of requests. We’ll throw away the logs afterwards. We’re not storing them long-term, so we don’t have a storage issue. And now when anybody exceeds 50, we set off an alert.”
So it’s once again, understanding what you need and coming up with creative solutions. But so many of our clients just go, “we can’t do it” and give up, instead of saying, “there’s a creative way to get this done.”
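The tally-not-store trick above can be sketched very simply. This is a hypothetical illustration of the idea: each file request bumps a per-user counter and is then forgotten, and an alert fires once when a user crosses the threshold. The class and field names are mine, not from any real product:

```python
from collections import Counter

class AccessTally:
    """Tally per-user file-access requests without retaining the logs."""

    def __init__(self, alert_threshold=50):
        self.threshold = alert_threshold
        self.counts = Counter()  # user -> number of requests today

    def record(self, user, path):
        # The path itself is discarded immediately; only the running
        # count survives, so there is no long-term storage cost.
        self.counts[user] += 1
        if self.counts[user] == self.threshold + 1:
            return f"ALERT: {user} exceeded {self.threshold} file accesses today"
        return None

    def reset_day(self):
        # Clear the tallies at the daily rollover.
        self.counts.clear()
```

A user walking the data share at tens of thousands of requests trips the alert almost immediately, while a normal 20-to-30-file day never does.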
Tom G: [00:16:55] You know, on the hardware side of the business that I’m in and Camille’s in, we have internal researchers and we do, you know, what we call hackathons, or other ways where we proactively dive into our products and try to find vulnerabilities. Is there such a thing at a business level, where there’s a conscious effort to–let’s pick on the internal threats in this case–where people come in and they see what’s possible, almost like a hackathon type approach? Is that something that happens?
Eric C: [00:17:38] Yes, it does. And we sort of put that loosely under the category of what we call “threat hunting.” And the idea is assume your network is compromised and then find the adversary. So the way companies will do this is in the spirit of a hackathon, they’ll bring all their security employees in. Sometimes they’ll do it during the week, weekends or evenings and they go, “Okay, your mission is we are compromised. We have been given information from a source. We can’t reveal it. And we have an insider threat in our network. They are actively taking information and data that they shouldn’t be taking. Your job is to find them. Go!”
Now the funny thing about this is the intent is fictitious–they don’t know whether it’s true–but 80% of the time when we have our clients do this, it turns out it’s not fictitious but real. And they find an insider threat within the organization. Now, can you imagine the CISO’s face when they think this is a fictitious exercise and they actually find somebody who did it? So it’s one of those where they think it’s just fun, and in reality it turns out to be very valuable.
Camille: [00:18:52] We have kind of been hearing–it’s a more recent trend–but we have been hearing about these various ways of looking for and detecting insider threats. How do you keep up with emerging things to look for? Because, you know, now that we’ve had this podcast talking about file accesses an order or two of magnitude larger than normal, a smart insider is going to go solve that problem so that’s not happening anymore. So every time there’s more awareness and information, it’s like the next step goes forward on the other side. How do you stay current on these emerging tactics that hackers are using, both from an insider and, I guess, in a more broad sense also?
Eric C: [00:19:36] The way that I do it is I don’t care about the specific attacks and threats. You would never be able to keep up with those. What I focus on is root cause behavior. And what I mean by that is, if you take anything–any insider threat or malicious code or anything that’s going to harm an organization–it’s essentially going to do this: it’s going to get access to a computer. It’s going to upload code. It’s going to survive a reboot. It’s going to move laterally across the network, and it’s going to make an outbound connection so the adversary can get back in. All malicious code that’s going to do damage to your organization is going to do that.
Then from there I say, “Okay, what are the really damaging pieces?” The first of the two most damaging is surviving a reboot. If somebody gains access to a computer and they don’t survive a reboot, I’m less worried about that. To be honest with you, in a perfect world I would worry, but in the world we live in, you gotta prioritize. If somebody breaks into a computer for three hours, you turn off the computer, and they go away–I’ll take that. So I’m prioritizing: surviving a reboot is top priority. So I’m going to go in and look at what’s running every time you start a system, to look for changes or alterations.
Then, ultimately, what I care about is the outbound command-and-control channel. If somebody breaks into my network and moves around but isn’t exfiltrating data or modifying data or doing things to data–once again, not ideal, but I’ll take that. So then I also focus on the outbound. So when I come in and I build out the detection for insider threats, or I’m doing threat hunting, I’m going to focus in on what’s running on each computer when you reboot it and look for anomalies. And I’m going to look at outbound traffic to look for unusual patterns, because those are the two most damaging pieces.
How specifically the threat works, how specifically the exploit propagates, I don’t care because it would be too much work. I’m going to focus on those base core components. And once again, based on my experience, it works most of the time.
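The first of Eric’s two checks–watching what runs at startup for changes–can be illustrated with a snapshot-and-diff sketch. This is my own simplified example, not a description of his tooling: take a fingerprint of each startup entry’s command line, then compare a later snapshot against the baseline to surface additions, removals, and modifications:

```python
import hashlib

def snapshot_startup(entries):
    """Fingerprint startup items.

    `entries` is a mapping of startup item name -> command line
    (however you enumerate them on your platform; that part is
    out of scope here). Returns name -> SHA-256 of the command.
    """
    return {name: hashlib.sha256(cmd.encode("utf-8")).hexdigest()
            for name, cmd in entries.items()}

def diff_startup(baseline, current):
    """Compare two snapshots; anything added or changed since the
    baseline is a candidate for the 'survives a reboot' review."""
    added = set(current) - set(baseline)
    removed = set(baseline) - set(current)
    changed = {name for name in set(baseline) & set(current)
               if baseline[name] != current[name]}
    return added, removed, changed
```

An unexplained new entry, or an existing entry whose command line quietly changed, is exactly the reboot-surviving anomaly Eric says to prioritize.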
Camille: [00:21:50] Well, that’s really interesting. Um, that probably keeps you more flexible too, and more open to see what, you know, what might be coming, because you don’t have a preconceived notion of the way somebody is going to try to get in?
Eric C: [00:22:03] Exactly, because you have folks–and the world needs all these different types of people–who do all the reverse engineering of malware and all the code. But to me, what I always care about is protecting my clients’ data. And to me, cyber security is all about detecting attacks in a timely manner and minimizing damage. And by looking at those two areas, I will be able to minimize damage for my customers and clients.
Tom G: [00:22:31] You know, the routine here. We like to have our fun segment on cool, interesting things that our listeners may want to hear. So what do you have for us today?
Eric C: [00:22:41] All right. So, yeah, I tell you, that was the best part of coming back on the show. I’m like, “I get to do another fun fact,” right? So it’s awesome. So this fun fact is from 1998. The British rowing team–the crew team–was preparing for the 2000 Olympics in Sydney, Australia. Now this crew, it was the eight-person crew team, had lost multiple times. So they had a little bit of an attitude, and everyone was telling them all these different things to do.
What they decided was if they were going to win gold in Sydney, they needed to be the fastest boat on the water. So for two years, every decision they made was a simple question. Will it make the boat go faster? “Should we go out drinking with friends tonight? Will it make the boat go faster? Nope. Don’t do it.”
“Waking up at 5:00 AM to exercise, to build our muscles. Will it make the boat go faster? Yes, you should do it.” So now they’re obsessed with a single question: will it make the boat go faster? And believe it or not, I drive my coworkers and my family crazy, but that’s how I live my life. Will it make my boat go faster?
My goal is to be a great dad, a great husband, and to build my company. So when I have decisions of, should we do this or should we do that, I ask myself a question: will it make my boat go faster? Will it help me achieve those goals? And if it does, I do it. And if not, I don’t. So if you’re struggling with too many things or not sure what to do with life, just ask yourself a simple question. Will it make the boat go faster?
Tom: Cool. So did they win the gold?
Eric C: They did. It would be a terrible story if they didn’t (laughs). And if you like reading books, the book is actually called Will It Make the Boat Go Faster? And it’s an awesome book. It’s more of a biography of the lead rower, but they embed these messages throughout it, which is amazing. I read it in one weekend.
Tom G: [00:24:39] Great, great. Excellent. All right, Camille, you’re up?
Camille: [00:24:44] Okay. So I heard this thing a long time ago and I just found out it was wrong and it’s been updated. So I’m getting a report on it. Um, a long time ago, I heard that curiously, even though we have a 24-hour cycle of day and night, that most humans really have a circadian rhythm of 25 hours. Isn’t that interesting?
Okay. So what I’ve found out is there’ve been more studies now in the last decade or so. And, uh, it turns out that when they were doing the studies that determined that people have a 25-hour cycle, they were putting people in an enclosed space without access to natural daylight, which is what normally sets our rhythm. But they were allowing people to use artificial light whenever they wanted to. So they didn’t have clocks, but they could turn on the light if they felt like it.
What we now know is that turning on the artificial light extends your circadian rhythm. So if you don’t give people access to artificial light, it turns out that humans actually have a pretty narrow circadian rhythm of 24 hours and 11 minutes, plus or minus 16 minutes.
Tom G: [00:25:55] Wow. That’s pretty specific.
Eric C: [00:25:57] I’m noticing–this is only my second time on the show–but Camille, you love the clock. ’Cause last time you talked about, uh, the analog clock with all timers, and now you’re talking about the clock. So I wonder, are all of your facts clock-based, or are these just coincidental? (laughs)
Camille: [00:26:14] I, I think they’re coincidental. Maybe you trigger something about timeliness. (all laugh)
Tom G: [00:26:19] I love it. So I’m going back–it’s a little bit of history here, which I found fascinating. The comic book hero Superman, in the original comic book, could not fly. He could only leap tall buildings. So he would, you know, bend his knees and jump up wherever he was going to go, and then eventually come back down again. And it wasn’t until later that the animators who were trying to animate this activity decided it would be a whole lot easier if he just took off and was able to stay in the air. And that’s actually how Superman got the ability to fly. The animators just wanted it to be easier, so they didn’t have to worry about him jumping everywhere.
Eric C: [00:27:19] That is cool.
Tom G: [00:27:21] Yeah. So anyway,
Camille: [00:27:24] Shortcuts. Shortcuts leading to greatness in this case. (all laugh)
Tom G: [00:27:27] I mean, and, and anybody who says Superman’s not the best superhero, I, I just can’t even have a conversation with you. Cause I don’t understand. My son thinks Batman is the best guy and I just can’t, can’t fathom that, but anyway, Superman is the best.
But with that, we’re going to draw this podcast to a close. So Eric, thank you again for spending the time with us. It was a really interesting conversation on threat detection and insider threats.
Eric C: [00:27:55] Thank you so much for having me back on your show.