Tom Garrison: So good afternoon, Camille.
Camille Morhardt: Good afternoon to you as well.
Tom Garrison: So, what kind of interesting thought provoking topics do you have in mind today?
Camille Morhardt: Well, I have kind of a philosophical question for you, ultimately. I was thinking back over the history of data security in the enterprise. And it really began, I think, with perimeter security: we just keep all the compute devices inside the firewall, we badge-access our people, and of course we don’t allow any bring-your-own devices, and we should be safe. And obviously perimeters no longer exist. So here’s my philosophical question: is it actually possible that it’s safer for us to open everything up and monitor it, or will it always be fundamentally safer to lock everything down?
Tom Garrison: Hmm. That is a deep question. You know, I think part of that question contains an inherent choice, which I might argue is a false choice, and that is: is it even possible to lock things down anymore? I’m not so sure. Just given the way that work happens nowadays, you know, unless maybe you’re in some government lab or something where you literally can lock everything down. The nature of work now is very mobile.
Camille Morhardt: The other problem, I might argue, is that even if you’ve locked it completely down, ultimately you’ve got a person working on something. And so, you know, you could have an insider threat, right? Even if that person has badge access, is allowed, and lives and breathes and sleeps there, never even exits the room, they could still be the threat, as opposed to—
Tom Garrison: Well, and you went the malicious route. I think the statistics are pretty darn remote when you talk about the malicious attacker. But then you have the ignorant employee who, unbeknownst to them, does something silly and clicks on something they shouldn’t, and now they’ve introduced a threat.
So there’s the malicious actor, yes. But then there’s also the actor that is just doing what they think is their job. And they make a mistake and they click on something they shouldn’t.
Camille Morhardt: Right, the threat still comes in. Even if they have badge access and they’re sitting within a locked-down facility, if they’ve got email, they’re a risk.
Tom Garrison: That’s right. So I’m sort of walking myself into the space where the reality is, whether you like it or not–with very, very few exceptions–that PCs in particular, but even servers to a large extent, are kind of the Wild West. The devices, you know, they’re going to have to interact with people. Those people may have good intentions or bad intentions, and we have to be able to monitor and see when the device is acting in a way it’s not supposed to.
Camille Morhardt: I might even argue it’s a safer world that way. So it’s completely open, so now we have to worry about everything and everyone. As opposed to figuring we’re safe because they’re operating within the office and their device doesn’t leave.
Tom Garrison: Right. You’re certainly more diligent–on alert–if you have the mindset that says the device can’t be good, so you’ve got to watch it.
Camille Morhardt: Yeah, but what about the problem IT now has where, if they’re having to monitor even people’s personal devices, because there’s also work happening on them, they now have access to data they don’t even want, right? They’re really just trying to protect the IP of the company. But instead, now they’ve got access to people’s personal photos, which they don’t want to have.
Tom Garrison: True, you’re actually opening up a whole new can of worms around privacy. Because there’s the data element that you said: whether it’s a work device that people use for a combination of work and personal use, or it’s a personal device that people are doing some amount of work on, there is inherently going to be personal information–whether it’s photos or other things–that finds its way onto the device. And the company doesn’t want to have responsibility for that. Yet you have regulations like GDPR and other things that make the company liable for protecting personal data.
So that’s a whole new level of complexity.
Camille Morhardt: Right. Or you may even learn something that you could use to protect somebody. Like, say, “Oh, they’re probably at a higher ergonomic risk.” We weren’t looking for that, but we are now noticing they’re using their mouse 20 hours a day. Now, are we liable if we don’t address that in some way?
Tom Garrison: Hmm. Wow. This is a deep topic. This feels like several different topics that we’re going to have to dive into. But I think in particular, how we monitor without perimeters and somehow still maintain privacy–that sounds like an interesting topic. I think there’s something there. What do you think?
Camille Morhardt: I think that’s the topic of the day.
Tom Garrison: All right, let’s go for it.
Our guest today is Alan Ross, fellow and chief architect for Forcepoint X-Labs. Alan’s focused on understanding cyber behaviors from a human-centric view. First of all, welcome to the podcast.
Alan Ross: Thanks, Tom.
Tom Garrison: I think it’d be good to start off with just a brief description: what does understanding cyber behaviors from a human-centric view mean?
Alan Ross: I’m glad you asked that. It’s sort of what brought me to Forcepoint in the first place. I was at a startup and we were looking at behavior, but more the behavior of devices or network connections or things of that nature. And we started looking at what are some of the things that humans are doing, or the accounts that represent humans.
When I got to Forcepoint, the mission was “to stop the bad and free the good,” which has always been our security mission. And throughout my whole career I’ve asked how we can enable the human to do the business they need to do, but also understand when their accounts–or they themselves–are exhibiting behaviors that could be negative to the enterprise.
So when you think about it, in the past our network security ended at the perimeter of our building. Then we had firewalls and we had websites and we had laptops, then mobile phones and cloud computing and SaaS. And we’ve been through this whole evolution over the past couple of decades.
And I remember that it was actually Intel’s former Chief Information Security Officer, Malcolm Harkins, who coined the phrase almost a decade ago that “people are the new perimeter.” And I think having had dialogues with Malcolm really helped me evolve my thinking to say, “wow, I would really like to understand all possible cyber behaviors from humans”–to understand which are bad, which are good, which are abnormal, which are anomalous, and do that many different ways, so that we can develop a methodology that will allow us to really understand when a user account or a device has been compromised, or when a user is potentially being malicious, or when the user is unintentionally performing negative behaviors.
Tom Garrison: Interesting. So I think what I’m hearing is, you know, a simple sort of account with a password could be compromised. And then, if you’re not really paying attention to the broader context, you would never detect that there was a problem. But if you understood more about how a user typically behaves, you’d be able to detect that, “Hey, this person isn’t acting the way they normally act,” and you could decide to take some action. Did I get that right?
Alan Ross: Yeah, that’s actually a very accurate representation of it. It’s important to know that not only have passwords been compromised, but users love to click. So despite all the education and awareness training we do, users just love to click links. And so their devices get compromised.
And so how can we watch not just the user–even though it’s human-centric–but also the compute devices that they’re using? Have they changed their behavior? Because that could be indicative that someone has taken control of their device. The device could even start behaving differently.
But back to the human-centric part: yes, you’ve got it right. How do I know that it’s still Tom sitting in that chair doing his computing in the morning, and not somebody else who is using Tom’s accounts, doing that compute?
Tom Garrison: This was a few years back, I was working with–it was actually a different company–on a similar set of problems, and what they were looking at were things like measuring the time between keystrokes and using that like a fingerprint. Or, it turns out, if you make the mouse cursor disappear, people try to find their mouse in very unique ways.
Like some people grab the mouse and go side-to-side, some go around in a circle. But you put enough of these sorts of challenges together–and in ways, by the way, that the user never even knows are happening–and you can get a pretty good understanding of whether the person at the keyboard in front of the screen is actually the one you expect, or if it’s somebody else. Does that fit into the same general category of what you’re talking about?
Alan Ross: Yeah. And I think we can even step back to some of the most rudimentary concepts, like: when does this person typically log on? How do they log on? What do their typical work hours look like? Those are basic concepts that we’ve talked about for years. But if you’re collecting enough telemetry off the platform, you can not only say, “Oh, we can see that Tom logs in typically at like 7:10 AM. The first thing he does is fire up Outlook, cause he loves email. Secondly, he goes into Skype, so he can go find his employees and tell them to go do something. The third thing he does is open up the web and check the stock price for the day.”
You’ve got a routine. All humans have these routines. And if we build up enough time-series data of log-ons, log-offs, applications that they use, the prominence of different applications–you know, how often do you try new things versus your peer group?
So maybe you’re the only one that’s trying WhatsApp. That’s sort of incongruous with normal peer behavior. Usually it’s like, “okay, let’s all jump on Slack because it’s a cool new capability.” But if you’re the only one doing something, it’s sort of anomalous from a behavior perspective.
So we really want to focus in on understanding, over a long time series, what can sort of manifest itself as quote-unquote “normal.” And that can, as you alluded to, get into how fast they type, how quickly they visit websites, how often they visit different websites. You know, the mouse movement one we haven’t really explored, but it’s an interesting idea. And I think looking at that gives us one really good root of a foundation to build some of these long-term behavioral analytics.
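[Editor’s note: Forcepoint’s internals aren’t public, but the kind of per-user baseline Alan describes can be sketched in a few lines. Everything below–the data, the function names, the three-sigma threshold–is invented purely for illustration.]

```python
from statistics import mean, stdev

def build_baseline(login_hours):
    """Learn a simple per-user baseline from historical log-on hours (0-23)."""
    return mean(login_hours), stdev(login_hours)

def is_anomalous(hour, baseline, threshold=3.0):
    """Flag a log-on whose hour deviates from the user's historical mean
    by more than `threshold` standard deviations."""
    mu, sigma = baseline
    if sigma == 0:
        return hour != mu
    return abs(hour - mu) / sigma > threshold

# Tom typically logs in around 7 AM
history = [7, 7, 8, 7, 7, 6, 7, 8, 7, 7]
baseline = build_baseline(history)
is_anomalous(7, baseline)   # typical morning log-on -> False
is_anomalous(23, baseline)  # 11 PM log-on -> True
```

A real system would build baselines like this per signal (log-on hour, application order, typing cadence) and combine them, rather than relying on any single feature.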
There are also some of the shorter-term things. There are just things that normal users don’t do. So maybe we see that all of a sudden a new user account is created on your machine and added to your administrators group. Then an application is installed, a new service is created, a new task is created, and they map a bunch of network shares. That’s not characteristic of normal user behavior across an enterprise. So we call those indicators of behavior, or IOBs.
Because we’re not really looking for indicators of threat or compromise all the time, but just: what are all the indicators of behavior that we’re seeing from this user account or this device? And then can we build, through a series of probabilities, a way to know when somebody has become very risky, and assign a risk score to that?
Camille Morhardt: It seems to me like you’re describing protecting against both a compromised account and, say, a malicious user–maybe an insider who’s gone rogue. First of all, is that accurate? Do you approach those two threats differently?
Alan Ross: Yes, we do. Because if you think about it in just general terms, a compromised user account is way, way, way more common than a compromised user or an insider threat.
If you think that, okay, one in–I’m going to make numbers up–one in a hundred people get compromised in the course of the year, maybe one in 10,000 are actually malicious insiders in the course of a year. So looking for them, you do use different techniques. To your point, we have these sets of indicators of behavior for a compromised user, then we also have sets of indicators of behavior for insider threat, and also some for sabotage. But again, those are pretty random and rare cases where an employee would actually go and commit sabotage. But I think you’re definitely on the right track with your thinking.
Camille Morhardt: Do you use machine learning algorithms to detect these things? Are you using actual people as analysts?
Alan Ross: Uh, we use a combination. What we do is we use Forcepoint ourselves. Our own IT security group helps us label our data. They run our products, and when they have a finding, they give us feedback, which essentially tells us this was a good finding, this was a false positive, or this is just noise. They can actually put comments in there. We use that to label the data. So we have these policy rules, and we can apply different machine learning techniques to them.
Locally, we’re just doing policy rules, anomaly detection, and correlation-type work. But when you move the data from the endpoint to the cloud, that’s when we can train supervised and unsupervised models to do things like peer group detection and time-series anomaly detection, and also find strange processes interacting with data exfil channels.
So it’s a combination of what’s done locally on the host versus what’s done on the backend in our cloud.
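[Editor’s note: the feedback loop Alan describes–analyst verdicts becoming training labels–can be sketched very simply. The finding names and verdict categories below are invented; only the three-way verdict scheme (good finding / false positive / noise) comes from the conversation.]

```python
# Hypothetical analyst feedback: (finding type, verdict).
FEEDBACK = [
    ("mass_download_after_hours", "true_positive"),
    ("vpn_from_new_country",      "true_positive"),
    ("printer_driver_update",     "false_positive"),
    ("printer_driver_update",     "false_positive"),
    ("vpn_from_new_country",      "noise"),
]

def label_counts(feedback):
    """Aggregate analyst verdicts per finding type -- the labeled data
    that cloud-side supervised models would later train on."""
    counts = {}
    for finding, verdict in feedback:
        counts.setdefault(finding, {}).setdefault(verdict, 0)
        counts[finding][verdict] += 1
    return counts

label_counts(FEEDBACK)["printer_driver_update"]  # {'false_positive': 2}
```

Even this trivial aggregation is useful on its own: a finding type that is consistently labeled a false positive can be down-weighted in the endpoint rules before any model is trained.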
Tom Garrison: I’m sure there are listeners to this podcast right now thinking, “Oh boy, privacy.” So do you find that users have concerns about somebody watching them or, you know, somebody, uh, looking over their shoulder?
Alan Ross: Definitely. And I’ve heard that with this type of product, where half of the contract negotiations used to be around terms of price and performance and things of that nature, the privacy aspects of the contractual obligations are becoming more and more prominent. So we are very, very concerned about privacy.
In fact, when I arrived at Forcepoint about a year and a half ago, they said, “go build some analytics for us.” And I said, “Great, where’s the data, the customer data?” And they said, “you don’t have access to any.” And I said, “well, how am I supposed to develop analytics?”
So what we’ve done is we’ve built a data lake where we can take our customer data and pseudo-anonymize it. And only our data protection officer has access to that key. That data then gets placed in a way that usernames, machine names, IP addresses–all of that gets pseudo-anonymized. So instead of Alan, I’m XYZ123. That gives us the ability to train new models without us ever seeing customer data.
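[Editor’s note: a common way to implement the keyed pseudo-anonymization Alan describes is a keyed hash (HMAC): identifiers map to stable opaque tokens, so records can still be joined per user, but reversing the mapping requires the key. This is an illustrative sketch, not Forcepoint’s implementation; `DPO_KEY` and all names are invented.]

```python
import hmac
import hashlib

# In Alan's description, only the data protection officer holds the key.
DPO_KEY = b"held-only-by-the-data-protection-officer"

def pseudonymize(value: str) -> str:
    """Deterministically map an identifier to an opaque token."""
    return hmac.new(DPO_KEY, value.encode(), hashlib.sha256).hexdigest()[:12]

record = {"user": "alan", "machine": "alan-laptop", "bytes_sent": 5120}
# Replace string identifiers with tokens; keep numeric telemetry as-is.
safe = {k: pseudonymize(v) if isinstance(v, str) else v
        for k, v in record.items()}
```

Because the same input always yields the same token, behavioral baselines can still be built per (pseudonymized) user over time, which is exactly what makes model training possible without exposing identities.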
Camille Morhardt: Maybe this kind of goes a little bit to privacy, but I have kind of a philosophical question: do we always know bad and good in high-tech?
Alan Ross: No, no. There’s a lot of gray, and I think that’s what we’re embarking on: this journey of how accurately we can detect or predict based on what we’re observing. In which cases will we be very successful? And in which cases will we fail miserably?
Because if you look at the case of espionage, and you think about people who are trained to do espionage: they never color outside the lines. They are put in place. They follow orders. They will not diverge from what they’ve been told to do. In fact, if their job scope or responsibilities change, or anything major changes that might cause them to be detected, they will be pulled from the operation. So I would say detecting espionage is nearly impossible, unless you’re really focused on maybe some of the data exfil, or something where they start operating outside of the lines.
Insider threat or sabotage, again, is also a tougher area; it’s going to be a lot harder. What you might find, though, is a lot of lower-hanging fruit: finding people who are either doing things unintentionally that could cause harm, like data leakage, or whose device and account have been compromised. Those things kind of stand out.
I think we’ll have much more success if we look at it as a continuum from a compromised device all the way to an espionage kind of use case. The success will be much higher toward the left-hand side of that, and then it goes down pretty dramatically as we move to the right. But we hope to learn over time.
Tom Garrison: So Alan, who are the companies right now that are on the forefront of this kind of analysis of user behaviors?
Alan Ross: That’s a great question. And I think what we’ve traditionally seen is that the financial services industries–the ones that are highly regulated and also have a lot to lose–are very focused on things like insider threat and the compromised or malicious insider. So they tend to adopt these sorts of things at a higher rate than, say, a new tech company.
It’s more the established companies that have more to lose–oil and gas industry, high revenue, lots of intellectual property.
Camille Morhardt: I’m curious–you, or the industry I guess, put together threat maps of behavior categories, like exporting large quantities of data or, like you said, sending an attachment with a large file, or various things like that. But then you also can track individual people over time with their usage pattern, right?
Alan Ross: Right.
Camille Morhardt: So I’m just wondering, do you ever build threat maps of individuals, especially in these cases where you’re describing companies with specific interest in detecting insider threats?
Alan Ross: We have not yet. And we haven’t even really approached that in a research sense, but it’s a good point. I mean, that’s something that we’ve talked about in terms of, like, okay, maybe we want to treat system administrators and developers differently than we treat finance and legal and HR people. Not that they all can’t do harm. But they would all do it in a different way.
There are different behaviors associated with some of the things that can be considered malicious. So right now we’re not doing anything that focuses on a particular user. What we’d like to do is get into more peer group/role type identification, because once you can identify a group of peers in one company and you get a lot of time to study their behavior, it’s pretty likely that at the next company, once you identify that same peer group, they’re going to behave fairly similarly.
The notion of peer grouping also gives you some advantages: you can see if a peer strays. Or a new peer all of a sudden appears, but only on one vector of the peer group, you know? So maybe you see that all of these people use this Slack channel, they all email each other, they all use the SharePoint. Now there’s somebody who just joined that Slack channel but is doing nothing else–that’s kind of weird, you know? And so that behavior may be indicative of something.
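[Editor’s note: the “one vector” check Alan just described can be sketched as a set comparison. The channel names and the threshold below are invented for illustration; a real system would learn the peer group’s channels rather than hard-code them.]

```python
# Channels the established peer group shares (Slack, email, SharePoint,
# matching Alan's example).
PEER_CHANNELS = {"slack", "email", "sharepoint"}

def peer_overlap(member_channels):
    """Channels this account shares with its peer group."""
    return PEER_CHANNELS & set(member_channels)

def is_suspicious(member_channels, min_overlap=2):
    """Flag members active on fewer than `min_overlap` peer channels:
    a new joiner present on only one vector stands out."""
    return len(peer_overlap(member_channels)) < min_overlap

is_suspicious({"slack", "email", "sharepoint"})  # established peer -> False
is_suspicious({"slack"})                         # Slack-only joiner -> True
```

The same shape generalizes beyond chat channels: any set of behaviors shared by a peer group (applications, shares, repositories) can be compared against a new member’s footprint this way.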
Tom Garrison: One of the things that we try to do–Camille and I–is have a little bit of fun as part of this podcast as well, and introduce our listeners to new things that we’ve learned over the last short amount of time–maybe the last month or something.
And so I wonder, from your standpoint, is there anything that you’ve learned that you think’s pretty interesting that you’d want to highlight to our podcast listeners?
Alan Ross: Well, I recently just started to get back into CrossFit. And I’m not a young man. But I hang out over there at the gym. And what I found the other day is that, with a barbell and weights on my back, I could step up to a 24-inch box for seven straight minutes.
Alan Ross: If you’d asked me, “Alan, do you think you could step up with just your body weight for seven straight minutes?” Sure, probably. “But do you think you could do it with this weight?” Now, to be honest, I didn’t face the clock, because I didn’t want to get stared down by a clock that was moving slowly. But I’ve found that CrossFit is helping me really stretch some of my physical limits that, you know, I wouldn’t push working out by myself.
Tom Garrison: Oh, that’s great. That’s great. Camille, how about you?
Camille Morhardt: So, um, thoughts from the beach. I was contemplating, you know, how do you teach a Mallard duck to learn to fly? You’ve raised one, but they now need to be released to the wild. They actually don’t learn to fly by themselves unless you teach them. And it turns out that the most effective way to teach a Mallard duck, how to fly is to stand on the edge of a body of water and hurl it upwards into the air.
Alan Ross: Okay. (laughs)
Tom Garrison: (laughs) What comes to mind is how many other ways did you try that didn’t work before you tried that one?
Camille Morhardt: I YouTube’d it. And shockingly, the first video I saw, I was like, “this is insane. This is animal cruelty!” Then I watched six or seven more on YouTube and that was the advice. And I called the Mallard farm from which I procured the Mallard ducks and they said, “well, basically just throw them into the air.” And so lo and behold, it worked.
Tom Garrison: (all laugh) Wow, good. I’ve got to put that in the category of “I learned something just from listening to that.” Um, so mine, again, sticks with the theme of entertainment. I watched a great show, and I’m pretty sure this was on Netflix, called “The Social Dilemma.” I think it’s a great thing, particularly if you have kids that are anywhere from about middle school all the way up through high school or college. I just think that there is a tremendous wealth of learning that that age group needs to be exposed to when it comes to social networking and social media in general.
I learned a lot. And I sat down with my kids and watched it with them as well. And they were pretty, uh, pretty taken aback. So that would be it for me.
So, Alan, I do again want to thank you for being on the podcast. I think the idea of looking at, you know, human-centered behaviors around cyber activity is fascinating. I certainly learned a lot today, so thanks again for your time.
Alan Ross: Thanks for having me. Appreciate it.
Tom Garrison: And for all the listeners of the podcast, uh, you know, stay with us. We’ll be back with a new podcast in the next two weeks. Take care.