Tom: [00:41] Hi, and welcome back to the Cyber Security Inside Podcast. I'm your host, Tom Garrison, and as always, I'm joined by my cohost Camille Morhardt. How you doin', Camille?
Camille: [00:50] I’m doing well this morning.
Tom: [00:53] You know, I have a trip planned to the coast this weekend. And unfortunately it's not to enjoy the Oregon coast; it's to do work this time.
Camille: [01:04] On a boat?
Tom: [01:05] No, I wish. That'd be even better. We have a house over there, and you know, the Oregon coast is just not kind to wood. So we have to paint all the time, or replace wood that's rotting. Anyway, it's kind of a bummer.
Camille: [01:22] Yeah, the wind actually blows vertically upright at the coast. I’m not sure how it does that, but it can pull shingles off buildings.
Tom: [01:30] Oh yeah. Yeah. For sure. Salt water just does really bad things, too, when it gets in places it's not supposed to be.
Camille: [01:38] Well, this kind of leads directly into the question I had for today.
Tom: [01:42] Oh great. What is it?
Camille: [01:44] Actually, because I was thinking about how well-intentioned companies are constantly searching for vulnerabilities in their own products. We've heard some things about bug bounties and various activities with external researchers and partnerships. And then there are also internal ways to look for vulnerabilities, and I'm not so clear on those. I'm very clear about validation, which is just testing to make sure that a product is functional. But how are companies internally actually looking for vulnerabilities?
Tom: [02:22] Yeah. You know, I guess there are two schools of thought. One is, you have perfect information about the product you're trying to break into; that's one approach. And with that perfect information, if you still can't get in, then that's a pretty good sign. The other is, even though you're an internal employee, you act like an external researcher, and you just look for anything you can find, similar to what researchers do on the outside.
Camille: [02:58] So there's a term out there for this, which is penetration testing. Considering the house at the coast getting beaten up, it looks like the water is essentially trying to find any way in, which made me think about that. But I'd like to learn more about how that works.
Tom: [03:17] Yeah, I think there's a whole bunch of, I would say, science really around this. So I think this is a great topic for today. Let's do it.
Tom G: [03:34] Our guest today is Moshe Zioni. He’s a Director of Threat Research at Akamai Technologies and is listed as one of 27 Influential Penetration Testers in 2020 by Peer List. He has been researching security for over 20 years in multiple industries, specializing in penetration testing, detection algorithms, and incident response. He’s also a constant contributor to the hacking community and has been a co-founder of the Shabbat Con Security Conference for the past six years. So welcome to the podcast.
Moshe: [04:04] Thank you. Thank you for having me.
Tom G: [04:06] So that's quite a background that you've got there. I wonder if you could spend a little bit of time here and talk through your background.
Moshe: [04:16] OK, sure. School didn't really scratch the intellectual itch that I had, so that, I would say, was the first reason for me as a kid to get into this. And then it just became an everlasting journey of things of interest, looking at how things work, which is the most fundamental question of everything we do in research: how things work.
And then the next step is how things don't go so well when you try to break them. It's not for the purpose of breaking it, but more to see what will happen if we do something that the machine doesn't expect as an input.
Tom: [04:56] Right.
Moshe: [04:57] Back in the nineties, and we're talking about the late nineties, it wasn't really shown to me as a career path; it was more of a hobby, something I would do by night, not by day. And then when I joined the military, I came to the realization that there was something to do with it by day.
The next step, after three years of military service, was starting as a consultant at a consultancy firm called COMSEC in Israel. I dabbled in so many niches within hacking and penetration testing, and especially incident response, that I think it really made me a generalist in the field. And even now I find myself, every now and again, finding something else to break apart. But again, as I said, breaking apart is not the goal. The goal is to see what will happen with unexpected input. And the next step, really the Holy Grail, is: how can we defend against those kinds of attackers? That's exactly what I'm doing together with the guys at Akamai today.
Tom G: [06:04] Now, you know, one of the things that really stood out about that background was the curiosity, because it's certainly something I've seen in the folks I interact with in the security industry, this sort of insatiable curiosity.
And the other piece you mentioned, and I'm not sure all the podcast listeners have an understanding and respect for it, is that this is unlike validation, which basically means checking that the product is working the way you expect it to work. There, you test things the machine is supposed to do.
On the security side, it's almost the opposite. You do things that are expressly not expected, and in that sense there's an infinite number of things you can do. You look to see what happens. So you need somebody who's deeply technical but also super curious, with a curiosity that just never ceases. We call that penetration testing, but you're just looking to beat on whatever the device is, whether it's a PC or an Internet of Things device or something, until you see: does it just die, and it's not interesting? Or does it die in an unexpected way that uncovers something you didn't expect?
Moshe: [07:26] That’s very true.
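[Editor's note: the idea Tom and Moshe are circling here, feeding a system input it doesn't expect and watching what breaks, is in practice often automated as fuzz testing. A minimal sketch in Python; the two toy parsers are hypothetical illustrations, not anything discussed on the show.]

```python
import random


def parse_length_prefixed(data: bytes) -> bytes:
    """Toy parser: first byte is a length, the rest is the payload."""
    if not data:
        raise ValueError("empty input")  # graceful rejection
    n = data[0]
    payload = data[1:]
    if n > len(payload):
        raise ValueError("length byte exceeds payload size")
    return payload[:n]


def parse_buggy(data: bytes) -> int:
    """Same idea, but forgets to validate before indexing."""
    n = data[0]      # IndexError on empty input
    return data[n]   # IndexError when the length byte points past the end


def fuzz(parser, trials=1000, seed=0):
    """Throw short random byte strings at a parser; collect unexpected crashes."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(trials):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(0, 8)))
        try:
            parser(data)
        except ValueError:
            pass  # expected: the parser rejected bad input cleanly
        except Exception as exc:  # anything else is the "funky" behavior to triage
            crashes.append((data, exc))
    return crashes


print(len(fuzz(parse_length_prefixed)))      # robust parser: 0 crashes
print(len(fuzz(parse_buggy)) > 0)            # buggy parser: True, the fuzzer finds it
```

As Moshe says below, the crash itself isn't the point; each entry in `crashes` is a starting point for asking what the error's impact is and whether it can be exploited.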
Camille: [07:28] Is there a crossover? Do you ever have the same people doing the validation as the pen testing? Um, they seem like opposites.
Moshe: [07:36] You're very right that it's not the same person. I won't say opposites, because the mindset of a good QA person will always be, "What will happen if I do that?" I think the distinction is that QA is still limited to the finite set of actions they're supposed to perform, and it can be done with no adversarial intent, I would say.
But once you get to penetration testing or adversarial research, you get to the point where you ask yourself not just "What will happen?" or "How can I crash the system?" That's not the point. As Tom said, the point is, once you find something funky, what is the effect of this error? Is it something you can leverage? Is there any kind of impact? As a technical person, you need to understand the impact and how it can be leveraged, or, to use the technical term, "exploited."
Tom G: [08:29] And so do you do this type of work, the penetration testing, as part of a team? Or is this inherently individual work?
Moshe: [08:39] That depends. Again, a very good question, especially as I've seen an evolution there throughout the years. In my experience, at first it was very isolated work. You couldn't really trust anyone on the internet as much as you can today. And remember that back in the day, it was even dangerous to share those kinds of interests, because it was considered, to some extent, a criminal activity, even though we didn't see ourselves as criminals, of course, and we never had any intention of doing harm. Ethically, I would say we were very strict about going the most ethical way we could. So anything that even touched the line, not to mention crossed it, we didn't touch.
I remember, even in 2005, when I went to look for work after the military, I was afraid to say that I had tested some of the things I talked about and had practical knowledge of them, because it was dangerous. You couldn't really know whether a company would sue you for any kind of reason.
So, back to your question: back then, it was very isolated work. Maybe two more people in the whole world knew exactly what I was doing, or at least to some extent, because we were partners in this kind of knowledge base and tried to enrich each other.
Once you got to this kind of evolution in the 2000s, you had some companies doing that, consultancies and other firms, and then even some in-house teams; big retailers and banks started standing up groups of people to do penetration testing for them. But it really depends on how big the target is. With the broader term, red teaming, the scope becomes a bit more malleable, and the rules of engagement are a bit different in terms of customer relations and what the customer expects. Most importantly, they expect to be attacked, and they're mainly using that so their Security Operations Center, or anyone like that, can actually try to detect the attack while it's happening. In penetration testing, that's mainly not a thing to worry about, because you want to test the system individually, without third parties involved in terms of the security perimeter, or maybe firewalls. Sometimes those will even be turned off for the penetration tester, so he or she can work properly.
Camille: [11:09] You were talking earlier about how, back in the day, you maybe did penetration testing for companies or organizations that didn't know you were doing it (laughs). And you always remained completely ethical; maybe you told them, maybe you didn't tell them every time. But is there any kind of excitement, do you gasp, in realizing some giant hole could be exploited, or in recognizing something people missed?
Moshe: [11:40] It's a mixed feeling. It's a mixed feeling because, I'll give you an example: back in the day, my first encounter with an in-the-wild vulnerability was at a big university in Israel. There is an excitement in knowing something that no one else does, or finding something, or in some cases outsmarting someone. I wouldn't have the same feeling today, by the way, because as a kid it's something else; you're doing it purely for the intellectual worth of it. So you get the excitement from that. But on the flip side, you're really anxious. First of all, what will happen if someone else discovers it? Will that attacker be able to do something much worse than what you're seeing here?
And that's the second part of the anxiousness: if you contact this university, and that's what I did, and you tell them that you found this kind of vulnerability, will they get mad, or just fix the problem and thank you, or something in between? Eventually what happened is that I contacted them anonymously, and it appeared to be fixed after a few weeks. I'm not sure if they got my message. I'm not sure if I affected that. But it felt great to see it fixed.
Tom G: [12:58] So, you know, we've talked a little bit about the difference between penetration testing and things like red teams. I wonder, for penetration testing specifically, when you work with a company, do you get information from that company about their product, their service, their device, whatever it is, and start with that? Or do you start with a clean sheet, like a hacker would have?
Moshe: [13:27] Okay. So you've touched on an important point, scoping. While scoping, you're asking yourself the same question: do I know anything? Is the threat model I'm trying to mimic an insider, or someone with some knowledge or intelligence about these machines or servers? Or am I a non-persistent attacker, which means I have a blank sheet, as you said, and just try to find my way into the perimeter, whatever it is? The technical terms for that are white box versus black box, with maybe gray box in between.
The white box approach is one where the customer will tell me anything I need in terms of technology: what is behind the scenes of the back end and the front end? When I try something out, is it signaling something at the back or not? And they will give me a full-blown map of the website or whatever else I'm testing.
The other end of the scale is the black box, which means nothing is shared. No one gives me anything, or any kind of permissions. In some cases I didn't even get an IP or a domain name, just "hack me." "OK, what is 'me'? Let me have something." "No, you don't even get any kind of footprint of the server. Start from the beginning."
So the question is, what is important for the customer to know, and what is the maturity level of their security posture? Do they have a new product they want to test? And, excuse me for taking a side note here: it's also important to say that penetration testing costs money, and the cost varies with the mileage. If you have two weeks or one month of person-time to do this work, the question is how effective it will be for a team or a single person to do this penetration testing work for a week or two. Let's remember that a real attacker will not have this limitation. So we're trying to mimic that kind of threat model, and we're trying to understand with the customer: what is the threat model for you?
If that's the first time you're doing any kind of penetration testing, the general recommendation is to go with a big white box, because you want a 360-degree view of your systems. You don't want just a shallow report on what can be seen from the outside by someone who spends two weeks on your website with no real realization of what he's looking for.
Tom G: [15:54] Do you employ social engineering when you're penetration testing? Do you do phishing attacks or something on the employees to try to gather information? Is that typically part of trying to break into a service or break into a product? How far do you go?
Moshe: [16:12] Basically, once someone is talking about penetration testing, that's out of the question; in the rules of engagement, it's among the things you can't do. You also can't do denial of service, attacks that try to crash those systems, especially production systems. That's very dangerous for a penetration test. There are some other services that can, by the way, help you assess those kinds of risks.
The other thing you mentioned, phishing or social engineering attacks that involve any employees of the company, is also out of the question for penetration testing. Now, red teaming is where those lines start to blur. Intentionally, some red team engagements also involve physical attacks, someone getting into the building, or trying to social engineer their way over the phone to get some passwords, maybe even some phishing attacks, and even exploitation in terms of malware. But those are the extreme cases of red teaming. And it can be done; the question is, what is the perimeter, what is the scope, and what are the rules of engagement? You need to be very specific.
But to be honest, once you want to do that, say you want to test your employees, you want to test how security-aware they are, you're focusing your testing. As a customer, you're scoping down to assess their awareness: you set out a phishing campaign, or a pseudo-malware campaign where, of course, you'll be very cautious not to compromise your own machines through recklessness of the tester.
This is usually either part of red teaming, where it can be scoped in, or a very specific task. As a consultant, I've done it a few dozen times to assess and evaluate, but it's much less popular than dealing with machines, and people are less eager to engage in those.
Tom G: [18:17] So earlier we mentioned bug bounty programs just briefly. I wonder if you can spend a minute or two and describe what a bug bounty program is? And then, if there are any best known methods, or companies that do it really well, could you share some of those insights?
Moshe: [18:36] Okay, yeah, sure. Bug bounty programs are a way to institutionalize bug reporting or vulnerability reporting toward a customer, and to have some kind of mediation between the hackers who find those vulnerabilities and the third parties or first parties that are, quote unquote, "attacked" through those vulnerabilities.
So let's say I'm scanning a website, or I bump into a vulnerability: I see some error on the website that really makes me think something is broken, and then I can at least try to validate that. Once I have some kind of validation, ethically I should notify the website owner, so that he or she will fix this error, or at least try to assess what the impact would be, and if it's something important, fix it.
So nowadays, with bug bounties, the mediation is done for you: the hacker's identity is kept anonymous by the mediating company. On the other hand, the rules of engagement are written down and fleshed out very particularly. And maybe the last point is that the company may offer compensation of some kind, whether in real dollars, sometimes in recognition, or even in swag; some companies pay with t-shirts.
Now more and more hackers are turning that into a career, meaning we do see that the top 10 or top 100 on those bug bounty programs are really making a living out of it. The most prominent examples of bug bounty platforms, and I'll mention two in no particular order, are Bugcrowd and HackerOne. They host most of the bug bounty programs worldwide, with very big companies, starting from Facebook, which actually just dropped out of HackerOne and started its own bug bounty program, the biggest in the world.
And as I said, the two examples of HackerOne and Bugcrowd are really respected by the community as well, because they make us heard, and heard ethically, what is called "responsible disclosure." Once you want to disclose something, you need to disclose it with the full intention of doing right, seeing that it's fixed, and helping the company do that.
What also makes me very happy is that we see more and more companies joining those kinds of circles. So we see companies maturing and realizing, "This community can really help us, and we want to reach out. We want to be the ones that say: hey, it's not only okay to test us, we'll play nicely and we'll play with you in order to fix that, and we want to be held accountable as well."
So think of it this way: at the end of 2020, HackerOne released their report, and they said that the companies that really got into this ended up with a better security posture.
Camille: [21:47] Is there any risk that you call attention to yourself by having a bug bounty program or joining the bug bounty program that maybe you were under the radar before, but now a bunch of people are interested in finding your vulnerabilities?
Moshe: [22:02] I would say not really. And again, that's a subjective opinion. I would say that if someone wants to harm you and has the intention to do so, they don't need the excuse of finding you on a bug bounty. Maybe even the opposite: if you're on a bug bounty program, you're mature enough in terms of security posture to say, "I know my basic bugs and I'm fixing them. Please help me find the really nasty ones." It really signals what kind of company you are in terms of security.
Tom G: [22:41] Before we let you go, we have a fun segment where we like to share interesting tidbits about something you may have come across that you think might be interesting for the listener. So let me kick off with what I mean by a fun fact. I recently decided that I'm eating way too much candy, and the COVID 19 has turned into more than 19 pounds since I've been able to get out and get around. What I came across is that one of my favorite candies is M&Ms, and what I didn't know was that M&M actually stands for something: Mars and Murrie, M-U-R-R-I-E. I've heard of Mars before, like Mars Bars, and I've heard the name Mars in other connotations, but I had never heard the name Murrie before. And so there you go.
Camille: [23:48] What I'm wondering is how they allocate the colors within an M&M package, because I get the king-sized M&M package and I have two kids. So sometimes, just to keep it easy, I say, "Okay, what color do you want?" and for each one I'll pick a color. And I realized the colors are not evenly distributed at all.
Tom G: [24:08] Yeah, I would never introduce the, “you get to choose what color.” It’s “you get to choose how many you get, whatever color it is. That’s the lottery.”
Camille: [24:16] (laughs) I had to back off on that one pretty quick.
Tom G: [24:19] So, Camille, how about, how about you?
Camille: [24:22] All right. So I've been thinking about spending a little time at the beach. I look up at the sky, and I've noticed that all the birds are really expert fliers; the wind comes in sometimes at 60-mile-an-hour gusts, and the seagulls are just fantastic. They can actually just play in that kind of a gust. And then I've noticed that cormorants are horrible fliers, and they're always heading to the ocean, and I couldn't understand it. So finally I looked it up and discovered that though they look like they can barely fly, they swim underwater really well. I had heard that, but what I didn't know is that they can dive up to 45 meters, or about 150 feet, underwater to catch fish. So I thought that was cool.
Tom G: [25:08] Yeah, cormorants are amazing swimmers. I do a lot of fishing, and cormorants are like the enemy, because they eat lots and lots of baby fish. So there's very little love lost between cormorants and fishermen. But yeah, they are incredible, incredible swimmers. All right, Moshe, it's your turn now.
Moshe: [25:32] So, an interesting fact I saw. My microwave just broke. It didn't explode, but it doesn't work anymore, so I needed to retire it and buy a new one. That, of course, made me click a bunch of links on Wikipedia, and one of those was about the inventor of the microwave, a bloke named Percy Spencer. And I found out that, as a bonus from his company for inventing the microwave, he got $2, which is an amazing figure, in my opinion.
Tom G: [26:07] Wow, two whole dollars. That's great. It kind of makes you wonder how he came up with it. Did he walk in front of something and realize, "Dang, I'm getting hot if I stand right here"? How did he irradiate himself to figure out that he could cook meat with a microwave? So Moshe, thank you so much for spending the time today. It's been a great conversation, learning more about penetration testing and bug bounty programs. I think it was very insightful. So thanks for spending the time.
Moshe: [26:38] Thank you for having me. It was a pleasure.