EP56 – Offensive Security Research, aka: Hacking
[00:00:00] Camille Morhardt: Hi, and welcome to this episode of What That Means: Offensive Research, AKA Hacking. We’re going to be talking today with Jason Fung, who is Director of Offensive Security Research, as well as Academic Research Engagement, at Intel. He oversees the security assurance and emerging threat research of key technologies that power Intel’s edge, communication, and data center products.
In addition, he leads academic and industry collaborations that advance product security assurance best practices for the semiconductor industry. Recently, he contributed to the creation of the community-driven Hardware Common Weakness Enumeration and the industry-first Hardware Capture the Flag competitions that inspire researchers to address some of the toughest challenges in hardware security.
He is a founding member of the CWE/CAPEC Advisory Board, which oversees the hardware Common Weakness Enumeration. And he has over two decades of industry experience spanning SoC (system-on-a-chip) architecture and performance verification, automation, product security, penetration testing, consultation, research and pathfinding, engineering, and risk management.
Welcome Jason.
[00:01:18] Jason Fung: Thank you, Camille. Thanks for inviting me to speak to your podcast audience.
[00:01:21] Camille Morhardt: I’m really excited to have you here, and I’m going to start off by asking you to define offensive security research. I almost want to say that’s often a euphemism in the industry for hacking, because I think typically people think of hacking as only bad.
So we’ve got this term Offensive Security Research. But can you define what that is and what hacking is in just a few minutes?
[00:01:51] Jason Fung: Basically, we hack our own products before they get hacked by outside attackers. That is my definition of offensive security research: it’s about how we can ensure our products are secure by coming at them another way.
Right? So the opposite of offensive is defensive. Let me start with what people usually do when they look at the security of a product: find out, for example, what best practices are available to help support secure product design. That is the defensive part. People also look for advancements in tooling and methodology so that the developers and the verification people can actually find vulnerabilities in an effective and timely manner. And we have people looking at how we can introduce best-in-class protections and mitigations. These are all what the defensive side of security research is about.
But how about offensive? Why do we need to have offensive? We want to put ourselves into the shoes of the hackers and ask the question: what would they do? This is about embracing the hacker mindset, assessing the product risk from their end goals, and acquiring insights for securing the product that could otherwise have been missed.
So, Camille, do you actually like to watch action movies?
[00:03:10] Camille Morhardt: I love action movies.
[00:03:12] Jason Fung: So one of the movies I really liked is about these gangsters trying to rob a bank that is protected by high-tech gadgets. The safe is hidden deep in the bank, behind layers and layers of concrete walls you can’t just drill through. And if you really want to overcome the safe’s mechanisms, they have these PIN locks, where maybe only two members of the whole bank actually know the password, and you can’t really crack it within the allotted amount of time. They have trap doors, retina scans, all those things behind it. And that’s the interesting part of those movies: how do these smart hackers get past all of that, break into the bank, and get the valuables, the gold bars? Right.
So that is the part that really intrigued me, and hacking, or security hacking, is no different. It employs a different kind of mindset: a mindset of going against the rules, of out-of-the-box thinking. In some of those movies they may think, “Hey, you know what, if I want to play by the rules, then I have to break the combination lock.
But how about we try to get the fingerprints off the keypad?” Now you don’t have to get the code from two different people. “How about I blow off the hinges of the safe?” So I’m not attacking the strongest part; I’m looking for the weakest link. “Maybe I can drill holes up through the bottom of the building,” rather than attacking the thick layers of concrete walls.
So you get behind the blind spots of the designers and exploit the weaknesses. Hackers are really like that. Right? We don’t want to play by the rules. We go for the weakest link. And this is what offensive security research is about.
It’s about seeing how hackers try to compromise a product using their rules, their leverage. And if we can anticipate that, we can patch up our own mental blind spots and make the product even more secure. So to me, offensive security research does not replace traditional defensive research; the two go hand in hand to make the product even more secure.
[00:05:22] Camille Morhardt: So why do you, or why do companies in general, go outside of themselves to get the perspective of offensive, I’m just going to say, hackers? I mean, why isn’t that done internally, in partnership with the architecture? Why are you going outside of a company to pull in that expertise?
[00:05:44] Jason Fung: Yeah. In fact, we have a lot of offensive security researchers in place at the company, and sometimes we also contract with outside researchers. As researchers, our experience and our past learnings shape our perspective, so having more people coming from different perspectives helps to round out the blind spots.
As for why the architecture team is not doing that directly: I think they are already doing the defensive side of it. They are already coming up with the threat model as best they can; they look at all the perspectives they already know and incorporate them into the product. So having an outsider’s perspective, having a team help the development team by looking at the product from a fresh angle, exploring the use cases, and figuring out the things that may not be anticipated by the designers, the architects, or the verification teams, adds a lot more value.
And that’s where, when we combine forces with security researchers from different backgrounds, each helping to round out these ideas, the whole effort becomes even more effective.
[00:06:51] Camille Morhardt: So I had always thought that hacker equals bad. How is it that you’re able to work with hackers? Why would hackers work with you? I mean, a bank robber is not going to work with the bank. Tell me how you structure something like that.
[00:07:06] Jason Fung: Yeah. This is also the part that makes my job fun, right? I’m hacking my own product, I’m being paid by the company to do fun things that I like, and I’m hiring the best professionals outside into my team to do the work. There are a lot of smart guys out there. Why do they care about security? Because it is really fun. It exercises their curiosity, their creativity, and their out-of-the-box thinking. They are very intelligent. Some of them may be jacks of all trades; some are highly specialized in certain skills. And they are also, I would say, very careful observers.
These are people who are like detectives. They break things into very small pieces, try to understand the patterns and the usage models, and find things that other people cannot find. And they are also learners and collaborators, right? So these are the good guys, and to them, the job is about finding flaws in products.
You can be the bad guy who sells your findings on the black market, but you can also do all these fun things as a good guy. And there are certain types of researchers that we work with very closely, and they all share a common goal: “I want to make technologies better and safer for people to use,” because their parents, their grandparents, their friends, everyone uses the same technology. That’s one of the ideologies shared by these researchers, these hackers.
At the same time, of course, there are also some bad apples in the mix. I would say they may look for their own personal fame and glory and money, and that can cast doubt on the entire population of researchers and hackers.
[00:08:50] Camille Morhardt: Well, I guess I want to drill down on that a little bit more, because even good hackers can be interested in fame and glory; maybe it comes in the form of publications. I think you touched on what inspires people to be good hackers in terms of improving technology, but what are the kinds of rewards they might be looking for? And are they different from the rewards of people designing technology as a whole? We’ll go with the 80-20 rule, right?
[00:09:31] Jason Fung: Yeah, that’s a great question. So every day I work with a few types of researchers. Some of them come from academia; some come from industry and are employed by companies; and some may be freelancers, the bug bounty hunters.
So let me start with the academics. These are the smartest people you can find across the whole world, and they are eager to find the next innovations. They want to be the first to come up with something new, and security research is one way to earn that fame and glory, but to them, more importantly, it’s also about showing people what these innovations are. So publications and papers, having their students graduate through the PhD program, having the grants that allow them to continue to drive bigger and better research: that is what motivates them to do all this great work. And sharing that information through publication is, by nature, part of their goal.
So sometimes we get into the timing of the sharing. As you can imagine, being the first to publish something is very important to academic researchers, and the dynamic is how companies can work with them so they can still claim being first, without the whole world knowing about a particular vulnerability before the company gets to address the issue.
[00:10:54] Camille Morhardt: Okay. So I want to slow down on that point and reiterate it, because I think it’s interesting. What you’re saying is that in order to collaborate with academic researchers, what they’re interested in a lot of the time is being first to publish a finding: a new vulnerability or a new kind of weakness that no one else has discovered. And this is often a race.
So there may be multiple institutions or academic research groups essentially racing toward finding things, and really the only way any of them knows who found it first, or who gets credit for it, is the publication date. On the other side, you have companies who are interested in making sure they have some sort of mitigation for whatever weakness or vulnerability the researcher discovered before letting the public know about it.
And that’s a well-intentioned thing too, because it’s like, “Hey, we would rather not publish, for all the world’s potential bad hackers or anyone interested in doing harm or exploiting a vulnerability, what that weakness is before we have any way to address or mitigate it.” That’s dangerous. We’d be putting potentially many millions of people’s personal private data at risk.
So there’s this balance between ensuring that publication is fair in terms of who discovered the problem first, and doing that in a timely manner, and industry saying, “wait a minute, can we please come up with some kind of fix for this and make sure it’s been implemented before the world is aware that something’s wrong?”
How do you balance that?
[00:12:50] Jason Fung: Yeah, it is a tough problem, because timeliness is key for the academics and also for the industry, for the companies whose products are compromised. So we have been coming up with these vulnerability disclosure policies that strike a balance between the two. We work with the researchers: “Hey, you file a report to the company indicating that this product has this problem.” Then, for a certain timeframe, the researchers will not disclose the information to the public, but at the same time they are able to submit that publication to the conferences, and we work with the conferences hand in hand to ensure we address the issues well ahead of time, before the publication appears in public.
So that kind of handshake allows the best of both worlds: protecting the customers who are at risk, and ensuring the timeliness of the researchers getting the first right of publication. And that is something we continue to work towards.
One thing which is important to highlight is hardware compared with software. With software, it is potentially easy to release a patch. With hardware, if the problem is deeply buried in an underlying layer, we have maybe a few options. One, if we can solve it at the software layer, we can have a workaround. If we can address it underneath, in the firmware, we can still release a firmware patch, which could be delivered over the air or through some mechanism supported by our equipment providers.
But if we end up having to address it in the hardware itself, which is the rare case, then the timing becomes much harder to work out, sometimes, with the academic researchers. And that is the patience part of it: they have to work with us and allow us to protect the interests of the end customers.
[00:14:51] Camille Morhardt: And does industry as a whole agree on the amount of time that’s allowed? You know, how much time do you get? Is it different for software and hardware? And does all of industry agree, let alone researchers?
[00:15:10] Jason Fung: Yeah, another great question. I don’t believe there is a common timeline that everybody would say “great!” to. As you can imagine, one side would say the faster the better; the other side would say, “give us more time so that we can do something more thorough and coordinate better.” And it is not just the company itself that needs to do that work, but the ecosystem: the partners, maybe the OS vendors in the software community, the IT departments that consume those products, all need to be able to test well up front. So it’s multiple layers of complexity.
But to answer the question, I believe a minimum of 90 days is required to triage the issue, reproduce the issue, identify mitigations, and make sure the mitigations are effective at different scales of the product and for different usage models of the product. We release it to the partners and the ecosystem and allow them to do the same thing. So a minimum of 90 days is the commonly accepted practice.
[00:16:10] Camille Morhardt: Does industry work with academia in terms of establishing or setting research trajectories? Like what is industry worried about in technology? Or is academia just picking based on their own interests?
[00:16:29] Jason Fung: Yeah. So this is actually part of my job: to help share what is important to the industry with our academic partners. Going back about seven or eight years, when I first started on this academic engagement, I was coming from working on Intel products and trying to secure them. And the problem was, when I looked at hardware security tools and methodologies, I didn’t find that many available outside, either as open source or as commercial solutions.
So I looked deeper to see what the problem was. When I talked to our academic partners, as I mentioned earlier, I really respect them: they are very intelligent, and they are usually at the forefront of the technology. But when I asked them what they were researching in a hardware security context, a lot of them had been spending their time solving the problems that the money, the funding, directed them to look into. As you can imagine, the U.S. government naturally has a lot of interest in securing the supply chain, making sure chips are not embedded with Trojans, and there are also cryptography-related improvements being driven. So when you talk to our academic partners, many of them are working in these spaces, which is important, but it is not the complete picture.
So that also started my involvement in this journey: we have many other hardware security problems that we need academics to work on. And that journey goes like this: you go to certain conferences and share what the company cares about, what the industry cares about, where the gap areas are. Then people start conversations with you. You give them maybe some funding so they can work on solving those problems. And when people see, “oh, wow, these are really interesting problems!”, more and more people join the cause.
And that’s also one of the motivations for why we started this Hardware Common Weakness Enumeration: we want to show the world the common issues we see in day-to-day product development and where we need more help. One interesting encounter we had: as security researchers here, we also file patents and publish papers. I remember one time we submitted a paper to a really great conference, and we thought, “oh yeah, we are solving a big problem; they should like our paper.” In the end, when we received the reviewers’ feedback, they basically asked what made us think the issue we were solving was a top issue. They had never heard about the issue, because we as an industry had failed to tell them what the biggest problems are. And that also motivated us to share more broadly: “yes, these are some of the challenges.”
So to answer your question: it is proactive work that the industry needs to do. And then, on the academics’ side, they are absorbing all this information, internalizing it, and figuring out what their research direction going forward should look like.
[00:19:35] Camille Morhardt: What would your dream team of hackers be if you were going to assemble a team? I mean, how many people would you want, and what kinds of different things would you want them to focus on? And if you want to scope that down to hardware, that’s okay. Jason’s Ocean’s Eleven or whatever.
[00:19:54] Jason Fung: So I am looking at a multidisciplinary team. I really enjoy the benefit of working with folks coming from different backgrounds and perspectives, helping one another continue to learn, and helping to balance out skillsets the team would otherwise lack. From a research angle we take in, for example, some very traditional security research areas, with talents coming from that point of view: “I know how to do threat modeling.” “I understand the system architecture.” “I can go deep, dive in, and find the issues at hand for different classes of product.” It can be a server, a client, IoT, an FPGA, et cetera.
And then for each kind of issue, you need a specific skillset to find those problems. You may need a radio expert to be able to find out, “Hey, this radio is emitting certain signals, and they can be captured by another one.
And what if I have a side channel happening?” But when you talk about the side channel, the radio expert may say, “You know what? I can only work on the RF part of it.” If you are talking about applying statistical analysis to break crypto key material out of all the traces that we have collected, then you need another skillset, one specialized in identifying these patterns and able to perform that analysis. So you need a side channel expert on the team.
Now, when you talk about analyzing the data, is the right way just to brute-force it, or maybe we can apply machine learning techniques? So now you may want to introduce a machine learning expert. But all these hacking approaches may not give us comprehensive coverage of the product. You also want to be able to say that, by the time we finish this analysis, the product has a certain level of confidence. How can we achieve that? Fuzzing? Formal verification? All of these are very specialized skillsets.
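The statistical analysis Jason describes, recovering crypto key material from collected side-channel traces, can be sketched in a few lines. Below is a minimal, hypothetical Python example of correlation power analysis on simulated traces; the leakage model (Hamming weight of plaintext XOR key, plus Gaussian noise), the trace count, and every name in it are illustrative assumptions, not anything from Intel’s actual tooling.

```python
# Minimal correlation power analysis (CPA) sketch on simulated traces.
# Everything here is a toy assumption: real attacks target measured power
# or RF traces and a real cipher's intermediate values.
import numpy as np

rng = np.random.default_rng(0)

SECRET_KEY_BYTE = 0x3C   # the value the analyst tries to recover
N_TRACES = 2000

def hamming_weight(x: int) -> int:
    return bin(int(x)).count("1")

# Simulate captured traces: leakage proportional to the Hamming weight of
# (plaintext byte XOR key byte), plus measurement noise.
plaintexts = rng.integers(0, 256, size=N_TRACES)
leakage = np.array([hamming_weight(p ^ SECRET_KEY_BYTE) for p in plaintexts])
traces = leakage + rng.normal(0.0, 1.0, size=N_TRACES)

# For every possible key byte, correlate predicted leakage with the measured
# traces; the correct guess produces the strongest correlation.
scores = []
for guess in range(256):
    predicted = np.array([hamming_weight(p ^ guess) for p in plaintexts])
    scores.append(abs(np.corrcoef(predicted, traces)[0, 1]))

best_guess = int(np.argmax(scores))
print(f"recovered key byte: {best_guess:#04x} (actual: {SECRET_KEY_BYTE:#04x})")
```

This is exactly the pattern-finding skillset Jason points to: the radio expert captures the traces, and a side channel expert turns them into key material with statistics.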
So in my dream team, we need experts covering all of these aspects, and then they can work together. And one thing I really care about is not just the skillset of the individual, but also their mentality: being passionate, being curious, being ready to learn new things and to collaborate well with one another. I think these are all the things that will make a dream team come true.
[00:22:15] Camille Morhardt: That’s interesting, because sometimes, in academia and actually also in industry, we end up in silos, you know, and you get more and more specific about what you’re good at, and then you have big gaps; whereas if you end up working together, it might just take a lunch together to figure out, “oh my gosh, I hadn’t thought of that angle!”
[00:22:43] Jason Fung: Exactly right. And that inspiration, that discussion, that brainstorming as a routine practice is really hard to come by, especially right now when we are in COVID. So one thing we also do is try to create these opportunities in a more intentional manner. One trick we have been using is to form a study group focused on a particular discipline, say, “How can we apply static analysis techniques to address hardware issues?” Then we bring in like-minded people, and weekly we rotate topics and have individuals share what they have learned. By sharing information, we also get to critique one another’s viewpoints, learn where the gap areas are, and come up with new inspiration for where to really spend our time. So I think that intentional discussion also helps to uncover more of the blind spots people may have.
[00:23:41] Camille Morhardt: Well, let me ask you one thing. I think you started a Capture the Flag competition in hardware? Can you explain what that is?
[00:23:48] Jason Fung: Right. So on the journey of trying to bring academics more awareness of the industry’s problems, the first step we chose was to speak at conferences and create tutorials. Those are great opportunities to share the work. But when you look at it, the people attending the conferences get to understand some high-level pictures, yet they don’t get to see the hands-on part. There is still a distance between carrying on a conversation and carrying out research in that discipline.
So at that time we thought, “Hey, how can we make this awareness-building journey even more fun and more hands-on?” We partnered with our researcher partners in academia, from Germany and from the US, and we pulled together this hardware Capture the Flag competition. We showcase the common weaknesses we are aware of in an open-source SoC. The open-source SoC basically contains the regular RTL code that designers or verification teams would review and polish, but we embedded these bugs into that big pool of RTL code. Then we opened it up for the competitors to find them. We give them 48 hours of straight, non-stop action in a conference setting, and invite teams of maybe three or four to join together and find as many issues as possible.
One thing they walk away with is, first, “oh, this is what you mean by these issues,” because they get to see them and play around with them. Second, they also understand the challenges encountered by our verification teams, because of the very short period of time you have to verify your RTL before it is released as a product. We create that 48-hour window with so many bugs inside the RTL to find, the more the better, so they come to understand: “I really need tooling. I really need a fantastic methodology to help me understand the RTL, find all the issues, and be able to report back.” That brings awareness to the attendees, so that hopefully they will be inspired to work on research disciplines related to hardware security in a more intentional manner, relevant to the hardware industry’s problems.
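To give a feel for the kind of planted weakness competitors hunt for, here is a toy behavioral model in Python; the real competition bugs live in RTL (Verilog or VHDL), and this register, its debug signal, and the bug itself are hypothetical, written in the spirit of hardware CWE-1234 (Hardware Internal or Debug Modes Allow Override of Locks).

```python
# Toy software model of a hardware lock-override weakness (illustrative only;
# actual CTF challenges embed bugs like this in RTL, not Python).

class LockedConfigRegister:
    """Models a configuration register that should be immutable once locked."""

    def __init__(self) -> None:
        self.value = 0
        self.locked = False

    def lock(self) -> None:
        """Lock the register; all later writes should be ignored."""
        self.locked = True

    def write(self, data: int, debug_mode: bool = False) -> None:
        # BUG: debug_mode bypasses the lock check, so anyone who can assert
        # the debug signal rewrites a "locked" register. Spotting this kind
        # of override path is the competitor's goal.
        if not self.locked or debug_mode:
            self.value = data

reg = LockedConfigRegister()
reg.write(0x1)
reg.lock()
reg.write(0xBAD)                    # correctly blocked by the lock
assert reg.value == 0x1
reg.write(0xBAD, debug_mode=True)   # the planted weakness: lock bypassed
assert reg.value == 0xBAD
print("lock bypassed via debug override:", hex(reg.value))
```

Finding one such bug in a handful of lines is easy; finding dozens buried in a large SoC’s RTL inside 48 hours is what drives home the need for tooling and methodology.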
[00:26:11] Camille Morhardt: Jason, this is a really, really interesting conversation. I’m so glad that you’re working on this and focused on it. And I think it’s really interesting that such deep partnerships occur between industry, academia, and what we traditionally think of as hackers, putting a different spin on it, thinking of the good side of it and how it’s helping evolve technology.
Thank you so much for joining me today and defining what hacking is and what offensive security research is.
[00:26:45] Jason Fung: Thank you, Camille.
[00:26:46] Camille Morhardt: And I also just want to point out, you did allude to the different kind of mindset of, we’ll say, a hacker or offensive security researcher. We have another episode, with Isaura Gaeta, where we talk about diversity and inclusion in cybersecurity, and one of the key topics she addresses is just what you mentioned: a different mindset is part of diversity and inclusion. So if anybody’s interested in hearing that, they can check out that episode as well. Thanks again, Jason.
[00:27:19] Jason Fung: Thank you, Camille.