InTechnology Podcast

What a Former Hacker Brings to Her Canonical CISO Role (196)

In this episode of InTechnology, Camille gets into ethical hacking and security with Stephanie Domas, CISO at Canonical. The conversation covers ethical hacking and Stephanie’s recent book, how companies can shift their strategies for security and for managing teams of ethical hackers, as well as Stephanie’s outlook on technologies like AI and open source for security.

Read Stephanie’s book x86 Software Reverse-Engineering, Cracking, and Counter-Measures here.

To find the transcription of this podcast, scroll to the bottom of the page.

To find more episodes of InTechnology, visit our homepage. To read more about cybersecurity, sustainability, and technology topics, visit our blog.

The views and opinions expressed are those of the guests and author and do not necessarily reflect the official policy or position of Intel Corporation.

Follow our host Camille @morhardt.

Learn more about Intel Cybersecurity and the Intel Compute Life Cycle (CLA).

The Why and How of Ethical Hacking

Stephanie begins by discussing her and her husband’s book, x86 Software Reverse-Engineering, Cracking, and Counter-Measures. She explains that the goal of the book was to help people curious about security and software better understand how software and computers work and how to use that knowledge to reverse-engineer software. Ultimately, they wanted to take the theory behind how software works and put it into practice. Much of this expertise comes from Stephanie’s own career as an ethical hacker. She tells Camille that the goal of ethical hacking is to find security vulnerabilities before malicious actors do. This is done by working only within systems the hacker has been given permission to test and that are in a safe state, such as a lab setting. The ethical hacker or security researcher then works with the manufacturer, or another entity with the capability to act, to close the identified security holes before they get exploited for real. Stephanie points to the success of ethical hacking in the National Vulnerability Database (NVD), whose many preemptively discovered vulnerabilities far outnumber those on the Known Exploited Vulnerabilities (KEV) list.

Shifting Security Strategies and Team Management

As companies grow, their perspective on security also needs to change, Stephanie explains. She mentions the “shift left” perspective: the more proactive a company can be, the less reactive it will have to be. However, she notes that companies often start out purely reactive when it comes to security. There eventually comes a point where it’s unsustainable to only react to exploited vulnerabilities, and that’s when the shift to proactive begins. The hard part about a proactive security strategy is that it can at times be disruptive to both customers and developers. Stephanie suggests first taking smaller steps that are the least impactful to developer workflows, while acknowledging that the process will become disruptive no matter what. At the same time, the benefits of proactive security include empowering customers to make informed security decisions and acting as a forcing function for development teams to consider how people actually use their product.

As for managing teams of security researchers or ethical hackers, Stephanie says their curiosity should be encouraged, not hindered. This means putting guardrails in place where necessary but also not imposing too many specific metrics. She believes encouraging researchers’ curiosity leads to more innovative results.

A CISO’s Take on AI, Confidential Computing, and Open Source

Camille asks Stephanie for her take on new technologies and their impact on security. When it comes to AI, she touches on emerging AI security regulations and how they are guiding developers in writing secure software. Stephanie also explains how AI is already built into many security tools like detection engines, threat monitoring engines, and compliance tools. She also praises confidential computing and the ability to have hardware-backed encryption of data in use. When it comes to open source, Stephanie details how insecure open source seemed decades ago and traces the transition to today’s enterprise-ready open source. However, she also emphasizes the need for security documentation and security hardening guidelines, which are still often lacking in software today.

Stephanie Domas, CISO at Canonical

Stephanie Domas and her husband Christopher Domas are authors of the book x86 Software Reverse-Engineering, Cracking, and Counter-Measures. She has been Chief Information Security Officer at Canonical since 2023, where she leads the company to be a top-trusted computational partner in the open source space. Some of Stephanie’s previous roles include Chief Security Technology Strategist and Senior Director of Security Technology at Intel, Executive Vice President and CTO at MedSec, and Founder and Business Line Manager of DeviceSecure Services at Battelle. She is also currently on the Technical Advisory Board for MedSec, a USA Review Board Member at Black Hat, and an Official Member of the Forbes Technology Council. Stephanie has a degree from The Ohio State University in electrical and computer engineering with a focus on microprocessors.


Stephanie Domas  00:11

As you become aware of a security problem, you fix it; a customer has a security request, you patch it, right? So you become very reactionary. At a certain point, there’s so much pain in being reactionary that you decide, “You know what, I need to start being proactive, because this reactionary approach is not sustainable.”

Camille Morhardt  00:28

Hi, welcome to the InTechnology podcast. I’m your host, Camille Morhardt. And today we are going to talk with the CISO–Chief Information Security Officer–of Canonical, which produces Ubuntu. Welcome to the podcast, Stephanie Domas.

Stephanie Domas  00:43

Thank you. I’m very excited to be here.

Camille Morhardt  00:45

So I know you from your prior career at Intel as Chief Security Technology Strategist. Before that, you built a couple of businesses that helped protect embedded systems, like in health care. And before that, you spent about ten years with defense contractors doing defensive research or ethical research, which I think the rest of us would commonly know as “hacking.” Also, congratulations, because you and your husband just published a book; we have the link below. This is on x86 reverse engineering, essentially, how to hack most of the compute systems that exist in the world. What’s the title of the book?

Stephanie Domas  01:25

So the title of the book is x86 Software Reverse-Engineering, Cracking, and Counter-Measures.

Camille Morhardt  01:30

And the other thing that I think is very interesting is not only did you write this book with your husband recently and switch to become the first Chief Information Security Officer at Canonical, but also you guys live on a farm, correct?

Stephanie Domas  01:47

Yeah, it’s one of those weird dichotomies where your careers are very techie, but actually, when we’re done, we both like to walk away from the computer and be outside. And so, yeah, we love living out on a farm. We have no neighbors; my horses are in the backyard with my donkeys, there’s two of them, and a sheep.

Camille Morhardt  02:08

And you have four kids, too.

Stephanie Domas  02:10

We do. Yes, we were blessed with four kids, which means, between the farm work and the kids and our jobs, there is never a relaxing or quiet moment, but I love every second of it.

Camille Morhardt  02:22

Very cool. So what is x86?  And what does reverse engineering of it entail?

Stephanie Domas  02:28

You know, our goal with the book was essentially to take curious security- or software-minded people and peek behind the curtain of how software and computers actually work, and then use that knowledge to crack software. So x86 is the most prolific architecture across laptops and servers. Learning how an x86 computer works is not only powerful from a reverse-engineering and cracking perspective, it also makes people stronger developers and helps them understand defense better, because they actually understand what’s happening. They can understand code optimizations, efficiency, how you debug, how compilers affect not just optimizations but also the security of your system; and how, ultimately, it also comes down to chip selection and what you want to do with that architecture.

Our goal was also not just to speak theory, but really to turn theory into practice. So the book also has an online companion website with about a dozen different labs. Each lab introduces readers not only to cracking capabilities and techniques, but also walks them through industry-used tools, so you’re going to get hands-on experience with a lot of the different tools out there–all of which are free–to really not just learn the theory and the techniques, but also learn how to use best-in-class tools.
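
For a flavor of the kind of first step that tooling automates, here is a minimal sketch of disassembling raw x86-64 machine code so you can read what the processor will actually execute. It uses the free Capstone engine via its Python bindings; the byte string is a hypothetical example for illustration, not one of the book’s labs.

```python
# Minimal sketch: disassemble a few bytes of x86-64 machine code with
# Capstone (pip install capstone). The bytes below encode a hypothetical
# tiny function: mov rax, 1; add rax, rdi; ret
from capstone import Cs, CS_ARCH_X86, CS_MODE_64

CODE = b"\x48\xc7\xc0\x01\x00\x00\x00\x48\x01\xf8\xc3"

md = Cs(CS_ARCH_X86, CS_MODE_64)        # x86 architecture, 64-bit mode
for insn in md.disasm(CODE, 0x401000):  # 0x401000 is an assumed load address
    print(f"0x{insn.address:x}:\t{insn.mnemonic}\t{insn.op_str}")
```

Pointed at bytes pulled from a real binary, the same loop is the starting point for the cracking techniques the book walks through: once you can read the instructions, you can start reasoning about how to change them.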

We believe also it’s not just for the reverse engineers and crackers out there, so we also circle back to defensive techniques, around how these attacks can either be eliminated entirely or just made more difficult. Because as we know, in the security space, it’s always an arms race, right? There’s always attack and defense; the defense gets a little better, and the attack figures out a way around it.

And then I’ve always found the legal landscape of security interesting as well. And while neither of us are lawyers, right, we have spent some time trying to understand security research and regulation. So we also equip our readers with some sections helping them understand what fair-use security research means, based on our interpretation, so they understand, if they’re trying to do this for fun, where the line in the sand essentially is.

Camille Morhardt  04:45

So you’ve essentially written a manual on how to hack the world’s most prolific compute architecture?

Stephanie Domas  04:52

Correct, at least the software on it. There’s the CPU itself, right? And so our book, while teaching you how the x86 architecture works, isn’t really about the CPU or anything below the operating system. It’s x86 for the purposes of learning how to manipulate things at the software level.

Camille Morhardt  05:08

Mm-hmm.  So you started off your career as what they refer to as an “ethical hacker.” And that, I suppose, is a case of one person’s freedom fighter being another person’s rebel; it sort of depends on your allegiances and where they lie. So explain why you qualified yourself as an ethical hacker and what kinds of things you were doing.

Stephanie Domas  05:30

So traditionally, ethical hackers’ goal is to understand the tools and techniques the nefarious or malicious hackers are using, by doing those things specifically for the purpose of driving good–in the sense that, yes, we are going to be attacking a system, but typically, part of that ethical engagement is that you’re only researching or attacking systems you have been given permission to, and that are in a safe state. You know, you mentioned that I have done some research in the medical device space, right? Think of only ever researching a medical device that is not attached to a patient; it’s in a research setting, a lab setting. So ethical hacking is about rules of engagement: that you know it’s a safe setting, that you know there aren’t going to be adverse effects from testing that thing; you’re not trying to attack an autonomous vehicle while it’s driving. And then what you do with the results is the other piece of that puzzle.

An ethical hacker’s goal, right, is to try and drive good by finding things before the bad guys do, and bad is always a perspective, right? So understand, it depends how you’re thinking of it. But in general, my goal–and most ethical researchers’ goals–is to find something in safe conditions, and then either work with or disclose to that manufacturer, or somebody who has the ability to drive impact or change, to essentially close those security loopholes before they’re used for something more.

When you look at things like the National Vulnerability Database, the NVD, a tremendous number of those vulnerabilities are from research settings, right? They’re vulnerabilities that were preemptively or proactively found by security researchers and ethical security hackers, in an effort to reduce that attack surface before bad guys can find them. The Known Exploited Vulnerabilities, the KEV, is a different database. And if you look at the KEV versus the NVD, the number of known exploited vulnerabilities is a small subset of the known vulnerabilities in the space. I think that comes from those ethical security hackers proactively trying to find things: you disclose them, you share them with the world so people can make informed decisions and increase their security posture, and only a small subset of those ever become maliciously used, or were found because they were being maliciously used.
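
Both databases are public, so the comparison Stephanie describes can be reproduced directly. A rough sketch follows, assuming the current public feed locations and response field names (either could change):

```python
# Rough sketch: compare CISA's Known Exploited Vulnerabilities (KEV)
# catalog against the total CVE count in the NVD. URLs and field names
# are assumptions based on the public feeds at time of writing.
import requests

KEV_URL = ("https://www.cisa.gov/sites/default/files/feeds/"
           "known_exploited_vulnerabilities.json")
NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0?resultsPerPage=1"

kev = requests.get(KEV_URL, timeout=30).json()
nvd = requests.get(NVD_URL, timeout=30).json()

kev_count = len(kev.get("vulnerabilities", []))  # known exploited CVEs
nvd_count = nvd.get("totalResults", 0)           # all CVEs published in the NVD

print(f"Known exploited (KEV): {kev_count:,}")
print(f"All published (NVD):   {nvd_count:,}")
if nvd_count:
    print(f"Exploited share:       {kev_count / nvd_count:.2%}")
```

The exploited share typically comes out to a small fraction of published CVEs, which is the gap Stephanie attributes to proactive, ethical research.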

Camille Morhardt  07:50

So now, just to jump back to the present, it’s only been about six months that you’ve been Chief Information Security Officer at Canonical, and it’s the first one that they’ve had. So tell us which it was: did they create the job and hire you, and you said you got to do this? Or did they make a position and then you applied?

Stephanie Domas  08:42

No, they were actively looking for a CISO, but they were looking for something very specific. There are a lot of different types of CISOs out there, and they were looking for a very hands-on technical one that could really dive into the bits and bytes and the products, and actually get down in and write code and all sorts of fun stuff. So they had actively been looking, and then it just happened to be a really good fit once we started talking.

Camille Morhardt  08:43

And so what is it like becoming CISO of a company with–how many people are working at Canonical?

Stephanie Domas  08:51

We’re at 1,100 people.

Camille Morhardt  08:52

Okay, and you’re all over the world, a global footprint, right?

Stephanie Domas  08:55

Yes, that’s one of the most interesting aspects of it. This is our 20th anniversary, and from day one, right, Canonical has been a fully-remote company, with this idea that we just want smart people, wherever they are in the world. But from a security perspective, one of the interesting things is that those 1,100 employees are across 72 different countries. It’s an immensely distributed workforce.

Camille Morhardt  09:21

Are they bringing their own computers and mobile devices?

Stephanie Domas  09:26

They are currently. That’s actually something that I’m working on changing, though; we will be moving to corporately managed systems.

Camille Morhardt  09:33

And is that something that happens right around the 1,000 employee mark? Or is that not at all based on size, or is that more like history of expansion and scale?

Stephanie Domas  09:42

Yeah, at least from my perspective, it’s loosely related to size. I think it’s also just related to success and impact. So, you know, when you think of smaller companies, you typically have one product; you’re not as interesting to attackers, and you have much smaller real estate. Not that it’s easy, but it’s just a more straightforward game. As you start to get into the medium size–arguably, you’d say you grew to medium size because you’ve proven you’ve got an interesting product–you’re now expanding the portfolio, you’re probably expanding to different countries, you’re expanding to different markets. And all of that adds immensely to the complexity. So the increased number of employees is a piece of it, but when you get to that certain level of success, the complexity just explodes.

You also start to sell in more countries, which means your regulatory landscape and security requirements change, and your interest to an attacker changes. And so I actually think it’s super interesting to be at that cusp, where the company has such a broad impact across all of compute. I mean, it’s actually pretty incredible what 1,000 people have been able to accomplish. And I will also say, right, a good part of that is because we have a lot of engagement from the community. It is open source, so the thousand of us at Canonical are a fraction of the contributors to the products. And we have almost 30 different products, right? Ubuntu is the one we’re most known for, but they’re used all over the world.

Camille Morhardt  11:12

I’m interested in this precipice for companies, like you say, where you’re not small anymore and you haven’t gone completely huge. So for companies who are there, what percent is preventative? What percent do you put resources towards mitigation? How are you balancing that kind of thing? Or how do you think that sort of thing is often balanced in companies at that size? Where should the emphasis be?

Stephanie Domas  11:36

I know people use this phrase a lot, but the “shift left” perspective, right: the more you can be proactive, the less you will have to be reactive. What tends to organically happen, though, is you start with the reactive. I think there is this evolution where that precipice is when you start to have so much pressure on the reactive that you realize you need to start being more proactive. As you become aware of a security problem, you fix it, you patch it; you become aware of a new problem, you patch it; a customer has a security request, you patch it, right? So you become very reactionary. At a certain point, there’s so much pain in being reactionary that you decide, “You know what, I need to start being proactive, because this reactionary approach is not sustainable. If I can put effort into being proactive, I won’t have to be so reactive.”

The proactive stuff is a lot harder because it changes everything, right? You can do reactive security patching without having to upset your developers at all; your roadmap stays fine, you just react. When you’re trying to switch to proactive, suddenly you’re upsetting the development process. You’re forcing new tools and saying, “if this tool comes back with a high or critical finding, you can’t release.” That is incredibly disruptive. Customers always want good security, right? But they want their feature functionality, too, and so when you say, “that feature is getting delayed three months because I have to do this thing that will be transparent to you,” that is a harder sell. So everyone should strive to put as many resources into proactive as they can. But it is also much harder; it is easier to throw resources at reactive.

Camille Morhardt  13:17

Is there something that a lot of companies can do that doesn’t cause such a strain? I imagine it’s very top-down if you’re going to make hard gates and whatnot with product; that’s pushing timelines, that’s a big pivot.  Is there something companies can do kind of in the interim to help?

Stephanie Domas  13:34

Some of the easiest stuff to do is the things that balance that impact to developers, right. So when you can say, “I just need you to add this tool to your CI pipeline,” that’s lower impact than saying, “I need you to go and document all of your functions and your input assumptions.” But there is some balance there, because there are only so many of those, right? You can throw static code analysis into your CI/CD pipeline, and that’s great, and that moves the needle meaningfully. There are only so many things that can be caught at that level, though.
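
One low-friction way to wire that in is a gate script the pipeline runs after the analyzer: parse the findings and fail the build only on high-severity issues. A minimal sketch follows, using Bandit as one example of a free Python static analyzer; the severity threshold and flags are assumptions you would tune to your own pipeline.

```python
# Sketch of a CI gate: run Bandit (pip install bandit) over the repo and
# block the build if any high-severity finding comes back.
import json
import subprocess
import sys

# Bandit exits nonzero whenever it finds issues, so we parse its JSON
# output rather than trusting the exit code directly.
proc = subprocess.run(
    ["bandit", "-r", ".", "-f", "json", "-q"],
    capture_output=True, text=True,
)
report = json.loads(proc.stdout)

high = [r for r in report.get("results", [])
        if r.get("issue_severity") == "HIGH"]

for issue in high:
    print(f"{issue['filename']}:{issue['line_number']}: {issue['issue_text']}")

sys.exit(1 if high else 0)  # nonzero exit fails the CI job
```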

And tech people really don’t like when you say this, but security documentation is so impactful. It is really not very fun for people to write, but documenting security assumptions and security controls and hardening guidelines is so impactful because, one, it empowers your customers to make informed security decisions, but also you would be surprised how much it is a forcing function for your development teams to realize, “hey, you know, maybe we should have added this option,” or “actually, that doesn’t really make sense.” It’s a forcing function for you to think about how people would actually use this, how they would actually lock it down and be secure. If you can write a cohesive story that empowers your customers to do that, that’s actually really powerful. So you have the things that have the least impact on developer workflows, but you run out of those very quickly, and then it becomes very disruptive.

Camille Morhardt  15:03

What is the best way to manage or help a team of researchers, aka hackers, thrive? Like if you’re a company and you’re trying to bring them in, or even if they don’t work in your organization but you want to work with them so that they can help you. How do you do that?

Stephanie Domas  15:23

Yeah, honestly, sometimes it’s as simple as, you know, if you’re hiring a contractor, yes, there has to be specifically scoped work; you can’t have it completely open-ended. But if it’s bringing people inside of your organization and really just wanting them to be successful at finding things: most security researchers are tinkerers at heart; they are just super, super curious people. And they will do their best work when you let them follow the threads they find, right? They will follow the things they think are interesting. So don’t put on so many rules and restrictions, like “everybody has to find five vulnerabilities every quarter.” A lot of the metrics that I see people try to put on security researchers can actually have the opposite effect, pushing them to meet their metrics instead of recognizing that, at the end of the day, these are just simply super, super curious people. They’re going to do their best work when you empower them to be really curious. Just put some guardrails in place where you want them to at least stay in the right area. And that also extends past the immediate thing they’re looking at, right? That curiosity isn’t just about one thing; they may be curious about adjacent products, they may be curious about some of the supply chain. Letting that curiosity empower them to pull those threads will usually turn up something really interesting. The best security researchers in the world that I’ve met, the reason they leave organizations is that those organizations try to control them too much.

Camille Morhardt  16:54

How are you approaching AI? How are companies dealing with this right now?

Stephanie Domas  16:59

Yes, that is immensely complex. A lot of the big countries are coming out with AI security regulations. The EU Cyber Resilience Act has some AI cybersecurity in it; there is the EU AI Act, which also has some cybersecurity inside of it. There are US-based ones, and there’s a Canada-based one. So it’ll take years for those regulations to manifest into something, at the ISO level, that is potentially consistent. The technology is moving so fast that it’s just hard to keep up with. I mean, even the AI-based products that Canonical produces–from our perspective, it’s still software, right? So how do we write secure software? Is it resilient? Is there patch management inside of it? And then those using it in AI use cases have to actually really think about those security regulations around AI.

Camille Morhardt  17:49

Are you using AI to actually help protect?  Are you using it defensively?

Stephanie Domas  17:57

We are, in the sense that it is built into a lot of the common security tools out there. So we are using it as a force multiplier, right, in detection engines and threat monitoring engines; it is under the hood inside a bunch of these tools. I think you’d be hard-pressed to find a security tool that doesn’t have AI in it. I mean, even the compliance tools that we’re using–you can ask a question like, “when was the last time this product was pen tested?” and it’s using AI to scan our documents and tell us when that product was last tested. So even our documentation tooling has AI in it. We are leveraging it; it is a really impressive force multiplier inside a bunch of our tool suite.

Camille Morhardt  18:40

What do you think are some of the technologies that are coming out now or kind of growing, that will have a tremendous impact in security?

Stephanie Domas  18:50

Obviously, I think very highly of confidential computing because, as you mentioned, I used to be the Chief Security Technology Strategist at Intel, so confidential computing is something very near and dear to my heart. I really do think it’s going to move the needle in a meaningful way to have that hardware-backed encryption of data in use. Isolation has always been one of those techniques that is really powerful in the security space, and I’m excited to see where that starts to go.

I do think, as we start to get a lot more of these AI-based tools–you know, I mentioned so many of the tools out there nowadays have AI engines under them–they’re going to become so much more intelligent; they’re going to be able to ingest a lot more data than they used to and draw meaningful, reasonable conclusions from it. Hardware telemetry being used to drive security insights on a system–I get so excited for what that’s going to mean in the future. Because at the end of the day, if anything’s going rogue on that system, it’s getting computed on the hardware, right? The hardware level will always be that source of truth to tell you that something is not happening correctly. At the software level, the OS level, it may be hiding, it may be trying to obscure what’s happening; it may be trying to make it so whatever tools you have operating at the software level can’t see it. You can never hide it from the hardware. If the hardware can’t see it, then it’s not computing; the malicious thing’s not computing. So the more insight we can get into that, things like hardware telemetry and how we’ll be able to drive meaningful insight from it, I just think it’s so exciting to think about.

Camille Morhardt  20:26

Anything else?  I don’t wanna cut you off.

Stephanie Domas  20:31

I will go ahead and plug open source at this point, because while it’s not a specific technology, what I am starting to see is there was this pendulum, right, where decades ago open source was seen as the antithesis of security. From a commercial perspective, you didn’t want open source because you couldn’t trust it; it was just some people in their spare time in the garage writing some code. It wasn’t robust, it wasn’t tested. Enterprises didn’t really like the notion of open source. And it started to grow in popularity over the last decade or so, where you started to see more and more enterprise-ready open source.

You’re actually starting to see now this preference toward open source: the fact that I can look inside of this open source gives me more confidence, right? I can audit it, I can change it, I can do what I want with it. And so you’re starting to see this paradigm shift, where I’m actually seeing more sectors–even through regulation–now require things like open source. I’m very excited to see that shift, where open source is now seen as actually a really solid route forward for security.

Because it’s open source, I can make a more informed decision about it; I can control my own fate a little more; I can have more confidence in what’s happening on the system. And of course I’m also very biased, because I’m at Canonical. But it’s just that paradigm shift, both in CISOs and CTOs and their opinion of open source and how it plays a role in the enterprise, but also in the regulatory space: this acceptance, and in some cases promotion, of open source as the route to go for secure solutions.

Camille Morhardt  22:16

What can companies like Intel do to help the broader ecosystem? What are they doing? Well, and what are they not doing right now that they could be doing?

Stephanie Domas  22:28

So I’ll put two hats on here, right. I’ll put my CISO hat on for a second and circle back to some of the things I said before about managing the level of complexity of regulations and trying to balance what needs to be done for them. I’ll give an example: in the event of a security incident, the reporting obligations differ across all of these countries. If it involves California, that has different rules; there are the US regulations; there’s the UK. Every jurisdiction has its own. I am confident, right, that big companies have crosswalked these, that they have mapped out all of these things, and that’s not competitive information. So sharing some of those things would empower other companies to do the right things and meet the regulations, so they can concentrate on doing technical security and not have to spend a bunch of cycles redoing the same work all these big companies have already done. I would love to see more of that information sharing. People talk about security information sharing around threats and attacks they’re seeing on their network; I’d like to see a bit more around the actual infrastructure of these regulations and rules: how do you crosswalk them across the countries you work in?

From the developers’ perspective, again, open source: empowering developers to actually be able to contribute and mess with and see. So there’s the open source piece of it, but then there’s the security documentation and the security hardening guidelines. You want to empower developers to concentrate on the thing they’re really good at. If you have an AI developer who’s leveraging one of your solutions, you don’t want them to have to also be really savvy in AI security, right? You want them to be empowered to be successful with that piece of code without having to also be an expert in these other things. So just like you would spend a lot of time trying to get performance and correctness right in a piece of code, also give that documentation and those hardening guidelines, so they don’t have to understand all the attacks out there to say, “all right, I need to turn these four things on, and if I do that, one, I meet all these regulations that I’m probably going to be up against; and two, I have done the right thing in hardening my solution.” A lot of that peripheral documentation and guidance is just lacking in a lot of software.

Camille Morhardt  24:49

And you’re not talking at the super-differentiated, competitive, premium level. You’re talking, “here are some basic things that, really, if we could all do them, life would be better. Compute would be safer.” I’ll say it that way.

Stephanie Domas  25:32

Correct.  Yeah, I absolutely understand the way the market is: you want competitive products, you want differentiated features. So it’s not really an ask to get rid of that; that’s just not realistic. It’s: empower people to correctly use the things you have developed, so that they don’t have to be an expert in them to use them correctly.

Camille Morhardt  25:23

Well, Stephanie, you’ve gone from the Intel world to the open source world of Canonical, and from hacker to technical strategist to CISO.  So you’ve run the gamut of protection and defense.

Stephanie Domas  25:38

And now author!

Camille Morhardt  25:39

And now author, yeah. Congratulations. Thank you, Stephanie. Really appreciate it.

Stephanie Domas  25:44

Thank you for taking the time to talk and it was wonderful to see you again.

