InTechnology Podcast

Why We Will Never Get Rid of Side Channels – with Daniel Gruss & Anders Fogh (200)

In this episode of InTechnology, Camille gets into the world of side channels with episode co-host Anders Fogh; Fellow & Security Researcher at Intel, and guest Daniel Gruss;  Associate Professor at the Graz University of Technology. The conversation covers the relationship between resource sharing and side-channel attacks, common challenges and ways to manage them, AI usage and its impact, securing critical off-earth infrastructure, and more.  

To find the transcription of this podcast, scroll to the bottom of the page.

To find more episodes of InTechnology, visit our homepage. To read more about cybersecurity, sustainability, and technology topics, visit our blog.

The views and opinions expressed are those of the guests and author and do not necessarily reflect the official policy or position of Intel Corporation.

Follow our host Camille @morhardt.

Learn more about Intel Cybersecurity and Intel Compute Lifecycle Assurance (CLA).

Balancing Resource Sharing and Side-Channel Exploitation

Resource sharing plays an integral role in side-channel attacks. Anders explains that the more resources are shared, the greater the risk of leaking data through the use of those resources, paving the way for side-channel exploitation. However, mitigating side-channel attacks is not as simple as ending resource sharing. Daniel explains that a common challenge is that resources can be abstract: side channels can emerge from seemingly innocuous shared resources, such as room temperature fluctuations caused by computational activity. He adds that efficiency, particularly economic viability, is also a crucial consideration, especially as computing is projected to account for roughly 20-25% of energy consumption by 2030. Anders and Daniel note that addressing this challenge means optimizing resource utilization, which inherently introduces new side channels and security concerns. This means that side channels will never go away. However, they agree that they can be managed to mitigate security risks.

To manage side channels, Anders begins by emphasizing the importance of controlling access to prevent attackers from gaining insights into sensitive information. He suggests implementing access control mechanisms, randomization, and noise to disrupt the attacker’s ability to exploit side channels effectively. Daniel adds to this by mentioning cryptographic techniques such as masking and hiding. Hiding involves increasing the noise level relative to the signal or obscuring the signal amid noise to make it harder for attackers to extract meaningful data. On the other hand, masking involves splitting the secret into multiple parts, making it more challenging for attackers to obtain useful information even if some data is leaked. However, Daniel notes that while these techniques can increase the difficulty and cost of side-channel attacks, they may not completely prevent them, especially for generic problems.
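To make one of these mitigation ideas concrete (an illustrative sketch, not code discussed in the episode): a constant-time comparison removes the timing signal that an early-exit comparison would otherwise leak about how many bytes of a secret matched.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical illustration: compare a guess against a stored secret without
 * leaking, via early exit, how many leading bytes matched. The loop always
 * touches every byte, so its timing no longer depends on the secret. */
int constant_time_equal(const uint8_t *a, const uint8_t *b, size_t len) {
    uint8_t diff = 0;
    for (size_t i = 0; i < len; i++) {
        diff |= a[i] ^ b[i];   /* accumulate differences instead of returning early */
    }
    return diff == 0;          /* 1 if equal, 0 otherwise */
}
```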

Tackling Side-channel Security for AI Usage and Space-based Infrastructure

Reviewing potential emerging risks, Camille engages Daniel and Anders on how increased AI usage and off-Earth infrastructure will affect side channels. For AI, Daniel believes we are repeating the mistakes of the past, highlighting a concerning trend where code and data are again mixed in the same channel, now inside AI systems. This is prominent in scenarios where users can query AI chatbots and thereby influence the system’s behavior. Similarly, Anders explains that while AI enables more efficient bug detection, it empowers not only the defender but the attacker as well, raising open questions about whether hacking becomes more or less democratized. Despite this, they both believe that AI will not be a big game changer in side-channel security, as it can only identify variations of known vulnerabilities based on the input it is provided rather than fundamentally different ones.

Moving on to securing critical infrastructure beyond Earth, Anders identifies two primary challenges: protecting the links to these off-world systems and addressing the impact of the harsh space environment on computing hardware. The latter challenge calls for a rethink of how computers are designed to work in space. Daniel adds to this by drawing parallels between reliability mechanisms in computers and security measures. He suggests thinking about worst-case scenarios and how likely they are to happen, similar to how we think about security threats. By blending security principles with reliability mechanisms, Daniel believes we can make systems more robust and better able to handle challenges in space.

Daniel Gruss, Associate Professor at the Graz University of Technology


Daniel Gruss is an Associate Professor at the Graz University of Technology. He is world-renowned for having implemented the first remote fault attack running in a website, known as Rowhammer.js. In 2018, Daniel’s research team was one of the teams that found Spectre and Meltdown, regarded as some of the worst CPU vulnerabilities ever found, present in the vast majority of CPU architectures. His team also designed the software patch against Meltdown, which is now integrated into virtually every operating system. Daniel holds a Ph.D. in Computer Science from the Graz University of Technology.

Anders Fogh,  Fellow & Security Researcher at Intel


Anders Fogh has been a Fellow and Security Researcher at Intel since 2021. He has worked in software development since 1992 and is known for his innovation in information security, optical media including DVDs, and reverse engineering. Anders was the first to suggest a software-only mitigation for the infamous Rowhammer hardware exploit. Other notable contributions include the first open-source packer, co-authoring the first public generic unpacker for executables in the Windows environment, and inventing the first copy protection for DVD video that can be applied to recordable media. He was also instrumental in the development of patented video encoding technology.


Daniel Gruss  00:12

I think the biggest question that will always stay in this space of side channels is how can I prevent the practical exploitation of the side channel while not losing the efficiency gains that I get from sharing the resource?

Camille Morhardt  00:32

Hi, I’m Camille Morhardt, your host for the InTechnology podcast. And today I had an opportunity to speak with two world-famous offensive security researchers, or hackers: Daniel Gruss and Anders Fogh. Anders Fogh is my episode co-host. He’s a Fellow at Intel, and he’s a security researcher there, well known for his work in information security, optical media, and reverse engineering. Daniel Gruss is world renowned for having implemented the first remote fault attack running in a website, known as Rowhammer.js. Interestingly, Anders was the first person to suggest a software-only solution to the Rowhammer exploit. And back to Daniel, his research team was one of the teams that found Spectre and Meltdown in 2018. Spectre and Meltdown are some of the worst CPU vulnerabilities ever found and exist in the vast majority of CPU architectures. So this was a huge find. And his team also designed the software patch against Meltdown, which is now integrated into virtually every operating system.

Camille Morhardt  01:01

I wonder if Anders, could you start by explaining why what Daniel did matters, why everybody should grasp what this is? 

Anders Fogh  02:08

While Daniel wasn’t the one to find Rowhammer, he has been a protagonist in showing the possibilities of Rowhammer, which is one of the two new classes of issues we have gotten after having had the same ones for a very long time. And the other one, Daniel literally was one of the people that found it and paved the way for it as well. If you think about the last 10 years, those are the two new bug classes that we got. And Daniel was heavily involved in both.

Daniel Gruss  02:38

Those are nice words, though I would be a bit more humble. I mean, especially for Meltdown, I always like to emphasize that Jann Horn actually was much earlier than we were in discovering this.

Camille Morhardt  02:51

So what actually happened, could you take us back? 

Daniel Gruss  02:54

What happened back then was that several people were looking in a similar direction. From my perspective, all of this started in 2016-17, when I was looking at the prefetch instruction. This was actually, I think, the second or the third time that my research collided with what Anders did; he was working at a company in Germany back then, and he was also looking at microarchitecture and instructions and so on. But for prefetch, that was interesting: we used this software instruction on kernel memory, and somehow it did something on the kernel addresses. And that was very interesting. And from my perspective, that was kind of the start that led to all the things that followed.

So it was only a matter of time until someone realized that there are more things behind that and that you can take more control over the microarchitecture there, as you can do with speculative execution with Spectre, or if you, instead of a prefetch instruction, really try to read the kernel memory there, then during a very, very short time window you actually get the data. I think Anders labeled it as “the bug was ripe.” That is why there were so many people finding it at around the same time.

Anders Fogh  04:15

There was a lot of work being done in that space that set the stage for it. Dmitry Ponomarev did a paper on branch predictors and how they can be used for side channels. That was my secondary inspiration for working in this direction. My own work with Daniel on the prefetch was sort of my primary. It was getting ripe; there was too much research that was leading to the conclusion that there would be more, and the missing part of that equation was speculative execution, and we got that in 2017.

Camille Morhardt  04:51

So can you guys explain, for people who aren’t as familiar with it, just at a relatively high level, what is a side channel? And how is it that speculative execution is now allowing you to get to the kernel memory, which is supposed to be the most secret area of the CPU?

Daniel Gruss  05:09

I would say for this part, speculative execution is only part of the problem, or maybe one building block that can lead there. The main problem here was that the processor deliberately accesses the cache, even if the translation of the address results in an access violation. The processor finds out, yeah, it’s an illegal access, but there are still cases where the data goes from the cache line into a register where the pipeline then can continue to use it. If we want to put it in easier terms: if you go to the library and try to get a book you’re not allowed to access, then the librarian would say, “Well, you can’t get this book”–maybe you’re not old enough or something like that. And the Meltdown bug is, they still hand you the book for a very short moment in time; you can take a look. And then they realize, oh, wait a second, give that back. So there are consequences, and maybe I’m kicked out of the library. But by then I already had a peek and figured out what I wanted to figure out. And in a computer, this is one problem: I can repeat these things over and over and over again. And even if you can only get a very short peek, the computer doesn’t realize that I’m doing this all the time and getting one peek after the other.
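The repeated “peek” Daniel describes is typically recovered through a cache-timing measurement. A minimal, hypothetical sketch of that measurement building block in C (x86-specific intrinsics; the timing threshold is machine-dependent and purely illustrative):

```c
#include <stdint.h>
#include <x86intrin.h>   /* _mm_clflush, _mm_mfence, __rdtsc (GCC/Clang on x86) */

/* Illustrative sketch of the measurement half of a cache side channel:
 * time one load of an address and decide whether it was cached (fast)
 * or had to come from DRAM (slow). */
static uint64_t time_access(volatile uint8_t *addr) {
    _mm_mfence();                      /* order the timing around the load */
    uint64_t start = __rdtsc();
    (void)*addr;                       /* the probed load */
    _mm_mfence();
    return __rdtsc() - start;
}

static int was_accessed(volatile uint8_t *addr, uint64_t threshold) {
    uint64_t cycles = time_access(addr);
    _mm_clflush((const void *)addr);   /* evict again for the next round of "peeks" */
    return cycles < threshold;         /* below threshold: the line was in the cache */
}
```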

Camille Morhardt  06:29

So you’re not downloading information, but you’re accessing information.

Daniel Gruss  06:33

We’re accessing information directly. Yeah.

Camille Morhardt  06:36

What were all of the factors that were leading the world to think about this kind of an exploit that was driving both of you to be working in this space?

Anders Fogh  06:43

So side channels go back to when I was born, pretty much, so 1974, something like that. It came up every once in a while, but sort of in the mid 90s, people started realizing that this would break cryptography. And there was a whole large body of literature from 1995 to 2012 about breaking cryptography. But some of us then started asking, what else can we do with this? And Daniel has worked on keystrokes, and I’ve worked on bypassing exploit mitigations and sort of pushing the limit of what you could do with it. And that ended up opening new doors for speculative execution and Spectre, Meltdown, and pretty much everything that came after it–slowly expanding the areas where a known phenomenon hadn’t been studied before.

Camille Morhardt  07:38

And what is a side channel?

Daniel Gruss  07:43

Yeah, there are different definitions of what a side channel is; I’m constantly fighting with other academics about the definitions. A definition that I really like is that you obtain metadata–any kind of metadata that is derived from some data that you actually want to have. And although you can’t just recover the data from the metadata, because this is typically a lossy process, you can still use the metadata to infer the data. For example, if I can take a look at your network packets, I maybe cannot see what’s in there because they are encrypted. But because of the size of the network packet–which is just metadata of the actual data that is transmitted–I can infer which website you’re visiting, which video you’re watching, things like that.
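To make the packet-size example concrete, here is a toy sketch (hypothetical, not from the episode) of how an observer might match observed packet sizes against pre-recorded website fingerprints, using only metadata and never the plaintext:

```c
#include <stdio.h>

/* Toy illustration: each known site has a "fingerprint" of encrypted response
 * sizes in bytes; an observer matches what they saw on the wire against those
 * fingerprints. The site names and sizes are made up for the example. */
struct fingerprint { const char *site; int sizes[3]; };

static long score(const int *observed, const int *known, int n) {
    long s = 0;
    for (int i = 0; i < n; i++) {
        long d = observed[i] - known[i];
        s += d * d;                       /* squared distance between size sequences */
    }
    return s;
}

int main(void) {
    struct fingerprint db[] = {
        { "site-a.example", { 1420, 530, 8200 } },
        { "site-b.example", {  900, 900, 1500 } },
    };
    int observed[3] = { 1400, 560, 8100 };    /* sizes seen on the wire */
    const char *best = NULL;
    long best_score = -1;
    for (size_t i = 0; i < sizeof db / sizeof db[0]; i++) {
        long s = score(observed, db[i].sizes, 3);
        if (best_score < 0 || s < best_score) { best_score = s; best = db[i].site; }
    }
    printf("best metadata match: %s\n", best);
    return 0;
}
```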

Camille Morhardt  08:35

And then from there, are you looking for an easier path to kind of hack in or gain more information about me, or are you stopping with the metadata?

Anders Fogh  8:44

You extrapolate from the metadata to data, right? So what speculative execution does is it gives you control over what is basically leaked into the channel. And once you have control, you can start putting things into side channels that correlate directly with data. Imagine having me break into your car and read your odometer, right? Now, you go for a ride after I did that, and come back, and I read your odometer again. Now I can start arguing about where you went, right? So this is how we go from the metadata on your odometer to actually figuring out where you were.

Daniel Gruss  9:23

But with speculative execution, as you just said, you get control over it. So instead of now just saying, oh, maybe the person went here or there, what we actually then can do, for a very short time, is say, “okay, if the secret is a one, let the car drive for one kilometre; if the secret was a two, drive two kilometres.”

Anders Fogh  9:44

I can make you take a different route. 

Daniel Gruss  9:47

Yes. 

Anders Fogh  9:48

Basically, that is speculative execution. And that allows me much better to say, from your odometer, where you actually went. Say you go a kilometre north, and then the trip becomes shorter–well, then I know you went in that direction. If you go south, your trip gets longer, and I know you wanted to go north. And that is what speculative execution does for the example, right? It allows me to get you to do detours.

 Camille Morhardt  10:14

So just to take a really specific example: if you’re looking at the size of a network packet, are you trying to discover, say, what website I visited?

Daniel Gruss  10:23

For instance, yeah. 

Camille Morhardt  10:24

How exactly are you honing in on that? And what is the ultimate goal, then? Is it to get access to my account on the website, or just to know what website I’m visiting? Or?

 Daniel Gruss  10:35

It always depends, right? My interest as a researcher, or professor at a university, is not to hack into anyone’s system, but to show the problems, show what malicious actors could be doing. For a side channel in general, I would say, if I’m figuring out what website you’re visiting, maybe that’s already valuable information. But it always depends on what I can do with the specific side channel. And there might be side channels that I can use to leak passwords or password hashes or other types of credentials, and then use that to gain access to a system.

Camille Morhardt  11:10

So kind of moving forward, what are the types of attacks that you guys are looking at now?

Daniel Gruss  11:16

I think at one point I would like to pick up on what Anders mentioned, that a lot of momentum was picked up in the 90s and then also in the 2000s. This moved from “there are side channels, but no one is using them, or only very few actors are using them, because it’s not clear what they are useful for,” towards use cases where we use cryptography every day–chip cards, for instance, authentication methods, banking–and then the movement towards the internet. We had the internet where suddenly adversaries have access to millions of machines, to basically every person on the planet if they want to. So the question then is more about the threat model: why were all these attacks not discovered in, let’s say, the 80s or 90s? Even if they had existed by the technology, they would not have been relevant, because there was no threat model; there was no way to get malicious code at that scale to the computers that we have today. Today, if I write a malware, it’s comparatively easy to roll it out to even hundreds of thousands of machines.

Camille Morhardt  12:22

Is that because all of the machines are running on the same architecture or because all of the machines are plugged into the cloud or the internet?

Daniel Gruss  12:30

I think the main culprit here is the Internet. The Internet was a mistake, right? We wouldn’t have all these attacks; we would have basically no ransomware attacks if we didn’t have the internet.

Camille Morhardt  12:42

Is there an option to create a second internet that doesn’t have the same kind of access opportunities?

Anders Fogh  12:50

I would say no. The entire story is more in the direction of: there is only so much we can do for security, right? If you take the analog example, people have been producing locks for the past, what, 5,000 years by now, starting with the ancient Egyptians; and people found out how to pick them, and you can still pick almost all locks pretty easily. It has gotten a lot more difficult, for sure–the Egyptians’ locks were pretty primitive. And that’s what we’re seeing on the internet as well, right? In the beginning, hacking on the internet was not a big deal. And now it has gotten a lot harder, but we’re not getting rid of lockpicking.

Daniel Gruss  13:25 

Yeah, not in 6,000 years, huh? Yeah, I would agree. Asking about, like, what is the next thing? I think we will continue with this trend of remote attacks; I think we will see more and more remote attacks. And that is the challenge that I would see that we need to tackle. People say remote for a lot of different things: remote could be someone just sending you network packets, remote could be JavaScript running in your browser, remote could even be an application that you install from a remote server. So there’s a lot of variety in what a remote adversary can do and what kind of access it has to your machine.

Anders Fogh  14:18

I think that’s true. And I think where side channels have had the most impact so far is local attacks. Spectre and Meltdown are mostly local attacks. For many of the researchers out there, the holy grail is to come up with practical attacks. And I’m not sure that people are there; they’re moving in that direction. We’ll see what can be done in that space, for sure. Yep. And what can be prevented.

Daniel Gruss  14:46

But in that sense, even for Meltdown and Spectre, you could argue whether they are irrelevant. Of course, if you have a remote adversary, the adversary will probably not be sitting in your room.

Anders Fogh  14:57

True, but he would get local access. He has to have access to a computer first, right? So you need a chain of things–

Daniel Gruss  15:06

Which brings us back to the question of what does remote actually mean, if I have JavaScript that I execute on the computer is it still a remote attack? People label it as remote attacks. But actually, I have local code execution inside the JavaScript sandbox.

Anders Fogh  15:20

I don’t think there’s one definition, right? It’s more like a degree of remote–degrees of separation, really, right? There are many degrees of that kind of separation happening over the internet.

Daniel Gruss  15:35

For instance, also in the cloud. On your personal computers, you have JavaScript that is sort of isolated from the rest. And in the cloud, you have these virtual machines that are isolated from each other. It’s a pretty similar scenario there, and we are facing the same problems. This is the other part where side channels have been exploited a lot: in cloud scenarios, from one virtual machine to the other.

Camille Morhardt  15:55

Because the infrastructure is sitting in proximity to itself. And then you have different companies with different information or different individuals with different private information but in physical proximity.

Anders Fogh  16:07

The story about side channels is a story about shared resources. Yes, right. So in my example with your odometer before, I had access to your odometer; we shared it. I could look at it, and so could you. And you were using it, and I was looking at it, and from that I inferred information about where you went. That’s sort of the story of side channels. So the more resources you share with other people, the more you will leak data from using those resources.

Daniel Gruss  16:35

Now, the obvious solution then is we don’t share resources anymore. But that is very easily said and very difficult to do, because resources can be very abstract. There have been side channels, for instance, where you have two computers sitting in the same room, and because of the computations on one computer and how they influence the room temperature, the other computer can sense this change in the room temperature and can infer some secret that was transmitted. In this case, the room is the shared resource. So what do we want to do? Not have any cloud service anymore with virtual machines on the same server, and then also no data centers anymore, but every server in a separate physical location? It would get ridiculous at some point.

Anders Fogh  17:22

One has to acknowledge that sharing resources, if they are not fully utilized, is economically much more viable than duplicating resources. And that’s sort of the high-level point about why we’re never getting rid of side channels; we’re not willing or capable of just separating everything. And that means that side channels will always be there. That does not mean that side channels cannot be managed as a security problem, right? We got rid of Meltdown. We don’t have that anymore. That is basically us managing how we do things, right? So we’re not getting rid of side channels, but we can manage them. In fact, I think we can do a pretty good job of managing them.

Camille Morhardt  18:03

So what are the key ways to manage them?

Anders Fogh  18:06

I think a lot of it has to do with control over the side channels. The more control an attacker gets, the more he’s capable of figuring out what you’re doing. And preventing control is something that can be done: access control mechanisms can be done, there is randomization, noise–there’s a ton of things you can do.
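One hypothetical reading of “randomization and noise” as a defense: add random jitter to the observable so the attacker needs many more measurements. A minimal C sketch (illustrative only; a real mitigation would use a proper random source and ideally pad to a constant worst-case time):

```c
#include <stdlib.h>

/* Illustrative sketch: pad an operation with a random delay so that its
 * externally visible latency no longer tracks the secret-dependent work.
 * rand() stands in for a real random source here. */
static void random_delay(unsigned max_iters) {
    volatile unsigned sink = 0;
    unsigned iters = (unsigned)rand() % max_iters;
    for (unsigned i = 0; i < iters; i++) sink += i;   /* busy-wait noise */
}

void guarded_operation(void (*op)(void)) {
    op();                 /* the secret-dependent work */
    random_delay(10000);  /* add noise on top of the timing signal */
}
```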

Daniel Gruss  18:28

These are best studied actually in the area of cryptography, commonly discussed as masking and hiding. Hiding is basically where you increase the noise floor relative to the signal, or reduce the signal relative to the noise floor. And masking works a bit differently, where you split up the secret, so even if you leak something, you would only have half of the secret and you can’t really do much with it. Even there, it’s still possible to mount an attack; you just increase the costs for an attack–the number of traces, the number of measurements that you need to perform, the amount of time that it takes, even the cost for the analysis, the computation cost. This typically increases quadratically, or exponentially if you go to higher-order masking. But it’s only an increase; an attack will always be possible. And the even worse part is that these techniques are pretty specific to specific computations. And applying that to the generic problems that we have with side channels everywhere–it’s not even really clear how we would do that.
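A minimal sketch of the masking idea Daniel describes, assuming simple first-order Boolean masking: the secret is split into two random shares, and a leak of either share alone reveals nothing.

```c
#include <stdint.h>
#include <stdlib.h>

/* First-order Boolean masking sketch (illustrative only): split a secret byte
 * into two shares whose XOR reconstructs it. Observing a single share leaks
 * nothing, because each share alone is uniformly random.
 * rand() stands in for a proper random source. */
typedef struct { uint8_t share0, share1; } masked_byte;

static masked_byte mask(uint8_t secret) {
    masked_byte m;
    m.share0 = (uint8_t)rand();        /* random mask */
    m.share1 = secret ^ m.share0;      /* secret hidden behind the mask */
    return m;
}

static uint8_t unmask(masked_byte m) {
    return m.share0 ^ m.share1;        /* recombine only when actually needed */
}

/* XOR of two masked values can even be computed share-by-share,
 * without ever recombining the secrets. */
static masked_byte masked_xor(masked_byte a, masked_byte b) {
    masked_byte r = { a.share0 ^ b.share0, a.share1 ^ b.share1 };
    return r;
}
```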

Anders Fogh  19:34

That said, there’s tons of nuance here, right? So we killed Meltdown. And that was because we had an implementation that we could change, and it would just completely break that particular side channel. So these are things that are very often practical, and sometimes not–it depends on the details, the goal of the technology, and all these kinds of things. And sometimes we can actually completely break these things. But as a general rule, we share resources, and that is sort of the key problem; it doesn’t mean we can’t manage them, and sometimes, in some cases, kill the worst of them, right? And that’s what I spend most of my time with: killing side channels.

Camille Morhardt  20:16

Can you guys talk a little bit about your expectation for how introduction and increased use of AI is going to change, I suppose the threat landscape across software and hardware?

Daniel Gruss  20:28

I think we’re repeating the same mistakes that we made in the past. We decided at some point that it’s super convenient to mix code and data together in a computer system, and not separate those. Now we’re doing exactly the same thing again. We’re mixing code and data that we send to these AI systems; they are already being used by websites in the background, where you can query AI chatbots, and suddenly commands and data are in the same channel and under the control of the user. We knew that we had made all these mistakes; here we are, and we’re still suffering from those decisions back then.

Anders Fogh  21:08

I think there’s lots of nuance in what AI is going to do. So AI is going to enable finding bugs more efficiently, and that plays to both sides of the equation, right? The attacker is more able to find issues that he can use, but the defender is too. There’s a larger economic question in it, right? And I think Microsoft is capable of investing significantly more in AI, and AI security technologies, than, say, your average exploit vendor. That said, the NSA will have very significant capabilities of their own, right? So it de-democratizes hacking, right, to some extent; what the net result is, I don’t know.

Camille Morhardt  21:58

I mean, is the best defense, sort of the offense, like you were saying Anders of just generating noise and using AI to kind of generate additional information or additional noise?  How do you start to defend against it when you have AI-level attacks?

Daniel Gruss  22:13

I don’t think AI will find fundamentally different attacks. AI is not generating something fundamentally different from the inputs that we provide to it–even if it can do variations of it, and even quite creative variations, if you want to call it creative. But still, all of this–like, if we are training a neural network to produce language, it will produce language that sounds like the language that was fed into it. If we feed into it how to find bugs, how to build attacks, then it will find similar bugs and attacks, and maybe do so much more efficiently than humans. That is what I would expect. But it won’t be able to find bug types or bug classes that are completely different in their nature.

There have been many attempts, also in academia, to build systems to find new bugs. Fuzzers have been very popular. And they also mutate the inputs; they mutate the test cases in a way so that they can come up with new exploit chains that then find new problems. But then again, if you run these fuzzers, after some time they don’t find a lot anymore, because they covered all the area that is there to find. And then for the new problems, especially if you take a look at side channels: if you have a fuzzer that can find one type of side channel, it won’t necessarily find other side channels. And most likely, it won’t.
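A toy sketch of the mutation step Daniel mentions (illustrative only; real fuzzers add coverage feedback, dictionaries, splicing, and much more):

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Toy mutation step of a fuzzer: copy an existing test case and apply a few
 * random bit flips and byte substitutions to derive a new one. */
void mutate(const uint8_t *seed, uint8_t *out, size_t len, int n_mutations) {
    memcpy(out, seed, len);
    for (int i = 0; i < n_mutations; i++) {
        size_t pos = (size_t)rand() % len;
        if (rand() % 2)
            out[pos] ^= (uint8_t)(1u << (rand() % 8));   /* flip one bit */
        else
            out[pos] = (uint8_t)rand();                  /* replace the whole byte */
    }
}
```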

Anders Fogh  23:45

And I’m not sure that AI is going to be such a big game changer on security. I think in some areas of security, surely, sort of finding the bugs that we do know. We have to realize AI in itself adds very, very significant new complexity, like four billion parameters in a model and adversarial attacks on that model, as well. It adds a whole new set of insecurity to the world, as well.

Daniel Gruss  24:14

Yes. Especially as we connect these to the internet now. I mean, ChatGPT can access web servers? I’m not sure that’s a good idea. I just tried that out with my own web server. And I tried out which queries do I have to send, and what is the maximum frequency with which I can reach my own web server from ChatGPT. It can really access this web server live, but it needs some convincing.

Camille Morhardt  24:37

So what do you think, Daniel of LLM or generative AI, what is going to be the approach to attack it? It is shared resources, for the most part.

Daniel Gruss  24:49

Yeah, but shared resources–the thing is, the largest part of this LLM is very static, right? This is trained by the company, and then of course they will still tweak a bit here and there. And the larger part is, they will re-train the model at some point and generate a new version. We’ve seen attempts to attack these, for instance by poisoning the data, by providing incorrect data that is hidden from the human eye. But then the LLM learns that and believes that is the truth. Sure, we will probably see that; maybe that’s the new form of search engine optimization–you can call it LLM optimization then. My company is the first thing you will find if you enter a related query on ChatGPT. But the question is, what does it really change?

Camille Morhardt  25:37

So in your opinion, should enterprises be adopting LLMs? Or does it make them too vulnerable?

Daniel Gruss  25:43

I think, yes, I think they should. But at the same time, we will probably have new kinds of employees; the same way that we at some point got quality assurance, we will then also have more quality assurance for this part.

Anders Fogh  25:57

I heard some very interesting takes on LLMs doing programming. And the take is that the difficult part of programming is not writing the code, even though that’s what people spend most of their time on today. And I think that will be taken over by AI. The question that is really important for writing code is: what problem am I solving, and what does the solution look like? And being very specific about that is what people struggle with all the time. And this is where the bugs come in, right? It’s people not being specific about what they actually want. I don’t think that LLMs are capable of helping there. How should an LLM know exactly what it is that I want, what I’m thinking of? That’s the key limitation of LLMs, if you ask me. This is not just a gold rush without gold; there is gold to be found. It is not a panacea; it will change the world, but some fundamental things will not move much.

Daniel Gruss  26:58

But then again, the most precise way to specify what you want as a programmer is to write down the code.

Anders Fogh  27:06

And you’ll find very often you don’t get what you want.

Camille Morhardt  27:10

So you guys were saying that shared resources are essentially what led to the exploitation of side channels to get additional information–shared infrastructure–and you talked about how this is just the most economically viable approach right now. Are there other approaches in development where the pendulum would sort of swing back, but in a new way, like with machine learning, where people have their own models in their own location with their own data, sort of sequestered?

Anders Fogh  27:42

It’s the entire story of compute, really. We went from IBM saying that six computers is enough for the entire world, to everybody having a computer on the wrist, and went back from that to, “hey, let’s use the cloud computer,” of which there are, what, 15 large providers or something like that, right? It goes back and forth all the time. If you think of it, the cloud is a great way to suppress side channels, because an attacker does not only need to have a side channel, he also needs to find you in the cloud. And that’s a big problem for them. There are thousands and thousands of machines in the cloud, and finding you on those machines is a very, very significant problem.

Side channels are one of these beasts. It’s an emergent property of compute, really, and of many other things, too. There are no rules or anything they follow. And they are rarely considered at design time. And this means that they change as technology changes. And when we see how a real quantum computer looks, we can start speculating about what the side channels are. And it could be better, it could be worse, it could be all kinds of things–we just don’t know.

We’re starting to learn about what side channels in current computers mean and how we can minimize their impact. And we’re getting good at it. But the side channels in computers were not designed; those are emergent properties of other things, like: hey, we can share resources and make computers more efficient and cheaper.

Daniel Gruss  29:21

I think that’s a big and relevant point; it’s about efficiency. Efficiency always has the economic component, but also, looking into the future, it’s expected that compute will consume more and more energy, predicted to be around 20-25% by 2030. We have this big problem that we need to improve efficiency. And how do we do that? By sharing hardware more efficiently, by sharing resources more efficiently. And that inherently opens up new side channels and new problems. This is not necessarily bad, but there has to be this constant battle between making things better in terms of efficiency, in economic terms, and at the same time taking care that security is still at a reasonable level. And this is a constant battle. We introduced the internet–okay, that means we also need to do more for security. We move everything to the cloud–okay, then we also need to think about what security means here. With every level that we introduce where we share something, we need to think a bit more about security.

Camille Morhardt  30:29

So what are the most important questions that researchers should be asking right now? 

Daniel Gruss  30:36

I think the biggest question that will always stay in this space of side channels is: how can I prevent the practical exploitation of the side channel while not losing the efficiency gains that I get from sharing the resource? And it’s not easy to answer, and not generic to answer. There’s always this thing when we find a new side channel, and we have to ask this question: how do we stop it without losing everything that we gained?

Anders Fogh  31:05

There are polarities to be managed in this space, and in security in general, and everybody’s doing it and doesn’t like to talk about it, but that’s the reality of things. Ransomware became a thing because of the internet, but that doesn’t mean the internet is bad. Should we figure out how we can do an internet where ransomware plays a lesser role? Absolutely. But it’s a difficult job–both technically and politically and in all dimensions. And side channels are a microcosm of that, right?

Daniel Gruss  31:35

And will we ever reach an internet where ransomware is not possible? I doubt it. Still, I think we’re moving ahead; we’re making it more difficult for ransomware to spread; we’re making it more difficult for people to develop ransomware. This is a slow process, slow progress that we make there. But as Anders said, rolling back the internet is no real alternative.

Camille Morhardt  31:59

So what do you get excited about right now, Daniel?  What are you working on late at night these days?

Daniel Gruss  32:05

Well, I’m now the head of a research group. The last two nights, I was busy with correcting exams–200 students in an information security class. I sometimes also talk about these things with my wife, and we ask ourselves, “okay, with what do we have more impact: with our research or with the teaching?” And we both agree that actually, with the teaching, we most likely have more impact on society, because these are hundreds of students every year that we train, that we show how they can build better systems, more secure systems, what best practices are out there to build complex systems. And I think that’s the bigger impact. So, correcting exams, writing grant proposals to fund my research group; then what else do we have–yes, managing the group, of course. There are people in the group, and they have different needs. They need support with paper writing, maybe, support with some other administrative tasks. Yeah. There are all kinds of things to manage.

Camille Morhardt  33:17

But what is your goal? Is it having an impact on society? What’s your ultimate goal?

Daniel Gruss  33:23

I think in the end, life is just a very short movie, and you want this movie to be nice to watch afterwards. And having a positive impact on society would be one aspect. And you can have that, for instance, by educating people. There is this thing that a lot of people say: “I want to leave the world in a better shape for my children than I found it,” right? We are doing a pretty, pretty poor job at that. Very, very poor job at that. We’re leaving the world in a technology-wise much better situation than before, maybe. But overall, we are creating a lot of problems that we are not solving. I want to bring knowledge out to the people, and this can be knowledge in the form of education or knowledge in the form of research to the community.

Camille Morhardt  34:14

Has that always been kind of your driving force, or is that… 

Daniel Gruss  34:18

For a long time. I realized already in school that I really like to teach. And then during my studies here in Graz–they have a very nice setup where, when I was in the 3rd semester of my undergraduate studies, I was already teaching smaller classes with like 30, 40 students, which is very nice, and you can learn teaching with that. And at some point, when I was finished with my masters–so I did the bachelor’s/masters–I went to the head of the institute where I was and asked him, “okay, so I really like teaching. Can I please continue?” And he was like, “no, you can’t. You finished your masters, you have to go to industry.” I was like, “yeah, but there are these older people here, also not the professors but in between, and they still do some teaching.” And he was like, “ah, you mean the PhD students?” That is the worst motivation that I ever heard to start a PhD. I started a PhD, and it worked out well. And that’s how I got here.

Camille Morhardt  35:23

How old were you when the Internet came into your life? How did you get your start? Did you grow up with computers all around you all the time?

Daniel Gruss  35:32

Yeah, a bit. So I was born in 1986, and we always had computers around. When I was in primary school, at some point we had a C64, then we had a 386. But at some point, I think that was around ’96, ’97, from that point on I had my own computer and I could do a lot of stuff there. And then ’97, ’98 was when I also had access to the Internet, and I have had it since. So maybe the age of 10, roughly.

Anders Fogh  36:08

Oh, I didn’t even make it to my twenties when I got to the Internet. I started with bulletin boards, and would that have been ’90-’92ish? So that was early on, but it wasn’t the Internet. Well, the Internet existed, but it was not–

Daniel Gruss  36:26

It’s kind of the Internet.

Anders Fogh 36:28  

It was the real precursor for the Internet, really. So I’ve been a computer nerd pretty much as long as I can remember. My dad was sort of the original generation of computer nerds. He still tells the story about how great it was with his first calculator. So it’s just been a part of my life always. I just grew up with computers.

Camille Morhardt  36:49

And you grew up in Denmark, right?

Anders Fogh 36:51 

Right. 

Camille Morhardt  36:52

I have a little bit of a non-sequitur for you guys. As we start housing more of humanity’s critical infrastructure off the Earth, right? I look at satellites and, you know, we’re also looking to build permanent base on the moon so we can do further space exploration. How do you see securing that environment or mitigating a problem that might occur in those kinds of environments?

Anders Fogh 37:20

Yeah. That’s really two questions. The first question, I would say, is how do we protect it? And that’s about protecting the links, right? Nobody’s gonna take a trip to the moon to break into a computer, but there will be some kind of link to use the computer on the moon or wherever it is, right? And that’s gonna be a challenge. I know people that work on satellite security, and you don’t just reboot a satellite, and that makes for a lot of problems. So that’s gonna be a real challenge. The other is the environment. The environment is very hostile to how computers work. There are large temperature swings. There is a lot of radiation out in space. Those are really the enemies of computers, and we’re gonna need to fundamentally redesign some of the things that we do in computers to make it happen.

Daniel Gruss  38:17

I think this sort of aligns a bit with security. At least I’m trying to do that–maybe I haven’t convinced Anders yet; but one thing that we are trying is, well, we have had these reliability mechanisms in computers for a long time, for instance error correcting codes. And I believe that we can do a lot more there with security. From my perspective, we had reliability mechanisms for a long time, and the typical threat model is: let’s assume a standard scenario, no adversary; we assume there can be this many bit flips, but not more than that. The way the assumptions are made is completely different than in security.

And in security, we try to approach this in a more principled way. “Let’s assume the worst case. What is the worst case? What is the probability that someone succeeds in that case?” And I think a bit of that thinking could also help in those scenarios. If I know that even an intelligent adversary will not succeed, or will only succeed with a super low probability, then I think we can also benefit from that in situations where there is no adversary.
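A minimal sketch of the reliability idea Daniel mentions, using a simple 3x repetition code rather than the ECC actually used in memory hardware (illustrative only):

```c
#include <stdint.h>

/* Illustrative error-correcting sketch: keep three copies of a word and
 * majority-vote each bit on read, so a single bit flip in any one copy is
 * silently corrected. Real DRAM ECC uses far more efficient codes
 * (e.g., SECDED Hamming), but the principle is similar. */
typedef struct { uint32_t copy[3]; } protected_word;

static void protected_write(protected_word *p, uint32_t value) {
    p->copy[0] = p->copy[1] = p->copy[2] = value;
}

static uint32_t protected_read(const protected_word *p) {
    /* Bitwise majority of three words: a bit is 1 iff at least two copies agree. */
    return (p->copy[0] & p->copy[1]) |
           (p->copy[0] & p->copy[2]) |
           (p->copy[1] & p->copy[2]);
}
```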

Anders Fogh  39:28

I think of reliability as a subset of security. I think–

Daniel Gruss  39:31

Yes.  But don’t tell it to everyone. Some people really hate that statement.

Anders Fogh  39:38

For sure. I think Rowhammer is the thing that really hammered that point home, right? If you can’t rely on the components in your computer, you can’t rely on your computer.

Daniel Gruss  39:49

Yes.

Anders Fogh  39:50

But in the end, it’s gonna be a question of probabilities, right? The sun has a solar storm, and the radiation climbs above what is reliable, and then weird things happen.

Daniel Gruss  40:02

Yeah. And, I mean, they have more specialized hardware up there, that is clear. But even in that situation, I’d rather have a system that doesn’t continue with corrupted memory and takes a bit longer for the computations, than the other way around.

Camille Morhardt  40:18

We’ve been talking with Anders Fogh, Intel fellow and security researcher as well as Daniel Gruss, professor at Technical University of Graz and also a world-renowned hacker. We’ve been all over the map with security and vulnerabilities and anticipated vulnerabilities. Thank you both so very much for your time. 

Anders Fogh  40:39

Thank you.

Daniel Gruss  40:40

Thank you.
