InTechnology Podcast

What That Means with Camille: Energy Efficiency in the Cloud (148)

In this episode of What That Means, Camille gets into energy efficiency in the cloud with Lily Looi, Intel Fellow. The conversation covers how data centers consume energy and how to improve sustainability and energy efficiency in data centers.

To find the transcription of this podcast, scroll to the bottom of the page.

To find more episodes of InTechnology, visit our homepage. To read more about cybersecurity, sustainability, and technology topics, visit our blog.

The views and opinions expressed are those of the guests and author and do not necessarily reflect the official policy or position of Intel Corporation.

Follow our hosts Tom Garrison @tommgarrison and Camille @morhardt.

Learn more about Intel Cybersecurity and the Intel Compute Lifecycle Assurance (CLA) program.

How Data Centers Consume Energy

In the last five years, data centers have gone from consuming 1% of global electricity to now 4%—and growing. The need for more data centers is rapidly increasing due to new AI needs, cloud computing and cloud storage for business, and new technologies that rely heavily on the vast amounts of data that data centers can process. This exponential growth of data centers has naturally led to more electricity consumption and the need for more efficient use of that energy.

Lily explains how in every data center, energy consumption can be broken down into three distinct parts: cooling, powering the servers, and running the platforms. Cooling refers to cooling the data center buildings through air conditioning, while servers also need dedicated cooling methods. The need for stronger processing power means more energy is needed to run the servers, and the software that runs on the servers and platform can also affect how much energy is consumed.

Improving Energy Efficiency in Data Centers

It’s clear that data centers aren’t going anywhere and will only become more necessary as technology advances. This means engineers need to get smarter about how data centers consume energy. Some power-saving methods already in place include turning off servers when they’re not being used, lowering the operating frequency of servers that need to stay on but don’t need to run at 100% utilization, and load balancing to run workloads where and when renewable energy is abundant. There are also many innovations in cooling methods for data centers and servers, such as liquid or immersion cooling.
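As a rough illustration of the first two methods, here is a toy power-management policy. The thresholds and state names are invented for this sketch and are not from the episode:

```python
# Toy power-management policy for a fleet of servers.
# Thresholds and state names are illustrative only.

def power_action(utilization: float) -> str:
    """Pick a power state from a server's utilization (0.0-1.0)."""
    if utilization == 0.0:
        return "off"            # idle server: turn it off entirely
    if utilization < 0.5:
        return "low-frequency"  # light load: turn the frequency down
    return "full-speed"         # heavy load: run at full frequency

# Apply the policy across a small (made-up) fleet.
fleet = {"web-1": 0.9, "web-2": 0.3, "batch-1": 0.0}
plan = {name: power_action(u) for name, u in fleet.items()}
print(plan)  # {'web-1': 'full-speed', 'web-2': 'low-frequency', 'batch-1': 'off'}
```

As Lily notes later in the episode, a real policy also has to weigh the cost of waking a server back up against the savings of turning it off; this sketch ignores that wake-up latency.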

The main drivers of improving energy efficiency are reduced operating costs from using less electricity, improved overall performance efficiency of data centers, and a reduced carbon footprint. When it comes to carbon footprint reduction, optimizing for energy efficiency can take place at the hardware, software, individual chip, platform, and entire data center levels. Lily notes that the goal of these efforts is not to stop data center use but to make sure data centers prioritize energy efficiency now and in the future, for better performance and better sustainability.

Lily Looi, Intel Fellow


Lily Looi is an Intel Fellow in the Data Platform Group, as well as the Chief Power Architect of Intel’s Xeon product line. She has been with Intel for over 30 years, starting as a Component Design Engineer in 1990 and serving in a number of different engineering roles since. Lily has an engineering degree from the University of Michigan, Ann Arbor, and she holds over 60 patents.


[00:00:36] Camille Morhardt: Hi, and welcome to today’s episode of InTechnology, What That Means: Energy Efficiency in the Cloud, where we’ll get into power consumption in the data center with Intel Fellow Lily Looi. Lily Looi is Chief Power Architect of Intel’s Xeon product line.

Okay, so we were gonna talk about power consumption in the data center–servers, basically. And servers exist on premise within companies; they exist in cloud environments–which means essentially server farms that sit outside of companies–and then they exist in hyperscaler environments: these sort of massive, massive server farms housing tons of information.

Just to set the context, can you describe, basically, what kind of information even goes into a data center just at a very, very high level? Because I wanna know that before we get to how much power consumption it even uses in the world.

[00:01:37] Lily Looi: If you think of the internet, it’s all run off of different data centers. So if you’re shopping, you would be logging into a data center and picking out your product, putting in your order securely, getting your financial information across. So it’s really anything you can think of on the internet.

Another would be if you’re renting out a cloud instance and you want to use it like your own server to process your company’s records or whatever–you can rent out a cloud instance from a data center. It’s really pretty broad in terms of what goes in and out of that, but really everything comes in through the network and it gets processed and stored off somewhere. So there’s a lot of data flowing back and forth, and a lot of data being processed.

[00:02:21] Camille Morhardt: Okay. So some data’s processed at the endpoint or my PC or on my mobile phone. But there’s also a lot of data that’s being processed actually inside the data center.

[00:02:30] Lily Looi: Right, right.

[00:02:31] Camille Morhardt: Okay. Not just stored there, but also processed there.

[00:02:34] Lily Looi: Right. And actually anything, like if you talk to Siri, that’s actually processed at the data center.

[00:02:41] Camille Morhardt: Oh really? So I’ve heard numbers like 1%, 2%, maybe it’s grown to as much as 4%. These are just Google searches, but of the total global power consumption in the world, what are the numbers that we tend to agree on as a range for the amount that data centers use?

[00:03:03] Lily Looi: Yeah, I think most people think the number’s about 4%. So 4% of the global electricity is being used by data centers today; and that’s up from 1% from maybe five years ago.  So it’s growing and it’s expected to continue growing.

[00:03:19] Camille Morhardt: Uh, how is that power being used? Is it in processing or is it in cooling?

[00:03:23] Lily Looi: So if you look at a, a data center, it’s got a big building, and they have to cool it–air conditioning, all that–that is about a third of the data center’s power. The rest goes to the many, many servers that are in that data center: the CPU and the cooling of the CPU is another third of the power, and then the final third is for the remainder of the platform–the network and the storage.

[00:03:49] Camille Morhardt: Okay. And so, um, is it being addressed? I mean, it seems like the amount of data center processing over the last, you said, I don’t know, five years or something–it’s grown to 4% of power consumption. I imagine a lot more than a 4% increase in the amount of workloads being processed in the cloud within that same timeframe.

[00:04:11] Lily Looi: Definitely. And yeah, the pandemic just accelerated that, right–in terms of not just data being processed and people doing online shopping and online YouTube and searches, but also in terms of the amount of machine learning and AI that’s being done, of which Siri is one example.

[00:04:29] Camille Morhardt: Mm-hmm.

[00:04:30] Lily Looi: So yeah, the amount of compute–and I don’t have an exact number–the amount of compute has grown tremendously, more than that 1% to 4%. The performance of the servers that we’re putting out has grown by leaps and bounds, and the energy efficiency has been improving, but not at the same rate.

[00:04:48] Camille Morhardt: Not at the same rate as the performance, you mean?

[00:04:50] Lily Looi: Right, right. But just in terms of the amount of work a server can do, it’s grown a lot in that same time.

[00:04:57] Camille Morhardt: So what has been the drive to actually reduce the consumption of energy, if there has been one? Is it purely economical? Is there another kind of drive? Like, what are the motivating factors in optimizing that? I know with, like, mobile devices, battery life is huge.

[00:05:11] Lily Looi: Yeah, yeah, for sure. So for a data center standpoint, reduced electricity means you can put more servers into that same data center. And so the geographic footprint matters, and of course, electricity, you have to pay for it and at these rates it’s significant, the amount of electricity. And then finally electricity that you’re burning if you’re using any kind of carbon to generate the electricity–which is still, you know, a lot of the electricity in the planet generates carbon–that’s adding to the carbon footprint. And sometimes you need to pay a carbon offset and of, of course, you know, you wanna save the planet.  So from a good for the planet standpoint, less electricity is good.

[00:05:52] Camille Morhardt: And what are the main ways that this is being addressed? I thank you for breaking down the thirds–and you also didn’t mention software, but maybe it plays a part in controlling all of those thirds as well.

[00:06:03] Lily Looi: Oh, absolutely. Software is a big influence. So if you’re running power-hungry software, it would be burning that power in both the CPU and in the rest of the platform–the power delivery and the networks and things like that. So yeah, software is what is running on the data center, and inefficient software will burn more power. Good software will be more efficient. And then software for sustainability, or for reducing carbon footprint, can actually help the problem. There are different tiers of software.

[00:06:34] Camille Morhardt: Well, take us through each one of these. I mean, take us through the CPU and the cooling of the CPU. Like, what is being done on that level to help address that or optimize power consumption?

[00:06:44] Lily Looi: Okay, sure. So a CPU is very complex. In a data center, each CPU has lots and lots of cores, and it has, you know, a memory interface and an IO interface and lots of cache. So it’s, it’s very complex. And the thing is, first, you want it to be running most efficiently while it’s in full operation. And for that, there are circuit techniques and there are process techniques; but a lot of the time, pieces of it are not being run at full speed, or maybe not even being used.

For example, you may not be using every single core all the time. So one of the very common ways to save power is to turn it off if you’re not using it. Just like in your house: if you’re not using the lights in a room, you turn off the light. So that is one power saving method.

Another power saving method is, if you’re using it lightly, turn it down. So instead of running at full frequency, maybe run at a lower frequency. Now, with all of these power saving techniques, of course, you have to be careful not to just turn things off and turn things down to the point where it takes a while to turn them back on. It has to be weighed against whatever kind of performance impact there could be. You want the intelligence to know when it’s okay to turn it off and when it’s not. So, like, if you just left your room for a minute to go grab something and come back, you may wanna leave the light on.

[00:08:04] Camille Morhardt: Aren’t servers more efficient, though, the more of their capacity is running? Does the efficiency increase the more they’re being utilized, if that makes sense?

[00:08:14] Lily Looi: Sure, like why don’t we just run full on all the cores all the time?

[00:08:18] Camille Morhardt: Well, I’ve heard that you get better efficiency in doing that, say, than having two servers at 50%. It would be better to run one server at 100%, like in terms of power optimization.

[00:08:30] Lily Looi: In terms of efficiency? Yeah, that’s true. Although there is, there is some argument with the race to halt, where you try to finish your work and then turn it off, type of thing. But yeah, in terms of efficiency, the thing with the data center is you need to plan for the worst case. So, like, during the Christmas holiday season–or in China, you know, the big shopping seasons–you’re gonna see a spike. You wanna make sure everybody has good performance, so you’ll be running 100% there and it’ll be, you know, fully utilized.

But let’s say it’s the off season and there’s less shopping: you wouldn’t necessarily need to run everything at a hundred percent, but you’d need to have your data center set up so that you can handle that peak. And in the off seasons, maybe when there’s not so much shopping going on, or maybe at night, you could even turn some servers off or put some servers into a lower power state, and on the ones that are up, have the pieces that aren’t running full speed turned off or turned down.

[00:09:31] Camille Morhardt: Could you, like, load balance so that you’re saying, “Okay, you know, the biggest shopping in Europe is this time, the biggest shopping in China is this time, in India it’s this time”? Or do you have to have regional data centers all at the full highest capacity possible for any given shopping season, because of latency otherwise?

[00:09:50] Lily Looi: That is a good question and you could; it takes some smart software to move things around. If you’ve got a data center in India and you know there’s gonna be heavy shopping in the US, it would take software to, uh, have it go there. And that is one technique. There are some pros and cons to having it geographically so far away, um, in terms of speed.

But yeah, software can help you optimize that and move the traffic around. And that could be a way of load balancing, which would help make more efficient use of data centers.
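The kind of load balancing described here can be sketched as a toy routing function. The region names, renewable shares, latencies, and the latency cap below are all invented for illustration, not from the episode:

```python
# Toy carbon-aware load balancer: route work to the data center with the
# greenest current energy mix, while respecting a latency budget.
# Regions, numbers, and the default cap are made up for this sketch.

def pick_region(regions, max_latency_ms=150):
    """regions: list of (name, renewable_share, latency_ms) tuples."""
    usable = [r for r in regions if r[2] <= max_latency_ms]
    if not usable:  # nothing close enough: fall back to the nearest region
        return min(regions, key=lambda r: r[2])[0]
    return max(usable, key=lambda r: r[1])[0]  # greenest usable region

regions = [("us-west", 0.6, 40), ("india", 0.8, 220), ("eu-north", 0.7, 120)]
print(pick_region(regions))  # eu-north: greenest region within the latency cap
```

A real scheduler would use live grid-carbon data and many more constraints; this only shows the shape of the trade-off Lily mentions between green energy and geographic distance.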

[00:10:22] Camille Morhardt: Okay, so what about, you said another third is storage networks, other kinds of processing; how are those being addressed from a power consumption perspective?

[00:10:34] Lily Looi: My role is more on the CPU side, but a lot of the platform–the network interface and the memory–those do interact with the CPU. So there’s a lot of what I call “platform features,” where they work together to turn off, say, a link if it’s not being used, or turn it down–half width or lower speed, something like that.

[00:10:56] Camille Morhardt: Okay. And then, uh, cooling was another third. Do you know how they reduced that?

[00:11:03] Lily Looi: So there’s, there’s two pieces of cooling. One is the cooling of the CPU, and that gets lumped into that third. And there are different types of cooling. Like, you could use bigger, more efficient fans that would cool at a lower power, or liquid cooling, which is starting to come out. Or even, you know, the most extreme: immersion cooling. These are very efficient cooling methods for the CPU.

And then cooling at the data center level–that would be air conditioning– and that would be having more efficient airflow in the room or a more energy efficient air conditioning. But I have seen data centers where the air flow in the room is actually directed so that the cool air blows through and then out through a vent.

[00:11:49] Camille Morhardt: Well, a lot of the large data centers are located in physical geographic locations that have inexpensive renewable energy.

[00:11:59] Lily Looi: Mm-hmm.

[00:12:00] Camille Morhardt: And that’s one way that they’re trying to–

[00:12:02] Lily Looi: Yeah. Yeah, that’s true. I mean, you’re still using the same amount of electricity, but it’s green electricity. So, for example, somewhere with a lot of sun would have solar. Even on that front, it’s still the same amount of electricity, but it’s not the same carbon that you’re putting into the air.

And then, getting back to the topic of software orchestrating the data center and making it more efficient: if software knows the time of day, then it’ll know that, oh, the data centers in this time zone have the most sun right now, so I’ll schedule more work over there.

[00:12:38] Camille Morhardt: Is there anybody putting together some kind of modeling of that, so that you’re not sort of hard coding “this time of day in this time zone,” but you’re actually looking at, like, dynamic wind conditions or weather conditions and then load balancing based on that? Or maybe somebody’s making that and then providing APIs, so developers doing something completely different can get greener?

[00:13:01] Lily Looi: Oh, sure. Well, at Intel we have a, uh, an initiative going on with exactly that, APIs and, and standardized software to help with both the load balancing and also just writing more efficient code.  It’s not my area of expertise, but I work with the folks that are putting that together. So, yeah, it’s, it’s definitely, uh, an area under investigation.

[00:13:21] Camille Morhardt: I did interview Asim Hussain, Director of the Green Software Foundation, so we do have a conversation on that. Well, what are, like, some of– I mean, I don’t see anybody shying away from data center use at this point. Seems like we’re growing…

[00:13:37] Lily Looi: Well, we’re talking through a data center right now, so it’s in use right here.

[00:13:41] Camille Morhardt: Right. So I mean, what kinds of approaches are different people arguing in favor of? And I don’t know if they’re all mutually exclusive, but some people must be arguing, “We should be doing A, because this is the fastest way to deal with it,” or “We should be doing B, because long term, this is how we see the evolution of the data center, and this would be the best approach.” So what kinds of approaches are people arguing in favor of?

[00:14:04] Lily Looi: Well, I think most people that I, that I’ve talked to are in favor of reducing our carbon footprint through more efficient use.  I think there’s commonality there. There’s multiple techniques. You can optimize the hardware, you can optimize through software; you can optimize on an individual chip or at the platform level or at the whole data center level. There’s many different ways to optimize, and depending on your point of view, you’ll optimize that piece. But there is a general consensus that we want to make it more efficient when we look at the trends of data center electricity.

Now, on top of that, some companies, if they say they’re carbon neutral, it doesn’t mean they’re not burning any power. It means that they’re also putting electricity back into the system through other means–using green electricity or buying carbon offsets, through solar or, you know, something like that.

[00:14:57] Camille Morhardt: So with all of the AI that data centers are now processing, are you using AI in optimizing power consumption for them?

[00:15:04] Lily Looi: Oh, sure. One of the best examples is, as you get ready to start using these servers that we’ve built out, there are a lot of different knobs and buttons that you can set to exactly optimize for, like, a customer’s particular workload or usage. So one way you could tune and optimize your server is to do a bunch of extensive analysis with these different knobs and try to find the best settings, but another way is to have AI do that for you. So that is one of the techniques that we use, where AI will run some experiments, see which settings have the most impact, and continue tuning a server that way.
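The knob-tuning idea can be sketched with plain random search standing in for whatever optimizer is actually used. The knob names and the energy model below are made up for illustration; they are not real Xeon settings:

```python
# Toy version of letting an optimizer, rather than manual analysis, find knob
# settings that minimize energy for a workload. The "measurement" here is a
# synthetic energy model, not a real platform interface.
import random

def energy(knobs):
    # Pretend measurement: synthetic bowl with its minimum at
    # freq=0.6, prefetch=0.3 (both knobs normalized to 0.0-1.0).
    return (knobs["freq"] - 0.6) ** 2 + (knobs["prefetch"] - 0.3) ** 2

def tune(trials=500, seed=0):
    """Randomly sample knob settings and keep the lowest-energy one."""
    rng = random.Random(seed)
    best, best_e = None, float("inf")
    for _ in range(trials):
        knobs = {"freq": rng.random(), "prefetch": rng.random()}
        e = energy(knobs)
        if e < best_e:
            best, best_e = knobs, e
    return best

best = tune()
print(best)  # settings close to the synthetic optimum
```

Random search is just the simplest stand-in; the point is that the optimizer runs experiments and keeps whatever settings measure best, which is the workflow Lily describes.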

[00:15:45] Now the second one is probably too much of a prototype to talk about, but we can actually embed AI into algorithms to make them smarter about controlling parts of the CPU.

[00:15:58] Camille Morhardt: Do you know of any kind of common misconceptions in this space? Are, are people understanding this appropriately?

[00:16:06] Lily Looi: I think the data center and the electricity that it uses are largely out of sight, out of mind. I mean, you can see your laptop, you can see your phone. You can’t see the data center. You just know you’re logging into the internet and doing something, and it’s somewhere far away.

I think most people don’t think about how much processing power is actually in these data centers. I keep going back to the Siri example because it looks like your phone’s doing all the work, but really it’s also going to the data center and doing some processing there.

[00:16:33] Camille Morhardt: Yeah. And not only that, but they’re running federated learning on all of these kinds of endpoint devices and mobile phones that are optimizing and customizing for you directly on the end device, like the phone. But then they’re sending their weighted information back to that central model. So everything you’re doing–I mean, they’re processing voice and background noise and all kinds of stuff to improve a giant model that’s running, right?

[00:16:59] Lily Looi: Yeah, yeah. Processing my voice and deciphering what I said. That’s just one part of AI.  But the other part of AI is training a big model and you’re taking in data and training, and that’s actually very compute intensive.

[00:17:11] Camille Morhardt: What is the biggest driver right now? I would guess AI.

[00:17:15] Lily Looi: AI is a big one, but in terms of usages, things like using the cloud to manage a business are big. And, I don’t know, this may be less of a generic cloud thing, but even things like self-driving–the cars rely very much on data centers to generate the algorithms: the collecting of data when you’re driving around, and then processing that and training to help make better self-driving algorithms. That’s huge. Uh, I forget now, there was some statistic I heard where the amount of data being captured and processed is growing exponentially. So all of that means more processing power, just because of the sheer amount of data.

[00:17:59] Camille Morhardt: So what are we gonna do about that? Do we process more locally–or, again, is that just shifting where the energy consumption is occurring, not solving anything? Or do we become more efficient in the architecture? Or do we process less? Do we run some kind of a filter and say, like, “I don’t need all of that stuff processed”?

[00:18:18] Lily Looi: Well, running a filter and processing less–that’s processing. So yeah, that is one of the techniques: to break it down, break it down, so you’re processing less. I don’t think we want to stop the trend of data that’s coming in, because it’s improving quality of life and giving us more advanced usages than we ever thought possible. So we don’t wanna change that. We just wanna make sure that as we’re growing the capabilities and the performance, we are at the same time looking at making it energy efficient.

And it’s not just about performance, but it’s performance at a specific wattage. At least at Intel, we’re baking that into how we’re developing our products, looking at both power and performance and at typical utilizations.

[00:19:02] Camille Morhardt: Running different workloads, like are you optimizing different kinds of hardware too?  Would AI be done differently than some other kind of workload?

[00:19:10] Lily Looi: There’s a fine line between software and hardware, and there are certainly accelerators you can use that do a specific operation better than general purpose software would. And so in that case, adding an accelerator and putting in a software interface so that you can run it–that’s one method of being more efficient for a very specific function that you need to do.

[00:19:33] Camille Morhardt: How do you start to frame this problem? I mean, if you’re looking at power consumption and optimizing performance per watt, essentially, across servers that do everything all over the world–I mean, all kinds of different workloads. And you don’t necessarily know what workloads any given server is gonna run, but you have a lot of examples of things that they do end up running. Like, how do you even approach this problem?

[00:20:01] Lily Looi: Well, that is one of the big challenges, because there could be thousands of different things that people could run. And the best we can do is–well, we do run lots of different workloads as part of testing and qualifying our product, but when we wanna design, we pick maybe five to six workloads that are very representative, and we center around those and use them as our base.

[00:20:25] Camille Morhardt: When you’re doing the power consumption design, how far out are you sort of beginning to lay out that design and how do you have any idea what kind of workloads are gonna be in full force by the time the computer actually hits the market?

[00:20:40] Lily Looi: That’s a tough question. Uh, that’s a difficult challenge. So we do have people plugged into the consortiums that are defining the next generation of benchmarks. We try to stay on top of that, but, uh, benchmarks usually take a while to change. So that does give us some runway in terms of, uh, optimizing.

But yeah, there are always more workloads out there. And even our customers, they run different types of software, and so as part of our getting the product out the door, once they have prototypes, our customers can be running their favorite workloads, which helps us get a bigger breadth of what types of workloads are running.

But, beyond that, we just try to stay on top of it and we’ll swap in a new workload when it’s been a while and we can see there’s something coming, but it’s kind of a slow process.

[00:21:32] Camille Morhardt: I just think of how quickly new use cases come up, especially in the AI space, and I’m wondering: do you ever just go out on a limb and include sort of a wild prediction as one of the workloads–something you have no idea about, but it seems like it could possibly get very big, very quick?

[00:21:53] Lily Looi: Well, when we’re first developing a product, we can’t run software, because it’s just a bunch of code, right? We haven’t turned it into hardware. But what we can do is say, “Well, I think the software of the future is going to really push AI in this way,” and we can make a little micro just to test it out and see how it goes. Now, we don’t know how that’s gonna translate someday into some full-blown AI use case, but we do have micros, and we have benchmarks that are more limited.

So in the early days, when you don’t actually have hardware, this is what you rely on for anything that’s up and coming. Now, for the more established workloads that we use, we can collect traces and then run them on a model and predict the performance and the power.

[00:22:37] Camille Morhardt: Okay. So what is your sort of guiding principle as you go through this in your job?

[00:22:45] Lily Looi: So energy efficiency–it’s kind of a boiling-the-ocean challenge. So how do we rein this in to something that’s actionable? One of the things I’m doing is trying to focus on what we call the Golden Five. It’s the top measurements and benchmarks that we use to characterize energy efficiency, and it’s somewhat representative of different usages. So we use that as the basis: anytime we wanna push a new feature–make something more efficient, make a circuit block do something better, or give it the capability of gating off little pieces of itself that aren’t being used at the moment–everything eventually needs to roll up into these Golden Five, or show how it’s helping these Golden Five.

[00:23:31] Camille Morhardt: Right. What would you say is the hardest part of your job?

[00:23:35] Lily Looi: The data center has gone a long time with performance being the focus, and it’s only been in the last few years–since we passed the 1% global electricity mark–that energy has gotten a lot of attention. I mean, we always had some constraints: you only get so much power per socket, or so much power per rack or per server. So we’ve always had those constraints, but it’s only been more recently, past the 1% mark, that it really mattered how much we were burning, even at the less-than-100% utilization points.

[00:24:12] Camille Morhardt: So fame and glory. (laughs)

[00:24:16] Lily Looi: Well, I mean, I drive a Prius. I love going hiking in the great outdoors. I don’t wanna see all the rivers dammed up, and, you know, I certainly don’t like the wildfires that we’re now getting every year in Oregon, which seems to be a thing now. So anything we can do to reduce the carbon footprint or reduce electricity, of course, is good for the planet.

But on top of that, using less electricity and being more efficient, so that the data centers can run more efficiently and, you know, be more economical–that’s of course important too.

[00:24:51] Camille Morhardt: Very cool. Literally–haha, no pun intended.

[00:24:52] Lily Looi: Yeah (laughs).

[00:24:57] Camille Morhardt: Thank you, Lily. Uh, Lily Looi is a Fellow at Intel and in charge of power consumption for Xeon, which is Intel’s data center line. Thank you for speaking with me today.

[00:25:09] Lily Looi: Thank you.
