Camille Morhardt 00:11
Hi, I’m Camille Morhardt, host of the InTechnology podcast. Today I have with me Rob Bruckner, Chief Technology Officer of the Client Computing Group. Welcome to the show.
Rob Bruckner 00:21
Thank you. Great to be here.
Camille Morhardt 00:22
So I wonder if you could actually take a second and define “client.” We said you’re Chief Technology Officer of the Client Computing Group. What is a client?
Rob Bruckner 00:31
“Client” is a term intended to mean what’s with you. Our client products are there to help you do your best work. On the mobile side, it’s with you for your productivity; it’s with you when you’re communicating with people. On the desktop side, it’s with you as a creator and for all your productive work at work, school, and home. A client is with you, a companion to the best work and the creative work you’re doing. So that’s why we call it a client.
Camille Morhardt 01:05
It sounds like you’re also making a distinction there from, like, Internet of Things devices, in the sense that it’s a partner to the human, is that—
Rob Bruckner 01:14
It’s human, and it’s got its own class of form factor. I have worked in the cell phone industry at Intel, and also at Apple before as well. That type of device is very personal and great for consuming content; it’s a great form factor for that. But the personal computer, which has been rumored to be gone again and again and again, the way you interface with it through keyboards, mice, and other types of accessories, or through your camera and audio processing, is a really great form factor for doing productive work and creative work. It brings out the best in people’s productivity and creation, across all the customers we work with. So yeah, it’s a unique form factor, it continues to have staying power, and people continue to innovate off of it.
Camille Morhardt 02:06
What kind of evolution, like, what are the next sort of form factors? And are we moving toward, like, gesture recognition off the device anytime soon?
Rob Bruckner 02:15
Wow, gesture recognition? Yes, anything’s really possible within the constraints we consider: we want to make sure new experiences grow off of already great experiences. Sometimes we do want to transform things completely, and everybody gets excited about that, but a lot of people like their personal computer. When you bring in a new usage like that, we’ll make sure you can still do it within the established power envelope for a mobile-class device, and make sure you’re not bringing too much cost into the device itself, as well. People love to see the evolution of these things, new models of user experience or of how a device might work, but they always want a really great product at a good price that can do a lot of multifunctional things like this. There’s a lot of new innovation we’ve seen recently in screen technology. If you look at some of the complications of having radical form factors, thickness is an issue with the battery itself. We want to continue to have great battery technology, but batteries aren’t doubling in capability every year, so you have to keep the battery at a certain thickness so it can fit inside the device. And in the display itself, you have a lot of constraints to work around. Some of our partners in the industry are creating things like a pure screen that you can fold. Foldable displays are pretty interesting; I think this is the first period where we’re going to see foldable displays make an impact on the form factors themselves.
All-in-ones are awesome. I have an all-in-one at my work office in Arizona. I love that it’s just there; I have a keyboard and mouse and everything’s working wonderfully, and I’m not trying to find the box and plug stuff into it. It’s all in one spot with a great built-in dock. Docking technology has been great: Thunderbolt brings really high bandwidth as an interface to connect into docks, and support for multiple displays continues to evolve. But yeah, innovation continues to happen. It may surprise us on an old device like the PC, but it continues to happen.
I think what will be really interesting in the coming years is to see how the play of AI happens with the PC. We’re just in the first stages of bringing AI to market with our Meteor Lake project at Intel, and there are other providers out there as well bringing AI forward, with Microsoft bringing a collection-of-effects package; you can see this happening when you do things like a Teams call. Other providers are also using these AI engines to start to do cleanup and improvement of your media, your audio, your imaging type of content. Beyond that, we’re also looking at how it infuses everything you experience on the PC, in the OS and software stacks; application developers and our OEMs are super eager to innovate with these types of new technologies. Possibly that brings in some new ways to look at the user experience itself, how that user experience evolves into modulating the form factor into something that may be novel and interesting. So it’s really about what people do, what people find extremely valuable, and how they become as productive as possible, or really become the creator. You start to see that evolution begin, and AI has given it a kickstart; once again, we’re going to see another big wave of innovation kicking in with AI.
Camille Morhardt 05:41
Yeah. Can you actually talk a little bit more about that? I’m interested because, with a lot of AI evolution to date, we think of the cloud, right? Giant servers and large language models and central models. If you’re saying that it’s coming to the client, then what kind of possibilities does that open up? Are we talking about distributed inferencing? Or, you know, what kinds of things are we looking at moving forward?
Rob Bruckner 06:06
Well, there’s going to be all kinds of things. I can tell you, first of all, AI has been on the client for a while; it’s been in the CPU. When you look at matrix math, which is part of how AI functions, these multiply-accumulate operations, our CPUs and other providers’ CPUs have the ability to do matrix math and run basic, or sometimes complicated, levels of AI functions; it’s just that the total operations per second are not at the same level as you can get with a GPU or a dedicated neural processing unit.
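As a rough illustration of the multiply-accumulate math Rob describes (not any particular product’s implementation), the sketch below shows a naive matrix multiply: each step of the inner loop is one multiply-accumulate, and a chip’s TOPS rating is essentially how many trillions of such operations it can do per second, with GPUs and NPUs running huge numbers of them in parallel.

```python
# Minimal sketch of the multiply-accumulate (MAC) operations behind matrix math.
# Illustrative only; real CPUs, GPUs, and NPUs run these in wide, parallel hardware units.

def matmul(a, b):
    """Multiply an (m x k) matrix by a (k x n) matrix using explicit MACs."""
    m, k, n = len(a), len(b), len(b[0])
    out = [[0.0] * n for _ in range(m)]
    macs = 0
    for i in range(m):
        for j in range(n):
            acc = 0.0
            for p in range(k):
                acc += a[i][p] * b[p][j]   # one multiply-accumulate (MAC)
                macs += 1
            out[i][j] = acc
    return out, macs

_, macs = matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]])
# A MAC is usually counted as 2 operations (a multiply plus an add), so a
# "45 TOPS" rating corresponds to roughly 22.5 trillion MACs per second.
print(macs, "MACs,", 2 * macs, "ops")
```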
What’s happened now, if you look at the evolution of what I’ve seen personally, is that cell phones have built in dedicated neural processing accelerators for some time now. You would see things like different ways to deal with security, for example, face recognition, camera image processing, real-time pipelines that bring the pixels through and process the image in more advanced ways in real time, versus storing things off and getting to them later, you know, like photo editing where you have 200 photos that you later bring in and process. So cell phones and mobile devices really started this big kick toward accelerators in a product, and they focused, like I mentioned earlier, on speech processing, listening for a “Hey Siri” kind of moment on an Apple-type device, but also imaging. So this was contained as more of a media booster, and AI in the form of deep learning networks was perfectly suited to do this very efficiently from a power perspective, doing lots of matrix math in a quick way. Part of the initial bring-forward on the client itself is to bring that into all the client products in the industry. I like to think of it as really bringing a new level of communications to the PC, as well. We saw during COVID, and even post-COVID, the PC being the video conferencing device of choice. It has a great user experience for bringing in the video itself and also letting you productively do other things while people are talking, let’s say. You know, it’s a PC, so you’re busy multitasking. I know that never happens when people are listening to me, of course, but you might be reading email, working on a doc, working on a presentation, doing some image editing, or setting up your next Spotify queue while you’re on a video conference. So what’s cool about bringing in these media effects is that the PC becomes a much more advanced communication device. Like you note, you kind of take this for granted on your phone; it just works flawlessly now. So we’re bringing these kinds of capabilities into the client itself, because it’s now such a prominent communication device for everybody in the video conference era.
Now you evolve that forward, and there’s a whole other level of acceleration happening across the CPU, GPUs, DSPs, and neural processing units. On a client device that becomes really interesting, because it’s that personal device you have with you. It has a different level of privacy versus what’s going up and down to the cloud, the security can be better contained, and the latency is lower. On a client device, all of these are more optimized for personal client usage models. This, I think, is going to bring forward something unique versus the data center. On the client we’re very heavy on inference: you take all those months and months and large numbers of dollars spent training your models, you optimize those models and bring them into the user space, those devices you’re using every single day, and reduce them to a point where application developers and OS partners can use them to bring you new user experiences. So the world of inferencing is really about the users and how we make our lives better and more productive, whereas training is more about creating the models you want to bring forward. So there’s a pretty interesting, distinguishing difference.
Now the question we have, that we’re all evaluating in the industry right now, is how much can you put on a client? How much can you afford to put on a piece of silicon while still handling battery life? Take things like memory bandwidth: we typically have two channels of memory on a high-volume class of PC product. Some of the more advanced language models, transformer-based generative AI, require a lot of memory bandwidth, and bringing more memory bandwidth into a client means spending a lot more money on memory itself. Is it worth it? Yeah, there are going to be products of ours that bring more memory bandwidth. But there are some distinguishing things you can and can’t do super well on the client, and we’re continuing to evolve the IP and technology to see how far we can take that. In other places, on desktops, if you look at developers and creators, they may do some actual training on the desktop. You have a large number of peak TOPS on a graphics card, which is, you know, potential TOPS rather than what you efficiently use, but we can actually use them. Nvidia has been doing this for a long time; they are really in the lead here with what they do with CUDA, for example, and the other graphics companies, including Intel and AMD, are also bringing forward very big AI capability utilizing these graphics solutions. So there’s no reason why you can’t do some of that training at a smaller scale.
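To make the memory bandwidth point concrete, here is a back-of-the-envelope estimate using assumed round numbers (not Intel figures): for a language model whose weights must stream from memory for every generated token, dual-channel memory bandwidth roughly caps how many tokens per second a client can produce.

```python
# Rough, assumed numbers illustrating why memory bandwidth matters on a client.
# Autoregressive decoding is often memory-bound: each new token reads roughly all weights.

channels = 2                   # typical dual-channel client memory configuration
bus_width_bytes = 8            # 64-bit channel
transfers_per_sec = 5.6e9      # e.g. DDR5-5600 (assumed)
bandwidth = channels * bus_width_bytes * transfers_per_sec   # bytes/sec

params = 7e9                   # a 7B-parameter model (assumed)
bytes_per_param = 1            # int8-quantized weights (assumed)
model_bytes = params * bytes_per_param

# Upper bound on generation speed if every token must stream the full weights.
tokens_per_sec = bandwidth / model_bytes
print(f"~{bandwidth / 1e9:.0f} GB/s -> at most ~{tokens_per_sec:.0f} tokens/sec")
```

Under these assumptions the ceiling is only around a dozen tokens per second, which is why adding memory channels or bandwidth, at extra cost, is one of the trade-offs being weighed.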
So what Intel, for example, is looking at, and all the other PC and client providers too, is how to look at the stack a little more holistically, so that a developer thinking through, “Okay, if I want to train something initially, I’m going to put it on the cloud, I’m going to spend the money on that, then I’ve got to reduce it, and how do I start to bring that forward?” can do it in a common way on the software side: software frameworks and tools. Hopefully that makes it more seamless to utilize all these resources from the server and cloud down to the edge, have a more consistent experience, and be able to rely on it instead of using fragmented tool frameworks. AI is so new and exciting that fragmentation has happened already, right? There’s a lot of “which tool should I use?” and we and others are trying to bring some sanity to that as well, so we can get those user experiences built in.
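As one hedged example of what such a common workflow can look like for a developer, the sketch below uses ONNX Runtime, chosen only as an illustration rather than anything named in the conversation: a model trained and reduced in the cloud is exported once, and the same file can then be run locally with whatever execution provider the client hardware supports (the model path and input shape here are placeholders).

```python
# Illustrative sketch: running a cloud-trained, exported model locally for inference.
# ONNX Runtime is used only as an example of a common cross-device framework;
# "reduced_model.onnx" and the input shape are hypothetical placeholders.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "reduced_model.onnx",                    # hypothetical exported/reduced model
    providers=["CPUExecutionProvider"],      # swap in a GPU/NPU provider where available
)

input_name = session.get_inputs()[0].name    # discover the model's input tensor name
x = np.random.rand(1, 3, 224, 224).astype(np.float32)   # placeholder input
outputs = session.run(None, {input_name: x})
print(outputs[0].shape)
```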
Camille Morhardt 12:33
Can you talk a little bit about privacy and security and how that’s different, or where you think it might be changing as AI comes to the client a little more enthusiastically?
Rob Bruckner 12:45
A good bit, and I think we’re all pioneering what to do here, and users vary. My view of privacy, obviously, is very high, given I don’t have a social media presence, for example; for others, you know, privacy is just, yeah, they like to have a very public persona, right? And when you look at AI on a client-class device, there are different aspects of privacy and security. One might be the security of a model. If you have a developer or an application provider that spends lots and lots of money renting out servers for many, many months to create this really valuable model, and you start to reduce it and bring it onto the client for inference-type work, how do you secure the model so that the model itself, which is intellectual property, doesn’t get exposed on the actual device? That’s money you spent to develop it; it’s your right to keep it. But there’s also a big open source market for models. You saw this happen with things like ChatGPT and Llama: something emerges as a novel usage, and then you’ve got this awesome open ecosystem of partners that start to mimic it and try to reduce it in a different way. So I think you’ll see both of these dynamics happen.
But we’re always going to have a model where somebody who puts that investment into something unique and special can have it locked down using hardware-level security for model protection. That’s difficult, because models can be very big, so we’re looking at different technologies right now to do this. Another level of security is actually utilizing the advanced capabilities of AI to bring more security to your product, whether that’s intrusion detection or some other things we’re doing with our security partners in the application world or with the OEMs themselves. So you can get more advanced techniques to determine whether or not you have a safe and secure system, using the neural engine we have in our product.
Privacy is going to be really foundationally interesting. What I mean by this is that there’s now plenty of discussion, papers, and interest at universities, as well as work in the software field, around digital twinning. If you look at inference itself, you’re receiving inference while doing different things, creating, producing, schoolwork, whatever you’re doing, and something is helping you along the way. Well, one interesting model is trying to understand you: what are you doing? What is your next action, potentially? How can you expose to the user what might be more productive for them, versus hunting for where things are, like having to go search my PC for files? What if the OS or the software developers brought in something that started to learn more about me? In essence, they’re basically twinning me to some degree. So if they’re twinning me, what do I do with that? How do I lock down the privacy of that user twin?
Camille Morhardt 15:55
Who wants the twin? My company or me?
Rob Bruckner 16:00
You, your company. I made a joke with some partners a couple of weeks ago that there will probably also be people who have a marketplace for their persona. You know, there’s a variety of people who want to lock things down for different reasons, and other people who are like, “Hey, Rob is a super productive worker; maybe I want to use Rob’s twin to help train how I can be more productive.” So I think there’s going to be a pretty wide dynamic in how people actually absorb this, but we have to be prepared and ready, because as something is being developed that learns more about you, that’s essentially your digital DNA or footprint. We’ll make sure we’re aware of privacy laws throughout the world. So it’s another wide-open field. I love this part about AI, because on the technology curve, everything new is happening at a super accelerated pace. A lot of invention and innovation is happening right now in this field, and security and privacy are very big focus areas for all of us.
Camille Morhardt 16:57
Yeah. Okay. Can you talk to me about sustainability and the PC? I’ve heard, you know, different elements, from modularity to second life to battery optimization. Where’s the industry headed? What are they worrying about or thinking about?
Rob Bruckner 17:12
Oh, sure, you’ve picked a number of them already. For the many decades that I’ve been involved in the PC industry, we’ve continued to work on the materials themselves and the energy used to create the products. Certainly, battery life is useful for all of us, but reducing the power of the products is also great for the planet. It’s not just for mobile products; keeping power in check on desktop products is also very important. So there’s a very big initiative on power, both for client and also server. Server right now is about total cost of ownership, and the energy costs are so high that we see a completely new pivot in where the optimization points are for power efficiency, bringing as many cores as possible within a power envelope versus a more uncontrolled, very high-power single thread. It’s really changed how we think about our IP design points: bringing battery life not just because it’s better for you, because you don’t like to charge, but because it actually reduces the amount of energy consumed on each charge cycle, across the lifecycle of that product.
We’re partnering with our OEMs on modular designs, of course; we have concepts we work on at Intel, I’m sure partners are doing this with OEMs as well, and I’m sure my competitors are too. Modular designs are interesting because you have that dynamic of sustainability, and all of us are looking at refresh cycles. Will people buy a new PC because it does get better? When I first entered the PC industry way back when, you’d get a new PC and it was like twice as good. Yeah, everything was twice as good, so the pace of turnover was extremely fast. Now we have things like AI emerging in the PC: how would you bring AI into an older product without having somebody throw away that product, or, hopefully, have them return it so somebody who needs that type of product can use it as well? There are types of programs that we have, and that I know OEMs are doing, where they can recycle a PC, not just purely recycling the materials, but also bringing another user to something at a lower price point while the original user brings in a new product. Modularity could give you the ability to upgrade a system partially but not fully. There are some pretty good concepts I’ve seen from some of my OEM partners that are really interesting and compelling. They are tricky to make, and there are sacrifices you accept as a user in things like the thinness of the product, and you don’t get all the new technology, but it’s good enough for you. So it’s also about the mindset of the user, what they’re buying, and the importance they place on the sustainability aspect.
What I think we’re looking at as well, at least at Intel, and I know partners are looking at this also, is how you continue to keep the user experience alive. Typically, a lot of hardware-focused companies like Intel, and some of the business work I do, are very hardware-centric, so they want to upgrade, upgrade, upgrade and bring new hardware into the PC. But software is such an important factor, and the user experience itself is dominated by the software. So on the client we’re starting to really see, across the SoC providers, the OEMs, and the operating system, the interplay of how these all work together to keep the user experience continually improving, instead of these big update cycles. Many PC users, including myself, can get frustrated by how a system just slowly decays in its capabilities and stability: it gets sluggish, the battery life gets worse, these different things happen, and you end up reloading things over and over again. How you keep the user experience maintained throughout the lifecycle of that product is something we’re all looking into carefully, and that will help sustainability as well.
Certainly, businesses like refreshes, because they get new business from that. But keeping the user with you because they enjoy and value the experience, and finding a way to monetize that versus new hardware, is something we’re all seeking to do better. If I’m just updating you forever and giving you free software forever, I can’t run a business. So I think there will hopefully be some new business models and economics that emerge from this.
Camille Morhardt 21:33
So, two final questions. Do you have a book you’ve read recently that you recommend? You said you’re an avid reader.
Rob Bruckner 21:42
Well, besides the AI books I’m reading, which might bore a lot of people, I don’t know, I tend to go back and read the same things over and over again. A Sand County Almanac is one I really like. It’s about a person with a farm up in the Midwest and some of the aspects of living close to and with the land. It’s one of my favorite books and I read it pretty often. So that was the most recent; I just finished it again.
Camille Morhardt 22:11
And I was also wondering, you mentioned you garden, so, like, what’s your favorite vegetable or flower to grow?
Rob Bruckner 22:17
Tomatoes, by far. During COVID, I was in Florida. I grew up in Florida, near the Space Center; this is all information now becoming public for somebody to build a virtual persona of me online or something. So I grew up near the Space Center, and my parents are still there. During COVID I spent time with them on their two-and-a-half-acre lot. They always had a garden; they grew up in Mississippi, so they knew farming, and we were always involved in a garden growing up, so I just fell in love with it as well. I started to get back into the garden, got all the weeds out of there, and re-provisioned the plot. Let’s see, I’ve been growing heirloom tomatoes, beets, carrots, and okra is another favorite. I grew some Brussels sprouts; that was really cool to see that plant. It’s just amazing to see little seeds nourished, sprout and grow, then harvest and recycle that plant back into the soil. Beans, all the different types of beans. So I like to experiment; I like to try stuff.
Learning things is, to me, one of the most important things in life, and gardening brought another nuance to that, which is just trying things. If something doesn’t work, try it again, you know, or take measurements of the soil and figure out what’s going on with it, or drill into watering schemes. I’m a very inquisitive person, so gardening is something I enjoy doing, and I get to enjoy the fruits of it through great food, great healthy fresh food. So yeah, that’s one of my hobbies I really enjoy.
Camille Morhardt 23:56
Well, thank you for taking a few moments to talk with us, Rob. I appreciate it.
Rob Bruckner 23:59
You’re most welcome.