InTechnology Podcast

#90- What That Means with Camille: Ambient (Ubiquitous) Compute

In this episode of Cyber Security Inside What That Means, Camille takes a deep dive into ambient compute (also known as ubiquitous compute) with Moh Haghighat, Intel Fellow. The conversation covers:

  • How we are moving into an era of ambient computing, and which technologies are emerging to transition us into that era.
  • What ambient computing might look like in everyday life, such as in a mall, at a traffic light, or an evening at home.
  • How privacy plays a role in ambient computing, and what needs to happen to make sure people are protected.
  • What the long-term goals of ambient computing are and how it will reshape our daily lives.

And more. Don’t miss it!

The views and opinions expressed are those of the guests and author and do not necessarily reflect the official policy or position of Intel Corporation.

 

Here are some key takeaways:

  • There are three main eras of computing. The first was the mainframe era, where you interacted with a computer using punch cards or a dumb terminal.
  • The next era was the PC, or personal computing, era, in which you had your own personal computer and which brought technologies like smartphones and the cloud.
  • We are now in the early stages of the ambient era, where computing fades into the ambient: instead of interacting with a PC or your phone, you are interacting with the ambient itself.
  • Transitions between these eras are helped along by technologies rooted in the current era that carry elements of what is to come. For example, the web started to hint at the cloud, in that you no longer had to store everything on your PC itself.
  • The transitional technologies moving us into the ambient era include the cloud, the Internet of Things, and more. The direction we are headed is that you will no longer need to instruct a piece of technology to make something happen.
  • For example, if you planned to speak at a conference and needed to go to the store, an intelligent store would know your intention and what you would need, and specific things would then be advertised to you. Perhaps you like to minimize cost, or wear specific clothes – it would know.
  • Privacy is, of course, a major concern with ambient computing. The ambient should be able to support the level of privacy you want to uphold: there would be a mechanism to express your privacy needs, and things would happen for you accordingly.
  • For a comparison today, think about searching on Google. Right now, you don’t even have to finish searching – Google will often predict what your query will be. Imagine that with ambient computing. If you want something, the search will be there, but it will also predict what you want in advance.
  • Something that is very important is interoperability (devices working collaboratively together). AI will need to work across different devices. Discoverability is also important, while preserving privacy, so that ambient technology can work in public spaces as well.
  • For ease, devices would need to interact with each other without having a pre-prescribed program to do so. Because you move your computing with you, it will be important for devices to provide information about themselves and to take in information about other devices so they can interact.
  • Part of how this could work initially is carrying around some sort of beacon that constantly broadcasts a URL. You can decide what is on that webpage that is visible to others. Of course, how you access the rest of your information is part of the privacy that surrounds all of this.
  • Once you have your device near a beacon, it can communicate with it. Say a dress in a store has one of these beacons, and it has the price and other information about it. Your device can communicate with it without you ever being involved.
  • From there, imagine the entire mall broadcasting via these beacons: your personal AI could then optimize what you actually see, based on what it knows about you.
  • We have a long way to go to get there, but the transitional technologies have begun and ambient technology is becoming the main type of computation. It is the direction we are headed. The protocols and privacy concerns are being actively discussed and communicated about as the technology is developing.

 

Some interesting quotes from today’s episode:

“Technologies from eras could co-exist; in the PC era, we still had mainframes and we still actually have it now. And now in the ambient computing era, we would be having PCs, and we still might have main computers. But the dominant form of computing is changing.” – Moh Haghighat

“In the ambient computing era, user interface is going to be primarily AI – artificial intelligence. Ambient would be intelligent; it would know about you and there will be a lot of preparatory things that ambient could do on your behalf.” – Moh Haghighat

“The ambient knows about you, about your profile, about what you desire; when to turn the light on, when to play music for you, et cetera. And you will have the option of configuring and setting things the way you want, but in a natural way.” – Moh Haghighat

“The form of a UI is basically advancing, whether it would be through something you would get on your phone, or on your screen, or on your wall, or on your smart glass. They are all possible, but technologies have to be developed, and the best solution will be the one that survives and thrives.” – Moh Haghighat

“It is inconceivable that one particular vendor can own all the devices in the world. So the devices have to be able to work with each other, they have to be interoperable, their properties and capabilities have to be discoverable. The same way that basically a search engine could go and look at a page and figure out what is in it, your devices should be able to look around and find the information and services that are in the ambient.” – Moh Haghighat

“In the ambient era, you are moving your computing with you, and the way you will be interacting with an intelligent ambient depends on what is surrounding you, what is available to you. You may have your phone with you or not, or a display might be available to you or not. So dynamic, customizable, intelligent information for the user would be flowing around in that era.” – Moh Haghighat

“Devices and gadgets are all consumers of this information and producers of the information. And of course for that, one needs to establish a secure and private mechanism.” – Moh Haghighat

“The main notion there is having the ability of controlling the ownership and revenue out of the data, which is valuable – that is by itself a whole discussion.” – Moh Haghighat

“The big deal, I think, is that it is ambient that is becoming smart, that it’s becoming intelligent. It is actually a learning thing. And it records things about me, it knows things about me, it anticipates on my behalf. And eventually we’ll get there.” – Moh Haghighat


Intro: Welcome to What That Means with Camille Morhardt, companion episodes to the Cyber Security Inside podcast. In this series, Camille asks top technical experts to explain, in plain English, commonly used terms in their field, then dives deeper, giving you insights into the hottest topics and arguments they face. Get the definition directly from those who are defining it. Now, here is Camille Morhardt.

[00:00:36] Camille Morhardt: Welcome to today’s episode of Cyber Security Inside. We have Moh Haghighat with us. He’s a Fellow at Intel, responsible for all of Intel’s web architecture as well as software optimization around the web. We are going to talk with him about ambient compute. So welcome to the show, Moh; it is great to have you here.

[00:00:58] Moh Haghighat: Thank you so much for having me, really a privilege. 

[00:01:02] Camille Morhardt: We want to start What That Means by having you define: what is ambient compute? Could you tell us in a couple of minutes what it is, and then we’ll go into the history of it.

[00:01:12] Moh Haghighat: Ambient computing basically refers to this notion that computing is done by the ambient; essentially, computing has faded into the ambient. And instead of interacting with a device like your PC or your phone, you’re actually interacting with the ambient. The ambient has become your new computer.

If we look at the history of computing, we can recognize three distinct major eras. We had the mainframe era in the 50s, 60s, and even part of the seventies, when using a computer meant using a mainframe: a large computer center with large equipment. And you either interacted with the computer through writing programs using punch cards, or you had a sort of dumb terminal that was connected to the computer. That era then resulted in what became the PC era, or personal computing era. Then you had your own personal computer. And in the PC era came technologies like mobile/smart phones, the cloud, et cetera. And now we are at the earliest stages of the ambient computing era, where the notion of computing becomes essentially interacting with the ambient.

Now, one should note that technologies from these eras can co-exist; that is, in the PC era we still had mainframes, and we actually still have them now. And in the ambient computing era we will still have PCs, and we still might have mainframe computers, but the dominant form of computing is changing.

So transitions between these eras are facilitated by transitional technologies that you can think of as forces that move you from one era to the next. They are basically technologies that are deeply rooted in the current era, but they have a flavor, and give you a flavor, of what is going to come – like in the mainframe era came the notion of smaller computers that one could have in a small university. But it was inconceivable; a former CEO of Digital is known for having said, “Who wants to have a computer on their desk?” (Camille laughs) Then came the PC era technologies like the web, which basically meant you do not have to have everything on your PC.

Then the cloud came, which meant you don’t even have to own the part of the computer that delivers your experience; it is somewhere, and you don’t even have to know where it is. It is in the cloud. Or mobile, which basically means you can move your computer; you don’t have to have it stationary on your desk. And so cloud, mobile, web, IoT – these are the transitional technologies that are transitioning us from the personal computing era, the personal device era, to the ambient era.

[00:04:44] Camille Morhardt: I just want to make sure that everybody has a really good understanding of where we’re headed with ambient compute. So I get that with a device that sits in the kitchen or on the hearth or something at home, you can say, “Hey, you device” – be it Google or Alexa – and you can ask it to play music or maybe connect with a thermostat: “Hey, raise the temperature two degrees.” But I still have to ask it to do something. I’m still calling it to attention and then requesting something. So can you walk me through what an evening at home would look like, or what a walk down the sidewalk would look like, in a truly ambient era, before we get into what’s required technologically to make it happen?

[00:05:30] Moh Haghighat: In the ambient computing era, the UI (user interface) is going to be primarily AI – artificial intelligence. The ambient would be intelligent; it would know about you, and there would be a lot of preparatory things the ambient could do on your behalf. So these things will happen: when you come home in the evening, the garage door knows you when you come in, assuming that you will be driving yourself. Well, autonomous driving is part of ambient computing, actually. So the ambient knows about you, about your profile, about what you desire: when to turn the light on, when to play music for you, et cetera. And you will have the option of configuring and setting things the way you want, but in a natural way.

A lot of these things can basically be discovered. I can give you an example: say you want to go to a conference to give a talk; then when you go to a store, the intelligent store would know about your intention. And then the ambient would know what you’re looking for, and things would basically advertise themselves to you – you are going to this conference, it’s a formal thing, you will need this kind of clothes, and you want to minimize the cost, et cetera. And this thing is telling you, “Hey, I meet your requirement and your intention.” But for this to happen, the intent has to be captured and predicted, to facilitate things happening.

And a major concern here would be privacy. People have different levels of privacy preferences and expectations, and the ambient should be able to support them. And then you need a mechanism for expressing your level of comfort in sharing your private information, and the security of that, et cetera. And so you don’t even have to do things. Things will magically happen for you.

[00:07:48] Camille Morhardt: I want to have a little bit more precision on that. I do want us to talk about privacy as an entire conversation as part of this. But you know, it’s one thing, I suppose – let’s say I give all the permissions, and the AI is learning that when I have a conference, I like to go shopping three days before it and buy a new blouse. So is it like a proactive alert: “Hey, it’s three days before, and here are four blouses we picked out based on your prior shopping trips, and it looks like you’re speaking at the conference, so we’re going to get you a blazer top, too”? Does it present that in front of me without me knowing it? And where does that even present itself? On a screen somewhere?

[00:08:31] Moh Haghighat: Right. So all these things are possible, but at the same time you can see that there is an enormous, overwhelming amount of information with which you could be bombarded; therefore, there have to be mechanisms and services that basically filter the information and provide it to you.

To make an analogy with today: when you do a Google search, you type some keywords, and there are enormously many places that have relevant information about what you just searched. But search technology has now reached a point where what the search engines find and give you, on top of those leads, is enormously helpful. You don’t even have to finish your query; they can basically predict what you’re going to search, help you with that, and give you that information. Think about that in the ambient era. You want something; in that era, search will be there – the concept of search and matching what you intend – but it’s not going to be of this form. It could be that your autonomous car knows exactly what you want. And to your question of how you would know about these things: again, the form of the UI is basically advancing, whether it would be through something you would get on your phone, on your screen, on your wall, or on your smart glass. They are all possible, but the technologies have to be developed, and the best solution will be the one that survives and thrives and goes forward.

But one thing which is for sure is interoperability, the connection of these devices together. This is the notion of ambient computing: things have to work with each other in a collaborative, cooperative fashion. And going forward, I think when you come to AI, there is this notion of collaborative AI or cooperative AI, where things that know things about you can collaboratively provide you what you want.

This is a long-term vision of what is going to come in ambient computing. I think to get there, you need these kinds of transitional technologies and intermediate steps.

[00:11:05] Camille Morhardt: I can sort of understand this in my own home or in my own car or using collaborative devices or sensors that I’ve surrounded myself with. What happens when I enter a public space or a commons space? How are the ambient sensors and collaborative AI functioning there? Is it like for the common good or how are those decisions even being made? 

[00:11:30] Moh Haghighat: To address this question, I would like to first talk about something which, you know, is well understood, and that is basically search on the web. Now, everybody uses that, and it’s well understood, at least from the user’s point of view. The thing that makes that possible – that makes web search technologies possible – is the openness of the HTML that is on the web pages. When you go and put a page up on a server, suddenly this page is visible to essentially the entire world. You do not have to do anything special about it. And that is thanks to the search engine and the crawler. They can come and find you and your content, and the connections that you have with other pages – the links that you have there, and the links that other pages create for you – and you become basically searchable.

Now, if we go to the ambient era, we really need to be able to have something like that, and that means two major things that are still missing: interoperability and discoverability. Now, again, to make that analogy: in the seventies and early eighties, the internet existed and network protocols existed; you could connect computers, and you could FTP to a particular machine if you knew its IP address; Unix had distributed file systems, et cetera. But there was no universal language for the content. And that is what HTML essentially solved: a universal language for describing the content.

Now it is inconceivable that one particular vendor can own all the devices in the world. So the devices have to be able to work with each other, they have to be interoperable, their properties and capabilities have to be discoverable; the same way that basically a search engine could go and look at a page and figure out what is in it, your devices should be able to look around and find the information and services that are in the ambient. 

Today you can hard-code two devices to work with each other, but as you move around in the ambient, you want your intelligent devices to work with other things spontaneously, without any pre-prescribed program to do that. For that, things have to be able to provide information about themselves: what their properties are, and what values and actions they provide. We need to get to the point of a web of things, where things are interoperable and discoverable. These two requirements are the foundation of all the things I described – say you are driving, your intelligent car can query things for you. It has learned about you. It can query the things surrounding you, discover the things that match your intent, and deliver them to you in the form that is available to you.
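
A concrete way to picture "things describing themselves" is the W3C Web of Things Thing Description: a JSON document listing a device's properties, actions, and events so that a visiting device can discover them at runtime. The sketch below builds a minimal, hypothetical description for an imaginary smart lamp in Python; the field names follow the WoT TD vocabulary, but the device, its URLs, and its endpoints are illustrative assumptions, not a real product.

```python
import json

# A minimal, hypothetical W3C Web of Things "Thing Description" for an
# imaginary smart lamp. Field names follow the WoT TD vocabulary; the
# device, its URLs, and its endpoints are made up for illustration.
thing_description = {
    "@context": "https://www.w3.org/2019/wot/td/v1",
    "title": "ExampleLamp",                      # hypothetical device name
    "id": "urn:example:lamp:0001",
    "securityDefinitions": {"basic_sc": {"scheme": "basic"}},
    "security": ["basic_sc"],
    "properties": {
        "brightness": {                          # a readable/writable property
            "type": "integer",
            "minimum": 0,
            "maximum": 100,
            "forms": [{"href": "https://lamp.example.com/props/brightness"}],
        }
    },
    "actions": {
        "toggle": {                              # an invokable action
            "forms": [{"href": "https://lamp.example.com/actions/toggle"}]
        }
    },
    "events": {
        "overheated": {                          # an event the lamp can emit
            "data": {"type": "string"},
            "forms": [{"href": "https://lamp.example.com/events/overheated"}],
        }
    },
}

# A visiting device could fetch and parse this document to discover, at
# runtime, what the lamp can do -- no pre-agreed pairing code needed.
print(json.dumps(thing_description, indent=2))
```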

Now, in the ambient era you are moving your computing with you, and the way you will be interacting with an intelligent ambient depends on what is surrounding you, what is available to you. You may have your phone with you or not, or a display might be available to you or not. So dynamic, customizable, intelligent information for the user would be flowing around in that era.

And devices and gadgets are all consumers of this information and producers of this information. And of course, for that, one needs to establish a secure and private mechanism. This area of Web3 is all about that, and about you also having control over your information, as opposed to the web right now, where that part is broken and you basically don’t have control over it.

So essentially, the technology that has brought us here, building on that and on the things that we have learned, is taking us to the next era. And there are major changes happening to meet these requirements in the software architecture area, like the notion of containers and lightweight, fine-grained containers – movable computing, so that things can move around. And similarly on the architecture side, accelerators for these primitives. They are all happening as we speak.

[00:16:21] Camille Morhardt: And some of that is out there to address the different kinds of – when you say a device becomes discoverable, it then announces itself as, “Hey, I function in real time,” for example, or “I don’t function in real time,” or “I have X amount of processing power, so I could potentially process a certain amount of something right here on the edge, versus I take all my information, send it to the cloud for processing, and it comes back.” Are those the types of things that would be discoverable?

[00:16:53] Moh Haghighat: Exactly. Essentially, you can think of the ability to process – the computing itself – becoming a thing and a service that would basically say, “I am capable of doing this with so much delay.” And you would just say, “Okay, I need somebody to execute that.” And I think it will take us to compute markets, like compute auctions, and that would facilitate this. And then there is the notion of multicloud, which is happening right now on the cloud side, as in the utility area. You see, the energy that we have here is dynamically purchased from who knows where – Canada, Michigan, et cetera. It is happening in energy, fueling us, without us knowing anything about it.

To make that analogy: in the past, you’d have to have your own generator to get that, and now it comes to you. That’s why in the past the notion of ambient computing, or part of it, was called utility computing, and later it became pervasive computing or ubiquitous computing.

But I think those other terms each capture one aspect of it. Ubiquitous computing means it is everywhere; pervasive is the same thing; and utility emphasizes one aspect of it. But the notion of ambient computing is a really comprehensive one, which basically says: the main thing that is happening is that the ambient is becoming our computer, and all these devices, and the services behind them, are supporting that as the backbone of computing.

[00:18:44] Camille Morhardt: I want to go back to this other notion of going to the commons, or being in an ambient environment that’s a public place. Can you talk to us about the issue of privacy around discoverability of me and my private information in that environment? Who owns the information, or even who has custody of, or access to, information that’s gathered in those kinds of environments? And then also, how is that information being used in a collective? Just to give one example, it could be an intersection. Another one could be the temperature in an office building, where we all want a different temperature. How does it decide? So can you get into some of those other kinds of questions?

[00:19:32] Moh Haghighat: First of all, on how you can sort of collect this information and interact with it: I want to point to just one technology that Google was doing, which is particularly good. They had this project – I think it is ongoing – called the Google Physical Web. It is basically about very cheap beacons, a couple of dollars, that you can buy, and every second each one broadcasts a URL. That’s the only thing it does; it broadcasts a URL. Now, you can get this beacon and tie it to your dog. But that simple notion of a URL enables you to capture enormous, unlimited semantics in that URL, on that page; then you can have information about these things. Of course, how you access that – the privacy part, the security part – can all be controlled. But here you basically see how suddenly things that are not smart can become smart and even intelligent just through this notion of, “Okay, I have a way of telling you about myself through a URL.” And then I’m driving, and I go to the mall, and this dress has a beacon that is broadcasting its information. My smart device, whatever that is, can communicate with that beacon; it knows about the price and everything. Without me even being involved, it is just doing all the negotiation.
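
For a sense of how little machinery a Physical Web beacon needs, here is a sketch in Python that decodes an Eddystone-URL advertisement frame, the compressed-URL format those beacons broadcast over Bluetooth Low Energy. It assumes you already have the raw service-data bytes from a BLE scan (obtaining them is left out), and the sample frame at the bottom is a made-up example.

```python
# Sketch: decode an Eddystone-URL frame (the format Physical Web beacons
# broadcast over BLE). Assumes the raw service-data bytes are already in hand.

SCHEMES = {0x00: "http://www.", 0x01: "https://www.", 0x02: "http://", 0x03: "https://"}
EXPANSIONS = {
    0x00: ".com/", 0x01: ".org/", 0x02: ".edu/", 0x03: ".net/",
    0x04: ".info/", 0x05: ".biz/", 0x06: ".gov/",
    0x07: ".com", 0x08: ".org", 0x09: ".edu", 0x0A: ".net",
    0x0B: ".info", 0x0C: ".biz", 0x0D: ".gov",
}

def decode_eddystone_url(frame: bytes) -> str:
    """Turn an Eddystone-URL service-data frame into a full URL string."""
    if not frame or frame[0] != 0x10:
        raise ValueError("not an Eddystone-URL frame")
    # frame[1] is the beacon's TX power; not needed to recover the URL.
    url = SCHEMES[frame[2]]
    for byte in frame[3:]:
        url += EXPANSIONS.get(byte, chr(byte))   # expansion code or plain ASCII
    return url

# Hypothetical frame: 0x10 = URL frame type, 0xEB = TX power,
# 0x03 = "https://", then "goo.gl/abc123" spelled out in ASCII.
sample = bytes([0x10, 0xEB, 0x03]) + b"goo.gl/abc123"
print(decode_eddystone_url(sample))   # -> https://goo.gl/abc123
```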

But the point is: how do you control this private data, and who owns it, et cetera? These are all core topics of Web3, which is essentially under development. And of course, like all great, exciting technologies, there’s a lot of hype around it, but the main notion there is having the ability to control the ownership of, and revenue from, the data, which is valuable – that is by itself a whole discussion.

But I just gave you an example: if I’m going to the mall, and let’s assume the mall is like today’s mall, all these things are broadcasting. Can you optimize that so that I get things that are close to me and not far, or so that I see them when I’m looking at them and not when I don’t, or prioritize the way these things are processed? There is technology for that, like beam forming technologies–

[00:22:05] Camille Morhardt: When you say beam forming, are you referencing 5G? 

[00:22:11] Moh Haghighat: No, beam forming is basically this: when you have multiple sources of signal – an Echo device with Alexa is there and you’re talking, the TV is on, kids are talking, the dog is barking – and you want to say something, all these noises are there. How can Alexa recognize what the main signal is? Beam forming is the technology that strengthens the main signal and filters out the others by using multiple microphones. An Echo could have something like six microphones, and they use beam forming. They do a really good job of finding what the main signal is.
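
As a toy illustration of the idea Moh describes, here is a delay-and-sum beamformer sketch in Python: each microphone's signal is shifted by the delay at which the wanted talker's sound reaches it, then the shifted signals are averaged, so the talker adds up coherently while noise from other directions tends to cancel. The array, delays, and signals here are invented for the example; real devices estimate the delays and use far more sophisticated filtering.

```python
import numpy as np

def delay_and_sum(mic_signals, delays_in_samples):
    """Toy delay-and-sum beamformer.

    mic_signals: equal-length 1-D arrays, one per microphone.
    delays_in_samples: per-mic delay (in samples) at which the wanted
        source arrives at that microphone, relative to a reference mic.
    Aligning and averaging reinforces the wanted source; uncorrelated
    noise from other directions tends to average out.
    """
    aligned = [np.roll(sig, -d) for sig, d in zip(mic_signals, delays_in_samples)]
    return np.mean(aligned, axis=0)

# Invented example: a "voice" tone arriving at two mics 3 samples apart,
# buried in independent noise at each mic.
rng = np.random.default_rng(0)
t = np.arange(800)
voice = np.sin(2 * np.pi * 0.01 * t)
mic1 = voice + 0.8 * rng.standard_normal(t.size)
mic2 = np.roll(voice, 3) + 0.8 * rng.standard_normal(t.size)

enhanced = delay_and_sum([mic1, mic2], delays_in_samples=[0, 3])
# The beamformed output tracks the tone more closely than either mic alone.
print(np.corrcoef(voice, enhanced)[0, 1], np.corrcoef(voice, mic1)[0, 1])
```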

This is just one example, and the same thing exists in radar and other domains. But these are the basic core technologies that would be required to solve the problem you were hinting at, which is: with an overwhelming amount of information, how do you find the right piece?

And privacy – again, I think the Web3 discussion is around that: blockchain, Web3, and the software part of it hold this notion of decentralized compute, decentralized architecture, and so on. So a lot needs to happen to get there. But these things have started; we see these transitional technologies right now. And I think if you want to summarize the whole thing, it is just this notion that the ambient is becoming the main paradigm of computation. And again, humans are consuming that and generating some of it. Even machines are consuming it.

[00:23:55] Camille Morhardt: So I think you’re saying that things like discoverability, or an individual’s desire to not be discoverable, even in a sort of public environment – like right now I can leave my cell phone at home and I sort of feel undiscoverable, except I guess now we have cameras at intersections and on a lot of city streets, depending on where you are in the world. But is there a way to be undiscoverable? That, as well as where the data resides, or who has access to it, especially when you’re looking at a kind of public service, like an intersection or something like that, where maybe every car already has a map on it saying where the person’s going. Well, that’s very personal, private information, as is who the person is. And they would have the ability to collect that; however, that shouldn’t be of interest – what would be of interest is making sure the cars don’t collide. I think what you’re saying is that a lot of these things – discoverability, privacy, and then filtering the data or prioritizing what’s going to happen in an environment that’s shared among multiple people – all of those are under discussion now; there’s not a single protocol that we all have already agreed upon in those spaces.

[00:25:17] Moh Haghighat: Exactly. And for privacy, for example, there is in fact good research at Carnegie Mellon on the concept of a personalized privacy assistant. That is, I want to be able to describe my privacy preferences – just say, “Okay, I’m comfortable with things seeing me, or not.” And when I interact, or my devices interact, with other things in the ambient era, this may be used as my preference in the protocol of interaction with them. If a thing that is smart, and that I want to use, would require my photo to be taken and I am not comfortable with that, that negotiation will happen automatically. And essentially, I am not going to get all the information that that device provides; it has to adhere to my preference. But to get there, we need to have standards. And for standards, we need to have demonstrations of the technology. This Web of Things that I mentioned actually has a section completely on the privacy part – how you describe your privacy preferences. And with this project at Carnegie Mellon, they came up with this notion of a privacy label, like nutrition labels. So you buy a cracker or a can of soda, and it tells you about the sodium, the calories.

You want something like that for an IoT device, or for a service in general. And again, it is not just for human interaction, where I go and read the label. It is for my smart devices’ interaction – the ambient interacting with itself – where there must be a standard so that they can communicate and agree seamlessly on my behalf.
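
To make the "negotiation happens automatically" idea concrete, here is a hypothetical sketch in Python of how a personal privacy assistant might compare its owner's preferences against a device's machine-readable privacy label and decide what to consent to. Both data structures and the matching rule are invented for illustration; the real proposals (the Carnegie Mellon privacy-assistant work, the W3C Web of Things privacy discussions) are far richer.

```python
# Hypothetical sketch: a personal privacy assistant checks a device's
# machine-readable "privacy label" against its owner's stated preferences
# before agreeing to interact. Both schemas are invented for illustration.

my_preferences = {
    "camera_capture": "deny",        # never allow my photo to be taken
    "location": "ask",               # prompt me before sharing location
    "purchase_history": "allow",     # fine to use for recommendations
}

device_privacy_label = {             # what a smart kiosk declares it collects
    "device": "mall-kiosk-17",
    "collects": ["camera_capture", "location"],
    "retention_days": 30,
}

def negotiate(preferences, label):
    """Return the assistant's decision for each data practice the device declares."""
    decisions = {}
    for practice in label["collects"]:
        decisions[practice] = preferences.get(practice, "ask")
    return decisions

print(negotiate(my_preferences, device_privacy_label))
# -> {'camera_capture': 'deny', 'location': 'ask'}
# The assistant refuses camera capture outright and defers location sharing
# to its owner, so the kiosk only gets what the preferences allow.
```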

We are kind of at the beginning of the discussion around these things: prototyping, demonstrating, and ultimately standardizing. And the reason I have been so high and optimistic about it is that it is a World Wide Web Consortium standards discussion. It is not owned by any particular vendor. It is basically the same organization that standardized the web and made everything work with each other.

[00:27:50] Camille Morhardt: I expect there are going to be hierarchies of security as well among devices? So you’re not going to let the traffic light connect with some far less secure game that somebody has on their phone or something.

[00:28:04] Moh Haghighat: All these areas are in development and in discussion. But again, to just summarize the whole thing: what is the big deal? The big deal, I think, is that it is the ambient that is becoming smart, that is becoming intelligent. It actually is a learning thing. And it records things about me, it knows things about me, it anticipates on my behalf, and eventually we’ll get there. Of course, like anything else, there will be trial and error and problems, but we are headed in that direction.

[00:28:40] Camille Morhardt: My guest today has been Moh Haghighat; he’s a Fellow at Intel in charge of web architecture and software optimization. Thank you so much, Moh, for coming on the show.

[00:28:51] Moh Haghighat: Really great to be with you.
