InTechnology Podcast

#18 – What That Means with Camille: Orchestration at the Edge

In this episode of What That Means, Camille’s tackling the big and broad topic of orchestration at the edge with guest Abdul Bailey, a Principal Engineer with Intel’s Internet of Things Group.

 

Their convo touches on things like:

•  What is included in the concept of orchestration

•  What the ‘edge’ is

•  How and why the security model changes with orchestration at the edge

•  ATMs, point-of-sale systems, programmable logic controllers, windmills, oil rigs, and other uses

•  Network connectivity issues

•  Workload prioritization

•  The Cloud Native Computing Foundation

•  Machine learning

•  Computer vision

 

And more. Don’t miss it!

 

Here are some key takeaways:

•  There’s a complexity in an orchestration at the edge conversation that you won’t find in a data center style conversation because the security model is different.

•  Orchestration at the edge requires greater security because everything isn’t protected behind a wall. Some resources are out in the real world, away from those secure data centers.

•  A higher level of intelligence, reaction time, and redundancy needs to be built in with orchestration at the edge, so that when a 'parent' device fails, another device in the area can take over that role immediately.

•  Orchestration requires something that describes the workload. But it’s the tools that take over and get that work done.

•  You can’t apply the same security that’s used at the data center to the edge. You have to look at the differences and identify what needs to change.

•  The edge orchestration software space is projected to grow to a $513 million worldwide market opportunity by 2023.

 

Some interesting quotes from today’s episode:

“Orchestration is everything really. It’s the culmination of bringing together the compute, the networking, the storage, the software, the services, everything together, such that it can support that dynamic environment, where you can take workloads that have been containerized, and maybe the micro-services associated with those workloads, which have been containerized, and have the ability to distribute them across the environment.”

 

“The security model changes when you talk about orchestration at the edge. Because in the data center, you’ve got everything behind a wall, guarded, and there’s plenty of security. But now you’re talking about ATMs that are sitting out in your local store. You’ve got digital displays that are sitting at the airport. So your security model has now changed.”

 

“You definitely have workloads that need to be done. But you need to have those workloads constructed in such a way that they’re one, containerized — meaning that all of the resources needed to execute that workload are in that container and you don’t have a heavy dependency on a bunch of patches to the operating system to make it work. And once you’ve got things containerized, you want to be able to have that flexibility to understand what those resources are at that edge, and then determine where to send them to be executed.”

 

“We talked about a windmill or an oil and gas rig that’s sitting out in the middle of nowhere. If network connectivity goes down, do you want that workload—that analytics workload — to stop working just because it can’t talk to something? No. You want it to be intelligent enough so that the windmills in the area or the oil rigs in the area can continue to talk and execute their workloads, and share the analytics across them so that everything doesn’t just come to a screeching halt.”

 

“When you talk about orchestration in a data center, you’re typically leaning on more of a central server and devices connected to that central server model. But when you talk about it at the edge, you’re talking about like you described — that distributed environment.”

 

“I think we are going to continue to evolve and grow the conversation around orchestration at the edge, so that we can get to that ant farm-like model that you just described. Where there isn’t a need for a central device to constantly be telling everything in the environment what to do.”

 

“So you’re talking about different security protocols, different methods of authenticating the security that’s running on one device before it talks to another device. All of these things create a different security paradigm, and if you don’t take those into consideration, you could introduce vulnerabilities into your network.”


Camille: Hi, and welcome to What That Means: Orchestration at the Edge. Joining me today to discuss Orchestration at the Edge is Abdul Bailey.
Abdul is a Principal Engineer with Intel's Internet of Things Group. As a System Solution Architect, he is responsible for driving investment into new manageability, orchestration, and acceleration offload products. He's got almost 20 years of experience in the computer industry, and his technical breadth covers a range of computer architecture topics, from BIOS, embedded firmware, and software architecture to LAN and personal area networking, bus architecture, and input/output design. Abdul holds patents in manageability, UEFI BIOS, and wireless communications.
What I like about Abdul is his unique balance of technical depth, breadth, and communication skills. He was the lead architect on a product line we brought to market together almost a decade ago, and he brought truly innovative ideas to the table and always reminded us of the importance of security.
Abdul, it is an absolute pleasure to be talking with you again.

Abdul B: Thank you, Camille. You know, I always love engaging with you and I’m looking forward to today’s conversation.

Camille: So Abdul to get us started, can you define “Orchestration at the Edge” in under three minutes?

Abdul B: Yeah, yeah. This is going to be a lot of fun, because it's a very complex and very broad subject. When you talk about orchestration at the edge, you're really talking about two things: first, orchestration, and then secondly, the edge, right?
So let's level set on what we mean by orchestration. The topic of orchestration, which really sprung forth in the data center, is about how you plan, coordinate, and distribute not only workloads but resources, in such a way that they can be accomplished over a homogeneous or heterogeneous computing environment.
Now, what does that mean? Well, it means that you take complex workloads, for example a Netflix or a Pokemon Go augmented reality kind of workload, and you need to be able to scale that to run in various geos across the world and service customers and users on a very dynamic basis, right?
And so you need this concept of not only having the workload containerized in such a way that it can be started and stopped pretty easily, but you have to be able to distribute that workload to various resources across the globe and inside of your data center in a very high-speed kind of fashion.
The next part of your question is the edge. And the edge, in my context, is more of an Internet of Things style conversation. Instead of having these workloads running in a data center, they're going to run closer to what we call the edge. Now, the edge in the IoT space can present itself in many different ways, in many different vertical business segments. If we talk about the industrial segment, we might be talking about programmable logic controllers that are on a manufacturing line building automobiles, right? If we talk about the retail segment, you might be talking about digital displays or point-of-sale systems or ATMs in your local retail outlet.
If we talk about it in the context of medical, you might be talking about MRI machines or gateways or diagnostic tools that you might find in a hospital or in a medical office. It can also have applications in the public sector and gaming. But that whole conversation about the edge is really about how you sense and actuate and compute information as close to those edge devices as possible, in the environments that I talked about.
So when we're talking about orchestration at the edge, we're really talking about how you take those workloads that need to be done very close to those environments I described earlier, and make sure that you have the right infrastructure in place to support the networking, the computing, and the decision-making analytics, in such a way that it can be done in a rapid fashion and closer to the edge, where it needs to be done.

Camille: Fabulous. Let's dive a little deeper. I actually have 20 questions right now based on your definition. But one of them is simply: is orchestration a piece of software? Or, you mentioned networking equipment as well, is it a combination of hardware and software and protocols? Or is it just the software that helps you balance all of these things?

Abdul B: No, orchestration is everything, really. It's the culmination of bringing together the compute, the networking, the storage, the software, the services, everything together, such that it can support that dynamic environment where you can take workloads that have been containerized, and maybe the microservices associated with those workloads, which have been containerized, and have the ability to distribute them across the environment.
So for example, if you're talking about a retail outlet and you're talking about point-of-sale systems, you walk into some of them today and they might be sitting idle. They might not be doing anything because the store is not really busy right now. Well, how do you take those computing elements, those point-of-sale systems, and dynamically send them a workload for your business, so they do something while they're not in use?
And then when the customers start to come into the store and you need to turn on additional point-of-sale systems, maybe you have to pause that workload or move that workload to another device while that point-of-sale system is in use, right? So it's the entire environment, the entire set of resources that I described, storage, networking, compute, the cloud, all of it coming together to provide a rich set of resources and an environment to support the dynamic execution of workloads.
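To make that concrete, here is a minimal Python sketch of the pattern Abdul describes: opportunistically running a batch job on idle point-of-sale terminals and evicting it the moment a customer transaction starts. It's a toy in-memory model; the terminal and job names are purely illustrative, not from any real orchestrator.

```python
# Illustrative sketch only: use idle POS terminals for batch work, and
# evict that work when a customer arrives. Real orchestrators do this
# with priorities, eviction policies, and checkpointing.

class Terminal:
    def __init__(self, name):
        self.name = name
        self.busy = False           # True while serving a customer
        self.background_job = None  # batch workload currently hosted here

def place_background_job(job, terminals):
    """Run a batch job on the first terminal that isn't serving customers."""
    for t in terminals:
        if not t.busy and t.background_job is None:
            t.background_job = job
            print(f"{job} running on idle {t.name}")
            return t
    print(f"{job} queued: no idle terminal available")
    return None

def customer_arrives(terminal, terminals):
    """POS work takes priority: evict the batch job and try to move it."""
    terminal.busy = True
    if terminal.background_job is not None:
        evicted, terminal.background_job = terminal.background_job, None
        print(f"{terminal.name} now busy, evicting {evicted}")
        place_background_job(evicted, terminals)

terminals = [Terminal("pos-1"), Terminal("pos-2")]
place_background_job("nightly-inventory-analytics", terminals)
customer_arrives(terminals[0], terminals)  # the job migrates to pos-2
```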

Camille: Okay. So basically, understanding what kinds of workloads take priority and then making sure that you're optimizing the compute, meaning balancing it so that you don't have things sitting idle, and then prioritizing certain workloads. Like, if something needs to get done right now, we put this other thing on the back burner and we take care of the immediate need.

Abdul B: That's one case, very much one case. Another case is actually being able to have redundancy in the system, right? So that the important workloads you have, if for some reason a failure does occur, can be quickly started and executed someplace else. So there are lots and lots of conversations. Orchestration is a very big, broad topic.

Camille: Okay. So one question that feels like a conundrum to me, or an oxymoron even: orchestration at the edge. Because you started talking about how the edge can be very, very specific. You brought up an ATM, or a programmable logic controller on maybe a large industrial machine. But I guess if you have something that's sitting far away from anything else, how can you really balance the workload of that thing? Let's say it's a windmill and it's sitting all by itself.

Abdul B: That's a great question, Camille, and that speaks to the difficulties of orchestration at the edge, right? The fact that the resources are not necessarily as co-located as you might find in a data center. The security model changes when you talk about orchestration at the edge, right? Because in the data center, you've got everything behind a wall, guarded, and there's plenty of security. But now you're talking about ATMs that are sitting out in your local store. You've got digital displays that are sitting at the airport, right? So your security model has now changed. The resources are very different. They're now a heterogeneous set of resources. They're not all the same, right? They're not running the exact same compute with memory and storage. It's very different.
And so you need to be able to look at it: okay, you definitely have workloads that need to be done, right? But you need to have those workloads constructed in such a way that they're, one, containerized, meaning that all of the resources needed to execute that workload are in that container, and you don't have a heavy dependency on a bunch of patches to the operating system to make it work, right?
And once you've got things containerized, you want to have that flexibility to understand what those resources are at that edge, and then determine where to send them to be executed. So this is the complexity that you find in an orchestration at the edge conversation that you're not going to see in a data center style conversation.

Camille: Okay. To pile onto that, there are other things that exist at the edge, and I'm wondering how you've addressed them. Things that I'm thinking of are maybe intermittent connectivity, which might be on purpose to save battery life, or it might be because, you know, the satellite is, I don't know, on the dark side of the Earth, or some bad example, but…

Abdul B: Right. Yeah. We have plenty of cases where network connectivity is an issue. Like we talked about, if you look at the windmill, or you talk about an oil and gas rig that's sitting out in the middle of nowhere, right? If network connectivity goes down, do you want that workload, that analytics workload, to stop working just because it can't talk to something? No. You want it to be intelligent enough so that the windmills in the area or the oil rigs in the area can continue to talk and execute their workloads, and share the analytics across them, so that everything doesn't just come to a screeching halt.
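A minimal sketch of that disconnected-operation idea, assuming a toy in-memory model where "peers" and "uplink" are just flags: each node keeps computing locally, shares results with nearby peers, and flushes everything upstream once some node regains connectivity. A real deployment would add deduplication, retries, and secure peer channels.

```python
# Illustrative sketch only: keep working when the uplink is down, share
# analytics with local peers, and relay upstream when connectivity returns.

class EdgeNode:
    def __init__(self, name):
        self.name = name
        self.peers = []         # nearby nodes reachable on the local network
        self.uplink_up = False  # connectivity to the central backend
        self.pending = []       # analytics results not yet sent upstream

    def record_result(self, result):
        """Keep computing regardless of uplink state."""
        self.pending.append(result)
        for peer in self.peers:          # share with local peers
            peer.pending.append(result)
        if self.uplink_up:
            self.flush()

    def flush(self):
        for result in self.pending:
            print(f"{self.name} -> cloud: {result}")
        self.pending.clear()

a, b = EdgeNode("windmill-a"), EdgeNode("windmill-b")
a.peers, b.peers = [b], [a]
a.record_result("vibration anomaly on blade 2")  # uplink down: buffered + shared
b.uplink_up = True
b.flush()  # whichever node reconnects first relays the shared results
```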

Camille: Okay, so that kind of begs the question: do you ever do sort of distributed workloads where, rather than the central orchestration engine telling everybody what to do, you have the windmills connect to one another and sort of deal with it amongst themselves, even if their whole specific grid is not connected to a central server? I don't know why I keep saying windmill. There could be a million other things.

Abdul B: No, no, you're right, though. You're right. When you talk about orchestration in a data center, you're typically leaning on more of a central server model, with devices connected to that central server. But when you talk about it at the edge, you're talking about, like you described, that distributed environment, right? And so you need to have the ability for one device in that environment to start up as the most significant, or parent, device in the environment. But if for some reason it goes down, you need something else in the environment to switch and take on that role immediately, right? And so that level of intelligence and redundancy and reaction time has to be built into that orchestration at the edge conversation, which might not be as prevalent or demanding in the data center style model.
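Here is one way that parent-failover logic might look in miniature, sketched in Python with a simple heartbeat timeout and a fixed priority order. Production systems would use a real leader-election protocol such as Raft, which this toy deliberately does not implement; all device names and the timeout are assumptions.

```python
# Illustrative sketch only: every device watches the current parent's
# heartbeat, and the next device in priority order takes over the role
# the moment the parent's heartbeat goes stale.

import time

class Device:
    def __init__(self, name, priority):
        self.name = name
        self.priority = priority   # lower number = preferred parent
        self.alive = True
        self.last_heartbeat = time.monotonic()

    def heartbeat(self):
        if self.alive:
            self.last_heartbeat = time.monotonic()

def elect_parent(devices, timeout=1.0):
    """Highest-priority device whose heartbeat is still fresh becomes parent."""
    now = time.monotonic()
    live = [d for d in devices if now - d.last_heartbeat < timeout]
    return min(live, key=lambda d: d.priority) if live else None

devices = [Device("gateway-1", 0), Device("pos-1", 1), Device("display-1", 2)]
print("parent:", elect_parent(devices).name)  # gateway-1
devices[0].alive = False                      # parent fails...
time.sleep(1.1)                               # ...its heartbeat goes stale
for d in devices[1:]:
    d.heartbeat()
print("parent:", elect_parent(devices).name)  # pos-1 takes over immediately
```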

Camille: Does orchestration inherently mean you have to have somebody in charge? Or can you ever have a situation where everything is sort of equally figuring out what it needs to be doing?

Abdul B: Well, you do need to have something that describes that workload, right? That describes those containers that need to come together to do what it is you need done in that environment. So that persona exists. But the goal is, once you've described that workload and figured out what you need done, you would like the tools at that point to take over, right?
So there are tools like Kubernetes, Docker Swarm, Nomad, Red Hat OpenShift. There are lots of examples out there whereby once you've described that workload and kicked off these tools, they can understand what resources are out there, take a look at that workload and those resources, and then schedule things appropriately to get executed in the environment and provide the results you're looking for.
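As a rough illustration of that split between describing a workload and letting the tools place it, here is a toy Python sketch. The descriptor fields, labels, and node names are assumptions for the example, not the manifest format of Kubernetes or any of the other tools named above, which are declarative and far richer.

```python
# Illustrative sketch only: "describe the workload, let the scheduler place it".

from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    cpu_free: int                              # available CPU cores
    labels: set = field(default_factory=set)   # e.g. {"cv-accelerator"}

@dataclass
class Workload:
    name: str
    cpu_needed: int
    requires: set = field(default_factory=set)  # required node labels

def schedule(workload, nodes):
    """Pick the first node with enough free CPU and all required labels."""
    for node in nodes:
        if node.cpu_free >= workload.cpu_needed and workload.requires <= node.labels:
            node.cpu_free -= workload.cpu_needed   # reserve the resources
            return node
    return None  # nothing fits; a real scheduler would queue and retry

nodes = [
    Node("pos-terminal-1", cpu_free=2),
    Node("gateway-1", cpu_free=4, labels={"cv-accelerator"}),
]
inference = Workload("shelf-inspection", cpu_needed=2, requires={"cv-accelerator"})
placed = schedule(inference, nodes)
print(f"{inference.name} -> {placed.name if placed else 'unschedulable'}")
```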

Camille: But it's never going to work like an ant colony where everybody knows, because in theory you could have each device that's part of this network or community understand what its own prioritization is. Like, if there's an ATM request, I shut down everything else and that's what I deal with; that's my priority. And then allow it to kind of pick and choose based on what it's good at, right? "Either I have a very powerful processor or I don't have a very powerful processor, so I'm going to pick a light workload." Or "I'm on intermittent connectivity, so I'm only going to pick little tiny bits of things at a time." But you're never having that happen in a more federated model; there's always going to be some kind of orchestration software that's centrally or …

Abdul B: We will continue to strive towards that holy grail. I mean, I've seen environments where you can have the scheduler understand that there's a workload that needs specific computer vision acceleration hardware for it to execute, right? And maybe that's a limited resource in the environment. But then you could come along and add another piece of hardware to that environment, add it to the cluster, and now you've got two resources that have this capability. The scheduler is intelligent enough to understand that it's now received additional resources, and it can now distribute those workloads to more than just that one machine it had access to previously, right?
So I think we are going to continue to evolve and grow the conversation around orchestration at the edge so that we can get to that ant farm-like model you just described, where there isn't a need for a central device to constantly be telling everything in the environment what to do, right? It's more about setting policy; that policy gets deployed in the immediate environment, and then the environment is responding to that policy and accomplishing those tasks.
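A tiny sketch of that capacity dynamic, assuming a toy queue-and-slots model: workloads that need a computer vision accelerator simply wait in a queue, and when a node with that hardware joins the cluster, the scheduler drains the queue on its own, with no central device micromanaging each placement. The names and one-slot-per-node assumption are illustrative.

```python
# Illustrative sketch only: pending accelerator workloads get scheduled
# automatically as soon as matching hardware joins the cluster.

from collections import deque

pending = deque(["defect-detection", "face-redaction"])  # both need a CV accelerator
cv_nodes = ["edge-box-1"]                                # one accelerator today

def drain(pending, cv_nodes, slots_per_node=1):
    """Place as many queued CV workloads as current hardware allows."""
    capacity = len(cv_nodes) * slots_per_node
    while pending and capacity:
        job = pending.popleft()
        capacity -= 1
        print(f"scheduled {job}")
    if pending:
        print(f"still waiting: {list(pending)}")

drain(pending, cv_nodes)        # only defect-detection fits
cv_nodes.append("edge-box-2")   # a new accelerator node joins the cluster
drain(pending, cv_nodes)        # the scheduler reacts: face-redaction runs
```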

Camille: Are there any things that people who are working on orchestration at the edge are desperately waiting for? Like, once we have this kind of encryption, or once we have this kind of networking or connectivity, then it's a game changer?

Abdul B: So, you know, I can't talk about the entire industry, but I can tell you that some of the things we're focusing on are enabling capabilities that help with accelerating workloads. So, workloads that require deep learning or computer vision style capabilities, workloads that require stronger security algorithms, right? Workloads that require time-sensitive networking, workloads that require the flexibility of compute, a larger set of compute resources that can be scheduled to run things so that they can be executed in real time, right?
These are all tasks that we see as really important to the conversation of orchestration, and we're spending a lot of time enabling them in the open source ecosystem. The one we have the biggest concentration of our attention on is the Cloud Native Computing Foundation. This is one of the largest orchestration open source ecosystem plays, and it's actually very dynamic and very heavily used today by a number of contributors in this space.

Camille: So you mentioned machine learning. And I feel like that is a giant elephant in this room of orchestration at the edge. Are there a couple of themes that you're particularly interested in when it comes to machine learning and this topic?

Abdul B: Yeah. You know, one of the big ones we're focusing our attention on is computer vision, everything from putting cameras in the actual assembly and manufacturing space, so that you can do real-time analytics and inspection of things that are being manufactured for quality purposes. There's a big push for doing greater facial recognition and object recognition at the edge, right? So again, making sure that we have accelerators that can help with executing those workloads and making those decisions as close as possible to where the computing and the decision need to occur.
Even in the medical sector, right, where we've got a big need for being able to analyze the medical imaging that's occurring, and trying to diagnose and make decisions about those images right there in the moment, while the actual image is being taken, versus sending it off to someplace else and waiting for results to come back, right?
So that is one of the sub-segments in the deep learning and computer vision acceleration space where we feel there's significant demand, and there's a lot of opportunity for us to drive innovative change into the tools and into the ecosystem that's out there.

Camille: Abdul, I'd like to dive into that just a little bit more when we talk about security use cases specific to orchestration at the edge. Can you elaborate on that a little bit?

Abdul B: Yeah, yeah, no problem. When you talk about orchestration in a data center style model, a lot of the security is addressed because the workloads, the environment, are all walled off, right? You've set up a data center somewhere in the globe, and there is physical security that protects the resources and the information being exchanged there.
But when you talk about orchestration at the edge, that security model is very different. It's very dynamic. You're talking about resources that might be operating in one physical location but sharing their results with another resource in close proximity, in another location. And the physical protections of those resources are different, right? They could literally be sitting on the actual street in front of your house, or they could literally be sitting in a retail establishment while customers are walking past those devices.
So you're talking about, one, a security model that requires greater protection of the things executing in memory, and of the workloads executing on those devices. But you're also talking about a different security model for those two devices speaking to each other in real time. So you're talking about different security protocols, different methods of authenticating the security that's running on one device before it talks to another device.
So all of these things create a different security paradigm, and if you don't take them into consideration, you could introduce vulnerabilities into your network. You don't want a digital display that's showing travel times for customers at an airport, and that's now also going to execute a workload for purchasing an airline ticket, because you're trying to do that transaction as close to the edge as possible, to be vulnerable to somebody walking up to that device, plugging something in, and being able to sniff and snoop and hear what's going on on that device, right?
So those security models and those security conversations are, like I said, different, and you have to take those differences into consideration when you talk about executing and distributing those workloads in those environments.
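As a loose illustration of authenticating one device before it talks to another, here is a standard-library Python sketch of a challenge-response handshake over a pre-shared key. This is only to show the shape of the handshake; real edge deployments would more likely use mutual TLS with per-device certificates and hardware-backed key storage.

```python
# Illustrative sketch only: prove knowledge of a shared key without ever
# sending the key over the wire, using an HMAC challenge-response.

import hashlib
import hmac
import secrets

PSK = secrets.token_bytes(32)  # assume it was provisioned to both devices

def respond(challenge: bytes, key: bytes) -> bytes:
    """Compute the response the challenged device sends back."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify(challenge: bytes, response: bytes, key: bytes) -> bool:
    """Check the response with a constant-time comparison."""
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

# e.g. an ATM challenges a nearby digital display before accepting its data.
challenge = secrets.token_bytes(16)
response = respond(challenge, PSK)   # computed on the display
print("peer authenticated:", verify(challenge, response, PSK))  # on the ATM
```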

Camille: So in some cases, does that just result in going back to an older model of isolating the use case when something's really critical, like it's doing functional safety, or it would have access to personal information?

Abdul B: Yeah, you're right. You're right. Lots of times you'll actually find orchestrated workloads running inside of virtual environments, right? So even though it's not a real VM per se, it's actually a containerized workload, but it's a containerized workload running inside of a VM. Why? Because that VM can actually create the segmentation, the isolation, you need to help protect things, right?
So you're right. We are using somewhat legacy-style protection capabilities with bleeding-edge containerized, orchestrated-style workloads to get the benefits of both worlds coming together to solve problems.

Camille: But I think possibly your point is it may not be enough to have a traditional security architect who operates in a data center kind of model; you need somebody who's more familiar with different kinds of attack surfaces, different kinds of protocols for wireless communications, et cetera, somebody who's more familiar with that edge environment.

Abdul B: Yes. I think you need to get both security architects together, and they need to brainstorm what the right way is to address the issues you're going to face. I mean, in some cases, there are some environments where they don't want the data, and the analytics of the data, to leave that environment. But you have to figure out how you run a workload in a cloud-like solution, but in an on-prem style model, right? Again, those are all dynamics that need to be taken into consideration when you're talking about the security paradigm for your business.

Camille: Is there any other sort of final thing you think that people should be aware of when they’re starting to explore this concept?

Abdul B: You know, I would say you've got to take a step back and look at the bigger picture first, because the whole concept of orchestration and manageability, like I said, really started in the data center. You've got to understand that history. You've got to understand where it came from, right?
But then you've got to go and take a look at what the differences are, what the real needs are, for this conversation of orchestration at the edge, right? You can't just go and take what you've done in the data center, apply it to the edge, and say you're done. No. You've got to go understand what those nuances are for these workloads running at the edge. And then you've got to go figure out, well, what are the deltas? What are the changes that you need to drive and accomplish in some of the existing tools that are there? Or what are the new tools that you need to bring to market to help support and facilitate the evolution of IoT style workloads at the edge in an orchestrated model?
It's a time-consuming task, and you know what, if you take the time and do that due diligence, you will realize that there are plenty of opportunities in this space. IDC actually identified that by 2023, the edge orchestration software space is going to grow to a $513 million worldwide market opportunity, right? So that tells you there is a big need here. Many of our customers and the software vendors and system integrators out there are going to need assistance from companies like Intel and others to help build, and to help educate them on, these opportunities and the tools and the frameworks that are available for them to grow and evolve their businesses into these spaces over time.

Camille: Well, Abdul, this has really been a fascinating conversation. I wasn't sure that we could cover something as complicated as orchestration at the edge in 15 minutes, but I feel like at least it's a pretty decent intro. Thank you so much.

Abdul B: Oh, thank you, Camille. You know, I love talking to you. I think what you're doing with your program is fantastic. And I think any opportunity we have to help open our partners' and the community's eyes to the opportunities that are out there is very valuable for everybody involved.

Camille: Okay. Thanks again, Abdul and thanks everybody for listening. Be sure to check out other episodes of Cyber Security Inside and What That Means.
