[00:00:00] Camille Morhardt: Welcome to this episode of Cyber Security Inside. Today we are going to talk about developers and all aspects of what that might mean. So, What That Means: Developer. We're talking with Bill Pearson, who I got to know back when I worked in the Internet of Things Group. Bill is a Vice President in the Internet of Things Group at Intel, and he specifically focuses on developer engagements. He's got a background in software, hardware, reference designs, tools–pretty much everything there is. And he's a really fascinating person. Welcome, Bill.
[00:00:34] Bill Pearson: Thanks Camille. Glad to be here.
[00:00:34] Camille Morhardt: It’s good to have you here. Let’s start with the classic intro for What That Means, can you please give a shot at defining Developer?
[00:00:45] Bill Pearson: Yeah, I think the simplest definition is someone who builds something, and there are lots of ways of doing that. A lot of times when I hear "developers," I think software developers, people building applications that are fun and interesting. They can be consumer, they can be business; they can be front-end or back-end. But really, developers are so much more than just the software developers. You know, we've got hardware developers who are building systems and hardware pieces, hardware devices that we all use, like our phones, but also the big industrial manufacturing devices that are used to build cars, tractors and things like that. And increasingly, developers are a diverse group, with things like DevOps becoming such a key part of not just the application, but how all the pieces of software come together to create solutions.
[00:01:31] Camille Morhardt: Okay. That was very succinct for a term that I thought might not fit into any kind of a paragraph for a definition. When people become developers, they have to decide: am I software or hardware? Maybe that's blending a little bit more now. Am I developing for an iOS device or an Android device? Am I going to develop in IoT, or is it by vertical? Am I going to do applications for the Internet of Things, or applications in the financial sector, or consumer? How do they even begin to figure out where they fit in this giant world?
[00:02:08] Bill Pearson: Yeah, I think this is like an N-way problem where there is no sort of right answer or even a common answer. Most developers get trained through school or on their own, through some form of maybe programming or maybe hardware design. I think you could draw a line between hardware developers and software developers–not always clean, not always that distinct, but in general there are some different disciplines there.
What gets interesting for me is when you start talking about software developers. Oftentimes it's the technology that gets interesting for the developer, and that technology interest changes over time. Back in the day, I had people looking at C++ development–that was the thing, and so they became experts. The very best developers keep learning every day. Now those same developers might be doing Python and building AI applications. They have evolved their thinking; they've evolved their toolbox of techniques, programming languages, APIs and models that they work on.
So one choice is the technology piece. What am I interested in? What am I learning these days? What skill is going to get me a job in the industry? And then how to apply it is, I think, what you were alluding to there: am I going to apply that to consumer tech, because it gets me really excited to see a bunch of consumers using my applications? Or am I going to take some specialized route down a vertical?
For the folks who are dealing with manufacturing, as an example, you get to work in factories with big devices and robots that are moving around. It's very exciting to be in that industry; it's like no other. While it's not a million applications on somebody's phone, it is a whole great world of robotic arms and building real things that real people use every day.
There's all of this that comes together for a developer, including where they work. Sometimes they might work as a developer-for-hire, taking a contract to build something for someone else. Sometimes they're at a startup, inventing their own products and ideas that they then get to go sell to the world. I think as we look at developers, it's this complex cube: the skill set and the technology they're using to build their applications or their technology, coupled with the type of industry they might be in, the type of vertical market they might want to sell to, and then where they work and how they get their money or their reward for doing that work.
[00:04:33] Camille Morhardt: So one of the things in the security space that we always think about is training developers to break something because it doesn’t necessarily come naturally, right? Developers are building something, they’re trying to make it work. They’re looking at a specific use case and making sure the functionality is accurate. Whereas breaking something is a different mindset. So how are those two things coming together as security becomes more important?
[00:05:02] Bill Pearson: There's a set of practices around security that are starting to become more and more common. As we think about our own coding practices, a lot of times we'll run code reviews to take a look at what we built and have other people comment on it and provide feedback. And so there's a whole security development lifecycle around that. We take a look at threat modeling–what could break here–and then do security validation and penetration testing, where we deliberately try to break things and see if we can get into these environments that we're creating. I think that has started to become more commonplace than it used to be, certainly, as developers are thinking more about security and realizing that they need to at least consider it as part of the applications that they're developing.
I'll share one anecdote I heard recently that I thought was fascinating. A lot of developers are building for the cloud these days and thinking, "Hey, maybe in that cloud environment, the security is someone else's responsibility." But some data from the NSA that I saw said that misconfigured cloud resources are one of the most prevalent cloud vulnerabilities out there. And that comes from maybe me misconfiguring the cloud resources that I have.
So it's this idea of shared security as well, where not only do I need to expect that the providers of the tools and the resources that I'm using think about security, but I also need to be thinking about security in my day-to-day activities, including trying to break things and trying to penetrate my own systems to see if I am as robust as I think I should be.
[00:06:37] Camille Morhardt: Do you think that developers are in the space of making trade-offs right now between, say, user experience and security? Or have we gotten to the point where it's not an either/or, that you can have both?
[00:06:51] Bill Pearson: You certainly are always making trade-offs as a developer; it's sort of part of the life. One example I can think of, where we've recently had to trade off security versus experience, is how and when to implement simple things. Think about a login requirement: we have a system that has a login, and we require developers to go and access it. Where do you put the login? Do you want to put it at the beginning–you can't see anything at all in here until you create an account, register, and log in, and now you can see everything? Or do you want to move it, and how far in do you move it? So maybe I can see a catalog of things, but can I download any of those things? No, you have to log in to be able to download something.
And with those types of trade-offs, we actually find that we are trading off security versus experience: how easy do we make it to access the things we provide? But it also helps us answer, let's say, insightful or thoughtful questions like: what are we really trying to protect? And what's the best way to protect that? Do I need a login? And if I do, does it need to be here, or can it be somewhere else? That's a fairly simple example; there are certainly more complex ones. I do think that developers every day are trading off where and how to approach security versus how to make an experience that their customers are going to want to use.
[00:08:10] Camille Morhardt: Do you think developers are actually, for the most part, thinking about security these days, or is that still something that only people in certain fields are paying attention to?
[00:08:22] Bill Pearson: Well, the news headlines make it much easier to think about security now. At least my view of developers is that not all of them are thinking about security, but I think they all should be.
Imagining your application, your work, becoming a New York Times headline is probably a good enough reason to think about security. But developers can also be their own judge: what am I building, and what's the cost of a security breach that might happen?
We were talking before the show here about someone who has a lot of public information about themselves. They may not be willing to put in the effort to protect it because it’s probably out there anyway. Whereas, I might be much more private about some of that information for myself. And so I’m going to have a different level of security than the next person. I think that’s true for developers, which is why you see some people thinking about security more than others.
One thing, though, that is really important is to make sure that whatever stance you take on security as a developer, it's thought about from the beginning. One of the tools that my team has been working on is this thing we call Dev Cloud. It's a simple way of taking hardware that's diverse, hard to get your hands on, and maybe expensive, and putting it all in one place where we can make it available to a bunch of developers out in the world. As we started doing this, we asked ourselves a lot of security questions. What we found is that in the areas where we asked the questions upfront, we could do a great job of building those solutions, testing them, getting other people's advice on them, and reworking them as needed. And in the areas where we waited until the very last moment to start thinking about security, it was much more challenging to do the same job, right? We had to hurry to catch up, or we had to de-feature things to maintain the security that we needed to maintain.
[00:10:14] Camille Morhardt: You're in charge of engaging with developers at a big company. I'm wondering what kinds of things are best practice, state of the art–it doesn't necessarily need to be Intel. What are people doing to engage with developers? I would think we just saw a tremendous boost in desire for development in general over the last couple of years. What are some of the newer things that companies, big or small, are doing to try to get the pull there?
[00:10:44] Bill Pearson: A few things. One of them is using best-known practices. There's a lot of documentation, and a lot of people who have done this before are willing to share their implementations–security consultants, for example. One of the things that we did, as an example, is once we thought we had a product secure, we went and hired someone to come and try to break it and give us advice: did we really apply all the industry best-known methods? There are plenty of opportunities for developers to do that. And we see, just by the fact that there are consultants who do this, that there are people willing to share their expertise and knowledge, and there's more and more demand for that in the industry.
Other things that we're seeing is people really being willing to publish and talk about the BKMs that you could take and implement on your own. A simple example to think about is, "Hey, I'm going to grant root access to devices." If you're trying to implement secure practices, you won't do that, right? These are common sense, but these are also things that, when you put them together, give you a list of industry practices that are easy to go find and adopt.
We also see companies who are finding different ways of identifying the root of trust for their devices and their software. Microsoft, for example, just said, "Every device that runs Windows 11 needs to have a trusted platform module, this TPM." And so they're getting more serious about security there and making sure that the root of trust on their Windows platform is more robust than it has been in the past.
People are paying more attention to a hardware or software root of trust–a really powerful root of trust–and then using that as a basis for building out the rest of their security infrastructure. Some of that infrastructure is going to include things like secure enclaves, to go and hide information that you want to protect in a more rigorous fashion, and encryption, both while things are at rest as well as when they're in motion–when I'm moving data from one place to another. There are different types of encryption that can be used across different devices, but the net-net is that people are implementing more robust encryption rather than keeping passwords and models and algorithms out in the open like they may have done in the past.
[00:13:08] Camille Morhardt: And are developers looking for companies that are supplying them with certain kinds of experiences or certain kinds of APIs? How do you maintain relations with developers?
[00:13:19] Bill Pearson: When we talk to developers, in general there are a couple of traits that I see. One is that a lot of times they're just focused on trying to get a job done. That job might be, "I want to build a model that will do defect detection on this weld." And it's fairly unique: I'm looking for porosity in a weld so that I can check it with a camera versus a human. That's a tough job. So how do you help them create that model? How do you help them protect that model? How do you provide the performance so that they can check that model in real time as the manufacturing line moves through its cycles? All of that goes into just helping a developer get the job done.
And the questions that they ask–we try to anticipate the questions as well as the answers; sometimes we're good at it, sometimes not. But the questions that they ask range from "Hey, how do I improve the performance of this particular model?" to "How do I reduce the memory footprint of this model?" So lots of different types of questions; sometimes it's even "What's the best hardware for me to use?" Depending on what they're trying to do, the answer is going to vary all over the map.
So helping the developer get their job done is the first and, I think, most important thing. And beyond that, it'll take you in a million different directions. Because the providers of solutions–whether we're talking about VS Code or some particular AI application–the providers of tools that developers use are the ones who are going to make it easy for them to take the tool and apply it to the job that they're trying to get done. And most often that job is going to change day to day. So we're looking at a fairly robust tool that does a lot, but at the same time fits into the developer's workflow and their view of how the world works.
[00:15:09] Camille Morhardt: I feel like in the last few years there's been an increasing number of attacks, and kind of publicity around attacks, on hardware and on critical infrastructure, which can be hardware. How are you securing hardware, or helping overall security through, say, a software developer who's interested in security but has to rely on the hardware to some degree? Is it up to the developer to make sure they understand those ties into the hardware, or are hardware providers working on that?
[00:15:40] Bill Pearson: Yeah, all of the above is true. One of the things we started building is what we call reference implementations. These are solutions that are fully open and fully baked. So we can say, "Hey, let's look at this point-of-sale fraud detection solution, and we'll show how we built it"–what hardware is used, what software is used, so the full bill of materials–and then we'll provide instructions and code for how to build it and give the code to the developer so they can go build it themselves as well. And we're increasingly taking those reference implementations and helping developers run them in what we call the Dev Cloud, right? So even if, in their case, the hardware is a small edge device, they can still take that reference implementation and go run it there.
And the notion is that by showing them how to build that solution, we help get their mind around what types of hardware are needed, what types of software, how to implement security, how they might implement AI, and how those two work together. With that orientation, they have a solid example of how to implement it. From there, they can start saying, "Okay, for my application, I'm going to take these things that are similar, and I'm going to add these things that are different and unique." But they've got a great foundation to build on.
[00:16:54] Camille Morhardt: Are those offered basically by vertical? Is it like I’m operating in the Internet of Things, industrial manufacturing space, so I’m going to grab a reference design for that, or is it more of a horizontal kind of a reference design?
[00:17:08] Bill Pearson: Yeah, we've implemented them more in a vertical-specific, use-case fashion. And the reason we've done that is some of the feedback that we've gotten from developers. One of the things I love about this job is the ability to learn. I constantly have to try something, see how it works, listen to the feedback from the developers that are using it, adapt it and change. So there's this great learning loop that we're going through.
What we found is that when we can provide a developer with a specific solution to a specific problem, it resonates more with them. In fact, I'll share: we ran some tests on some of these early reference implementations, and we found that the developers could get through the mechanics of the solution–how to code it, how to implement it. But the question that came back was, "Why am I doing this?" And so when we could apply it to a particular problem that they were trying to solve–coming back to weld porosity detection or point-of-sale fraud detection–then it resonated. They said, "Oh, now I understand. That's why I'm using this model. That's why I have this application. That's why you asked me to use this hardware."
So we've created a variety of these based on common use cases that we see happening in the industry. And every day we wake up trying to say, okay, what's the next use case that a developer would be interested in, and how do we build it and test it so that it gets dialed in to something that's going to be useful for those developers?
[00:18:30] Camille Morhardt: You’ve worked with developers for a long time, right?
[00:18:32] Bill Pearson: Yes. A long time.
[00:18:34] Camille Morhardt: Have you noticed any kind of changes?
[00:18:39] Bill Pearson: I have. One of the changes I'm seeing is that a lot of developers are much more interested in AI these days. So let's start with that one. In the past, what I'd seen is a lot of developers who are kind of in their niche technology area. You might say, well, I'm a C++ developer, as we talked about before, or maybe I'm going to do something in Python or Go. Today, what I notice is that regardless of the use case, regardless of the industry, regardless of the technology, AI seems to be showing up for developers. You know, we're seeing it in the PC, where developers are trying to say, oh, can I use AI to reduce background noise in telephone calls? We see it in the data center, as we're training these large models to recognize things. You certainly see it at the edge, where a lot of the inferencing use cases are running–whether it's a car trying to detect lanes on a road, or whether it's the defect detection that we've been talking about, where I'm building something and I want to see if that weld is done correctly.
And that's changed a lot of thinking, because now with these AI implementations, you have algorithms, and these algorithms are the secret sauce, so they need to be protected, right? My models need to be secure. And that brings us around to security, where I think security has been fragmented in the past: some developers don't care at all about it, and some care a lot about it. But when we start seeing AI come into place, there's at least a much higher recognition of the need to secure the application, so that the AI is able to stay protected, stay secure, stay implemented in the way that the developer intended.
[00:20:23] Camille Morhardt: Why is that? I mean, why is AI such a catalyst? I'm guessing it's because we're gathering private information in order to create models, or IP that we don't want out anywhere else. Why is that such a driver?
[00:20:38] Bill Pearson: Yeah, I think the data is one, absolutely. When you think about why we can do AI now when we couldn't before, there are many reasons, but one of them is these large data sets that are starting to exist. I think about all the rules for what to do with data. One thing is just being careful about, "Hey, how do I protect PII–so, personal data? How do I protect sensitive data, whether it's personal or not, but sensitive to my business? How do I make sure that I'm only letting people who have a need to access that data access it?" All of these types of requirements and more require security, so the developer needs to think about it there.
The second thing is that, from a model or algorithm standpoint, once I use that data to go train a model, now we need to secure that model. Because oftentimes that's my IP, my proprietary information. So it's not just the data that I want to protect, but how I've used that data to create this AI algorithm that I'm now going to go use to drive my business.
[00:21:37] Camille Morhardt: Okay, so that's kind of all aspects. Is it the same developer working on the model as on the inference, or even on just classifying and labeling the data, or are those all different people?
[00:21:50] Bill Pearson: Yeah, it depends on the business and the model; you know, if it's a small team, it's going to be all the same people. But some of the interesting things we found–we did some research a couple of years ago on what these teams look like, these AI teams: people who are taking data sets and training models, and then people who are going and building applications on those, doing inference and such. In the past they'd largely been separate entities, where you have one person who's working with the data and they're sort of throwing it over the wall to these other teams. Today we're seeing much more integrated teams. The people may still be specialized in the work that they do, but they're part of an integrated team that's working on this particular application. And so that's a bit of a change in terms of how the folks are organized.
Whenever we approach a problem like this, one of the things that I've found really helpful is this idea of identifying the actors and sort of their journey through the development process. You were talking about data labeling, for example: who's going to do that? How does it work? What tools do they use? What's the workflow like for somebody who's going to go in and label data? And then once I've labeled that data and cleaned it up, and I want to go start training that model, what does that function look like? And then there's the question of who's doing it. If it's going to be the same person, how do you make sure the continuity of the tool chain is there, so they can easily move from one task to the other along that journey? But if it's a different person, how do we make sure that the data is easily transferred from the first person to the next one in line as they go through their tool chain?
[00:23:26] Camille Morhardt: And I suppose kept private or secure, confidential, the whole way through. Not everybody's in the same location, especially now. So you're digitally transferring that information from person to person, I guess, as you collaborate.
[00:23:39] Bill Pearson: Yeah, either transferring that information from person to person or from tool to tool, if it’s kept in the same repository, giving access to that data at the right time to those various people–around the globe, as you point out.
[00:23:51] Camille Morhardt: Has a developer, at any kind of a competition or event that you were at, ever done something with what you or your team put out, where you were expecting one thing, kind of paving the path and the direction that developers were going to take it, and you got back something completely different?
[00:24:08] Bill Pearson: We used to do hackathons with developers, where we would put new technology in front of them and say, "Hey, go figure something out with this technology." I'm always surprised at what developers do, so the answer to the question is, well, yeah, all the time. Sometimes it's seeing how creative they can be with a piece of technology. You know, one thing that strikes me: there was a gentleman who won a contest in the past year or so, and he was doing clean water detection, which was kind of cool. It was just his way of saying, "Hey, look, I want to take AI and apply it in a way that's meaningful to the world." And that perhaps shouldn't have been surprising–we use it for detecting all kinds of things, and clean water makes sense. At the same time, in some of these Internet of Things competitions that we've done, I've had developers build smart trash cans–and okay, what's that for, and why?–and use the technology for all kinds of other creative endeavors.
But to me, sort of the nature of developers is their creativity, their innovation, and their willingness to try new things and see what sticks. So you always end up then with some technology that you’ve built and then the developers are applying it in new and creative ways to their own applications. And to me, it’s one of the most fascinating and satisfying things about being in this world with developers.
[00:25:31] Camille Morhardt: Okay. So at the risk of having to speculate about things, I wanted to ask you about ethical AI. We're talking about AI, and we're talking about security in this, but also ethics, which kind of overlaps with security and privacy. Are you ever surprised at what comes up in that space?
[00:25:50] Bill Pearson: Yeah. This is a space that has been fascinating, because there are all kinds of concerns around ethics and AI that maybe you don't think of at first blush. Take, as an example, "Hey, I'm going to identify a face in a picture." There's a lot of advantage and goodness that can come from that. There's also a lot of risk. So you look at what harm can come from AI and what we're doing there. What we've had to do is build a set of principles around what it means to be ethical in AI, and how we're going to make sure that we're not building things that are going to be a detriment to society, and that the things we're building are going to be used in the way that they're intended. That's a complicated task sometimes, but it's also something that we feel is important and worth doing. So we've tried to build it into our education of ourselves, and we've built it into the license agreements as we're licensing technology to developers.
But I think more important is that we're trying to educate folks on what it means to really drive ethical AI, and how we do that together in a way that we're going to like the results at the end of the day, when this stuff gets out there into the real world.
[00:26:59] Camille Morhardt: Yeah, that's interesting. I will just let people know, in case they're interested in hearing more about it, that I interviewed Chloe Autio specifically on Responsible AI–which of course could also be called trustworthy AI or a variety of different terms–as well as Ria Cheruvu, who's the Chief Ethics Architect. I interviewed her on deep learning, and we went into conversations on AI ethics as well. So yeah, thanks for bringing that up. Very cool. Is there anything else I should be asking you about to give people a feeling for who developers are and how to work with them?
[00:27:34] Bill Pearson: I don't think there is anything else. The only thing that I might offer as advice, for people who are looking to understand more about developers, is that there are lots of places you can go and hang out with developers. Stack Overflow is the first one that comes to mind. Look at some of the questions that get asked, the types of answers that are being provided, and the way the community comes together and works with each other to solve some of these common challenges, problems, things that developers are trying to work on.
[00:28:03] Camille Morhardt: Thank you so much, Bill, for your time. I really appreciate it. Bill Pearson is Vice President of the Internet of Things Group at Intel.
[00:28:09] Bill Pearson: Thanks Camille.