InTechnology Podcast

#51 – What That Means with Camille: Responsible AI

In this episode of What That Means, Camille is joined by Chloe Autio, who works in the Public Policy Group at Intel; she sheds light on the concept of responsible AI, a governance framework that takes ethics into account in the development and regulation of emerging technologies. A fascinating and timely topic, so be sure to tune in.

 

We cover:

  • What is meant by the term “Responsible AI,” and why it’s phrased that way
  • Why diverse stakeholdership is vital to mitigate harm and make technology more inclusive
  • Who the technologies are responsible to
  • The kinds of considerations that need to be taken into account during the development process, including looking toward the past
  • Why AI shouldn’t be considered purely good or purely bad
  • The importance of transparency

… and more!  Give it a listen!

 

The views and opinions expressed are those of the guests and author and do not necessarily reflect the official policy or position of Intel Corporation.

 

Here are some key takeaways:

  • Responsible AI is essentially the idea that there is a shared collective responsibility in developing and regulating emerging technologies.
  • Having a diverse stakeholdership boosts inclusivity and fairness throughout the entire AI lifecycle, and also mitigates harm.
  • Thinking about the context in which a technology will be used or deployed is crucial to determine where the most significant impact will be felt.
  • It’s also important to understand the past so that we don’t perpetuate harmful structures in the future through these technologies.
  • Transparency is key to development, because it connotes a level of accountability.

 

Some interesting quotes from today’s episode:

“I feel like the term ethics doesn’t quite encompass all of the issues that we’re talking about when we’re thinking about governing or making AI more responsible. And for that reason, I really prefer the term responsible AI.”

 

“Safety, privacy, inclusivity, fairness. What I think is meant by responsible AI is everyone having a shared responsibility to think about all of those issues from the beginning to the end of the AI lifecycle.”

 

“I think part of this work is really trying to figure out and understand both the good and the evil to make the good all that much better and the evil all that much less.”

 

“When we’re thinking about responsibility in this space, as we move forward, we really need to think about and understand the past and how to make interventions and corrections to some of the structures and systems that have foundations that we, as a society, aren’t very proud of.”

 

“You can’t have the explainability without the transparency.”

 

“When we’re thinking about Responsible AI and, particularly the responsibility component, I think the term transparency is so much more critical, because it also has an element of accountability.”

[00:00:00] Announcer: [00:00:00] Welcome to What That Means with Camille, companion episodes to the Cyber Security Inside podcast. In this series, Camille asks top technical experts to explain, in plain English, commonly used terms in their field, then dives deeper, giving you insights into the hottest topics and arguments they face. Get the definition directly from those who are defining [00:00:30] it. Now, here is Camille Morhardt.

 

Camille Morhardt: [00:00:36] Hi, and welcome to What That Means: Responsible AI. We’ve got Chloe Autio with us today, joining us from Washington DC. She works in the Public Policy Group at Intel, where she partners closely with researchers, business leaders, legal teams, policy teams, and also customers to help shape policy and best practices for the responsible design and [00:01:00] deployment of Artificial Intelligence and other emerging technologies.

She has also worked on policy as it relates to privacy, including facial recognition. Chloe has an economics degree from UC Berkeley, where she also studied a range of topics related to technology policy, ethical data use, and the social implications of computing. Welcome Chloe.

 

Chloe Autio: [00:01:21] Thanks, Camille, for having me. Really happy to be here.

 

Camille Morhardt: [00:01:25] I’m super excited for this conversation. And I’m wondering if you can kick it off [00:01:30] by defining Responsible AI in under three minutes. And before you start, I’m also just going to ask you, or note to people listening, I’ve heard lots of different kinds of words. So I struggled a little bit with: is it Responsible AI? Is it Trustworthy AI? Is it Accountable AI?

I think there are a number of different terms floating out there. We’re going to go with Responsible AI, but maybe you can include a little bit more breadth in your definition.

 

[00:02:00] Chloe Autio: [00:02:01] Yeah, absolutely. I think that’s a great way to start the conversation with this kind of framing, because as you said, there really are so many ways to sort of talk about this issue. And when I say this issue, I mean, sort of a compilation of so many different issues that come into play when we are thinking about and talking about Responsible AI. 

When I started in this field and was studying this in school, it was really all about the ethics, like what’s ethical? And if you have [00:02:30] a decision being made in an automated way, how do we put ethical guidelines around that? Or how do we think about that ethically?

But I think as the technology evolved and developed and matured, you know, we really started to realize that trying to figure out whose ethics or what ethics to use, or, you know, how to apply this complicated term, right? Ethics, I think, means so many different things to so many different people in so many different contexts. You know, how do you apply that to this technology as it’s continually being [00:03:00] used in more places and becoming more advanced?

And so for that reason I feel like the term ethics doesn’t quite encompass all of the issues that we’re talking about when we’re thinking about governing or making AI more responsible. And for that reason, I really prefer the term Responsible AI. And what that means to me, particularly in the context of, you know, all of these other terms, is that when we’re developing these new kinds of systems and using them and thinking about all of these issues that we want to consider, [00:03:30] you know, potential for harm, bias, right? Safety, privacy, inclusivity, fairness. What I think is meant by Responsible AI is everyone having a shared responsibility to think about all of those issues from the beginning to the end of the AI lifecycle.

So it’s not just, you know, up to data scientists or engineers or just policy folks to regulate the technology. There’s a shared [00:04:00] responsibility in our society and our business to invite more people to share in that responsibility and, um, have some sort of collective ownership around designing, developing, and deploying these technologies in a way that helps us think really critically about building them in the most inclusive and accessible way possible.

 

Camille Morhardt: [00:04:24] The one thing I’m wondering is it sounds like you’re suggesting that Responsible AI is more [00:04:30] about the responsibility of a bunch of different stakeholders who are making AI or using it than it is about the technology itself. 

 

Chloe Autio: [00:04:40] That’s exactly right. And you know, I think one of the most critical elements of building any emerging technology, any technology system that has not been used or deployed before, and especially with AI, where we’ve seen great potential for harm, um, is that the input and the conversation, the [00:05:00] experience, the advice of this sort of diverse stakeholdership is what is so, so important to mitigating harm, making the technology more inclusive and really figuring all of that out.

 

Camille Morhardt: [00:05:11] Okay, great. So let’s dive a little deeper. My first question is: who is the technology responsible to? If we’re talking about, say, Artificial Intelligence, is it being responsible to, you know, the [00:05:30] person who owns the data or is putting together the model or the algorithm?

Is it responsible to the end user on whom data is being collected? How are you figuring that out?

 

Chloe Autio: [00:05:43] Yeah, that’s a great question. And I’m actually going to draw on a little bit of what I mentioned last time. I think when we think about Responsible AI and that kind of shared responsibility in all those who are contributing to making it, right, the same logic sort of applies. Where if we’re [00:06:00] only thinking about one constituency or one group of people who will feel the impacts of the technology, I think it really does a disservice to everyone else who may feel an impact along the AI lifecycle.

So when we ask or think about, you know, who, whom is this responsible to, I think the first question is really where is the greatest impact going to be felt? And to figure that out, I always start by asking or thinking about, you know, in which context will this technology be used or deployed? And who are the [00:06:30] communities and users who might be impacted? So I start there.

But at the same time, if more people are involved in the design, the development of these systems, thinking about the context in which they’re deployed and who might be impacted, I think more and more pathways of impact can come to light. And, um, it’s really important, I think, to explore all of those. So it’s not just the user, maybe it’s that user within their community. Maybe it’s that community within a sub [00:07:00] community. And how might that impact or affect society as a whole? Um, I think there’s a lot of ways that we can think about responsibility and the impact of responsibility, both at sort of the micro and macro level. And it’s really important to consider all of that.

 

Camille Morhardt: [00:07:14] So that kind of brings me around. I have read a number of things that bring up this concept you just mentioned of Socially-Responsible AI, which goes beyond maybe just the [00:07:30] individual level. Um, I think maybe that extends even beyond, say, just avoiding bias if possible, um, and actually becoming some kind of a force for good. Maybe that’s too extreme a way of saying it. Is that too extreme? Or is that kind of, do you see some of it heading in that direction?

 

Chloe Autio: [00:07:50] And when you say a force for good, you mean, like, instead of thinking about the impact of an AI system creating opportunity for an individual or hungry individual, [00:08:00] if there was a way we could broaden that opportunity and make it a force for good? Is that kind of what you’re asking?

 

Camille Morhardt: [00:08:07] Is that any kind of, uh, a trend that’s out there? Or is it simply kind of a “Do No Harm” type of, uh, application?

 

Chloe Autio: [00:08:16] I think we have to be really careful about this phrase, “do no harm” or “do no evil,” particularly in the context of thinking about Responsible AI. Just because, you know, with a lot of these technologies, the fact that they’re new and the way that [00:08:30] people are thinking about them is in some contexts a little bit new, or they’re applying an approach of thinking to a new system or scenario. I think it’s really hard to come to the table with this approach of “do no harm,” because I think the point of the work and the point of having these conversations is to mitigate the harm and understand, in a lot of different ways, um, how we can mitigate the harm.

But I think this sort of overall, you know, “AI is all good” or “AI is [00:09:00] only bad,” or if we apply AI, we’ll all be only good, or if we have AI, we’ll all be only bad, is really kind of a reductionist way to think. Because, like I said, I mean, as with any new technology, there is potential for great benefit and potential for some consequence. And we can just sort of think of those things at the same time, um, even if the goal is to do no evil or create widespread good or however you were thinking about that. I mean, I think part [00:09:30] of this work is really trying to figure out and understand both the good and the evil to make the good all that much better and the evil all that much less, if that makes sense.

 

Camille Morhardt: [00:09:40] Yeah. No, I think I’m feeling probably a little bit better about it, because I tend to worry when anybody in technology starts defining good or bad, right? Because those are obviously subjective; in some cases, probably most of humanity would agree on some things. [00:10:00] Um, but there’s a pretty big gray area with a lot of stuff. And so I think we could possibly get ourselves into trouble collectively if we’re trying to define, you know, designing for good.

 

Chloe Autio: [00:10:13] That’s right. That’s right.

 

Camille Morhardt: [00:10:14] Maybe safer just to realize any technology is going to have problems and going to have benefits, and to be aware.

 

Chloe Autio: [00:10:23] Yeah, exactly. And that’s where, like, coming back to this issue, this topic of context is so important, particularly [00:10:30] in this discussion around AI. It’s really thinking about the contexts in which this will be deployed and used, and understanding that those might be different. And the expectations, whether you’re the deployer of the technology, the user, the consumer, you know, um, someone in that ecosystem with that user, um, who may be sort of indirectly impacted by the technology. All those contexts are different, and we can’t ever just plop the technology, or the AI, in a [00:11:00] blanket way into these different contexts, thinking that it will all be felt and experienced the same. Just to your point, you know, I think that issue of context just couldn’t be more important.

 

Camille Morhardt: [00:11:12] I also think you bring up a really interesting idea of thinking of it as a lifecycle, and that it will affect different people and different communities, and maybe even not people, as it evolves or exists out there.

Um, [00:11:30] one question is, um, uh, AI is doing a lot of work kind of organizing and categorizing data for us right now–us being the world and anybody who’s using it. And its big migration is going to be to really interpreting that data on our behalf and providing us with some kind of view of it that maybe we hadn’t thought of before. It seems to me that’s come a little bit more slowly than we had thought in the past, but it will, you know, hit this tipping [00:12:00] point, I think. And then, you know, I think eventually we all have this vision of it, once it’s done the interpretation, actually acting autonomously on our behalf.

So we’re sort of starting with categorization and going through interpretation and then all the way to taking action based on that. What do you think we’re going to need to think about kind of foremost in the responsibility space as AI makes this transition?

 

Chloe Autio: [00:12:28] You know, there has been a [00:12:30] lot of academic and, I think, really important scholarly work on a lot of these issues. A lot of that work, if I may, you know, I’m going to mention one author that I think is really, really important in this space. Her name is Safiya Noble, a professor at UCLA who wrote the book Algorithms of Oppression, um, which is, I think, a really critical read for anyone who is starting to think about the role of AI and, and these systems that we’re building with it, right–that are [00:13:00] categorizing, interpreting, and then acting.

And what’s really important, I think, to think about and remember in terms of the responsibility, is that all of the assumptions that we are relying on and that this AI will be relying on, that will inform this AI or the AI making these decisions, um, all of these assumptions are built upon, you know, systems and structures in society that are totally not technology-related whatsoever. Right. [00:13:30] So institutions, power, uh, systems that have sort of racist or white supremacist structures or an underlying structure. And making sure that we’re thinking about those structures and those systems as we are applying AI, or allowing it to categorize, you know, interpret, act, will be so important to not preserve some of that historical institutional structural bias that we’ve seen, you know, do things like [00:14:00] perpetuate inequity, um, create imbalances in opportunity for certain people and communities throughout the world.

So I think to answer your question more succinctly, when we’re thinking about responsibility in this space, as we move forward, we really need to think about and understand the past and how to make interventions and corrections to some of the structures and systems that have foundations that we, as a society, [00:14:30] aren’t very proud of. Right. So, um, I think, I think that’s the most important thing. 

And then I would say the second thing, making sure that as we’re allowing AI–enabling AI–to make these decisions or, you know, take these actions, particularly when they’re consequential, is having a way to make sure that all of these diverse perspectives and knowledge streams are included. So, you know, researchers, uh, people who work in civil society, you know, think tanks, policy [00:15:00] professionals, business folks, data scientists, ethicists, ethnographers, right? Social scientists. Like making sure that people who understand these structures and systems of the past are there and part of the decision making, part of the teams, informing this AI and guiding it as it’s making these decisions as we move forward. So understanding the past and making sure that we have a diverse group of people, with a diverse background of experience, involved in, uh, really guiding AI into the future. I think, I [00:15:30] think that’s really where we need to focus as we move forward.

 

Camille Morhardt: [00:15:32] So it becomes multidisciplinary in a sense; it’s not just a matter of writing the algorithm. Um, so I was kind of looking up, uh, a variety of different Fortune 100 companies in different industries, uh, just to see what they were saying about Responsible AI. And I’m going to read off a few of the half a dozen or so different terms. They’re kind of similar. You can see where [00:16:00] there might be overlap. But I’m guessing that companies choose fairly carefully which words they use.

So what I’m hoping is that after I read these off, you can help us kind of understand why a certain word might be chosen over another word, or if this hearkens to some of the conversations that are happening in the policy realm these days. I’m not going to say which company said what word, but these are some of the words that come up. Uh, so they’ll, they’ll talk about Responsible AI, [00:16:30] including, and then the terms would be: safety, inclusiveness, transparency, privacy, security, accountability, fairness, trustworthiness, interpretability, explainability, and human-centered.

I’m interested in why somebody might pick transparency and somebody else might put explainability or [00:17:00] interpretability–which seem possibly a little bit interchangeable to me, but I’m sure there are some nuances there.

 

Chloe Autio: [00:17:06] Yeah. There are, and this is a really fun exercise, by the way. Great idea. So I think when I think about the difference between explainability and transparency, what is really important is that in order for a system to be explainable, it must be transparent, in the sense that at every step of the design of that [00:17:30] system, there needs to be clear data understanding: clarity around what data was used, why, where it came from, who was building the system, what was the intended use of the system, were there use cases that were restricted for the system? All of this stuff that goes into the design and development throughout the lifecycle is what measures [00:18:00] the transparency of that system. And the level of transparency, from my point of view, is what informs the explainability. You can’t have the explainability without the transparency.

And I think that in this specific algorithmic context, you know, we use this term explainability to mean, like, “well, I want to understand or have an explanation for every decision that was made by this automated system.” And that’s explainability, right. But [00:18:30] transparency is the what. It helps inform why the decision was made, but it goes a little bit beyond that in the sense that, you know, instead of just understanding why a certain outcome was decided or what happened, with transparency, you can, I think, better understand the impact of that and all of the decisions and all of the data that informed all of that.

So while it sort of seems like they could totally be interchangeable (transparency, explainability, what’s the difference?), you know, I think one [00:19:00] has to do more with output, right? Can I understand why this happened? And the transparency is everything that informs that why, which is so much more important, I think, for not just the end user, but anyone who is interacting with the system and trying to better understand it, from the time that it was conceived as an idea to when it ended up making decisions or having some kind of output. Hope that helps.

 

Camille Morhardt: [00:19:24] Yeah, I think especially in the case of [00:19:30] AI, um, that helps me, because we don’t know how AI makes its decisions a lot of the time. Yeah. I think I’ve used this example in another podcast, when we were talking about Artificial Intelligence overall as a definition. But I have heard that, you know, in identifying, uh, lung cancer at a very early stage, you put, you know, a bunch of physicians together and then put a bunch of, uh, AI systems together. And all they’re given is [00:20:00] the scan of a lung. And it’s this really, really early stage, right? And the only thing we know is which of those lungs developed lung cancer later.

So the physicians have their way of determining it, what they look for. And then we have AI develop a model where it becomes pretty accurate at deciding, you know, which of the scans is going to develop lung cancer. We don’t actually know how it figures it out. We only know how we figure it out. Uh, so is it [00:20:30] looking at the same thing we are? Or is it looking at something completely different that we never thought of?

And so it may be able to explain the result or explain the thing it looked at. But what you’re saying is, you know, these inputs that you’re putting into the system as a data scientist, or parameters that you’re setting up, or images or reinforcements that you’re doing during the training, that kind of transparency could help people maybe [00:21:00] understand, I guess, possible biases that could come out, or other kinds of things later.

 

Chloe Autio: [00:21:06] Right. And really what, and who, is going into the development process. I mean, I think another way to think about this is utility, right? Um, how useful is an explanation? It may not be very useful in some contexts, or it may be far more useful in some contexts as opposed to others. And I think when we’re thinking about Responsible AI, and particularly the responsibility component, [00:21:30] I think the term transparency is so much more critical, because it also has an element of accountability, right? You’re required to think about all of these different elements and things that are going into this lifecycle, this product, this system, and, you know, how clear and forthcoming we’re being about those things. I mean, that to me is transparency. And I think it helps add to this kind of accountability principle as well. Whereas explainability, it’s like, well, can we show or [00:22:00] decide how this happened? It’s kind of a yes or no answer. But when it comes to the transparency, I think there’s some inherent accountability in being able to say, “well, yeah, can we understand why or why not this happened? Sure. But let’s get beyond that and understand what fed into all of that to get us to that place of explainability.” Right? It’s sort of a layered process, I think.

 

Camille Morhardt: [00:22:24] That’s very helpful. Thank you very much for joining the podcast. It was great to talk.

Chloe Autio: [00:22:30] Absolutely. Thank you.

Camille Morhardt: And I just want to point out to folks, we have actually a few different topics around Artificial Intelligence. If you’re interested in other conversations, we have a general definition of Artificial Intelligence, which encompasses a range of things at a higher level, with Rita Wuhabi. And then we also have an episode on Deep Learning with Rhea Cheravu. Thanks so much, Chloe.

 

Chloe Autio: Thank you.

[00:23:00] Announcer: [00:23:00] Stay tuned for the next episode of Cyber Security Inside. Follow @TomMGarrison and Camille @Morhardt on Twitter to continue the conversation. Thanks for listening.

Announcer: [00:23:15] The views and opinions expressed are those of the guests and author, and do not necessarily reflect the official policy or position of Intel Corporation.
