InTechnology Podcast

#8 – What That Means with Camille: Artificial Intelligence

In our first ever episode of What That Means — the Cliff’s Notes companion to the Cybersecurity Inside podcast — Camille is tasking Rita Wouhaybi, Principal Engineer for Industrial Solutions in the IoT group at Intel, with defining Artificial Intelligence in under three minutes.

(Spoiler: she nails it.)


Plus, Camille and Rita cover:


  • The Turing Test + how we measure intelligence in a computer or machine
  • Explainable AI/Biases in learning
  • The questions we should be asking as consumers and/or implementors of AI
  • Deciding what AI techniques to use and what to use them for
  • The confidence levels of AI
  • The one thing to keep in mind about AI
  • Why AI is not going to solve all our problems
  • What AI competition is doing for the industry

Check it out!

Here are some key take-aways:

  • If you feed AI bias, it’s going to spit out bias.
  • AI is not definitive. Every answer that AI gives you is going to have a confidence level.
  • AI is not going to solve all your problems. So pick the problem that makes the most sense.

Some interesting quotes from today’s episode:


“It’s based on some cognitive ideas, where you see information, or actually you see more like data, raw data, and you distill information out of it. And as humans, as well as animals, we do that all the time. So it’s the idea of creating a computer program that is capable of doing it.”

“I would even argue that to a large extent, when you have a child growing in a biased environment, that child will be biased as a child. And it’s going to take them to go out of that environment and expand their horizon — either through reading or experiencing other individuals — to widen that scope and get rid of that bias and reexamine it. And I think that could happen in AI, too.”


“AI is never 100% sure. The trick is, where is your tolerance? Do you want AI to make sure that if it sees something bad, to tell you about it, with the assumption that some of those might actually be good? Or the opposite? Which one matters more? So, if you are a medical doctor, would you rather have an AI that says, ‘Oh, I think this one has lung cancer’ more often and asks for further testing, or one that misses a few lung cancer diagnoses? Where do you want that error to wiggle? Do you want it to wiggle on crying wolf? Or do you want it to be very conservative and miss some diagnoses? Those are very important questions.”



Narrator: Welcome to What That Means with Camille, companion episodes to the Cyber Security Inside podcast. In this series, Camille asks top technical experts to explain, in plain English, commonly used terms in their field, then dives deeper, giving you insights into the hottest topics and arguments they face. Get the definition directly from those who are defining it. Now, here is Camille Morhardt.
Camille: Hi, and thanks for joining me today to figure out What That Means. Each episode I’ll be joined by a guest expert to explore security and technology terminology. I’ll start each episode by asking our guests to define our topic in under three minutes, then we’ll dive a little deeper.

My guest today is Rita Wouhaybi. She’s a Principal Engineer for Industrial Solutions in the IoT group at Intel. She has a double-E PhD from Columbia University and has spent the last two decades focusing on AI and machine learning, as well as peer-to-peer and distributed networks and game theory. Rita has filed over 290 patents and published over 20 papers in IEEE and ACM conferences and journals.
Rita, do you think you can define Artificial Intelligence in under three minutes?
Rita: All right. Well, artificial intelligence, or AI, is a branch of computer science that focuses on teaching computers how to learn from observations. And it does that by mimicking natural intelligence. So this is a very interesting concept. Well, it’s no longer new; it used to be new decades ago, because before that we really wrote a program in computer science that did one thing, and it could only do that one thing as well as we wrote it. What was really cool about AI is that AI can learn. We just create a program that can observe and learn from observation.
Camille: Okay. So basically we’re defining intelligence not as quantitative–that’s why we invented calculators and computers in the first place, to process faster than we can–but as matching this qualitative, subjective intelligence that humans have.
Rita: It’s based on some cognitive ideas where you see information or actually you see more like data, raw data and you distill information out of it. And as humans and as well as animals, we do that all the time. So it’s the idea of creating a computer program that is capable of doing it.
Camille: So putting together a framework or a pattern, and then applying things to it, old things and new things?
Rita: Exactly. It’s about feeding it a lot of data and then looking for patterns in this data. Sometimes with this data you’ll hear the word “labels.” It is labeled, meaning a picture of your cat has the word “cat” associated with it as well, and so on and so forth. And sometimes it’s not, and there are different techniques for either.
Camille: So, this is what’s happening when I try to log into an account and it says, “please poke all of the pictures on here that look like a traffic signal,” and I push them all, and then it gives me 16 more pictures and I have to do it again. It’s basically using me as an unpaid labeler?
Rita: Yeah. That’s not a hundred percent AI (laughs) but yes, yes you are.
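The labeled-data idea Rita describes can be sketched in a few lines. This is a minimal, hypothetical illustration of supervised learning: each data point carries a label (like the word “cat” attached to a cat picture), and the program predicts the label of the nearest labeled example. The data points and feature values below are invented purely for illustration.

```python
import math

# Toy labeled dataset: each 2-D point (imagine two measured features)
# comes tagged with a label, just like a cat picture tagged "cat".
labeled_data = [
    ((1.0, 1.2), "cat"),
    ((1.1, 0.9), "cat"),
    ((3.0, 3.2), "dog"),
    ((2.9, 3.1), "dog"),
]

def classify(point):
    """Supervised learning at its simplest: return the label of the
    nearest labeled example (1-nearest-neighbor)."""
    nearest = min(labeled_data, key=lambda ex: math.dist(point, ex[0]))
    return nearest[1]

print(classify((1.05, 1.0)))  # lands near the "cat" examples
print(classify((3.10, 3.0)))  # lands near the "dog" examples
```

Unlabeled data, by contrast, has no tags attached, so the different techniques Rita mentions (clustering, for instance) have to find structure in the points alone.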

Camille: And now, let’s dive a little deeper. Okay. Let’s back it up a little bit, Rita. So first of all, we’ve been defining artificial intelligence, which sort of requires we define intelligence. How are you measuring intelligence in a machine or a computer?
Rita: Yeah. You know, that’s a very, very good question. And I think the brightest in the industry have been asking themselves this question for a while. There is a very old definition that was, uh, put together by Alan Turing, who was a mathematician and computer scientist who died a long time ago and contributed a lot to the field. It’s called the Turing Test. And Turing said: suppose I have a person walk into a room, and he or she knows that there are two entities behind a curtain, one of them a computer and one of them a human. And this person who walked into the room asks questions and gets answers from behind the curtain, and doesn’t know if these answers are coming from the human or the computer. If, by looking at those answers, they cannot determine which one the computer answered versus the human, then we have created intelligence.
And I think that is so cool as a definition, because it really is not about crunching numbers and hard mathematical equations. It’s really talking about the fact that intelligence is a human and animal characteristic that we’re trying to capture in a machine, and hence it requires a different way of thinking about how we measure it.
Camille: Right. We already know that machines are better at the quantitative, but now we’re asking, when it comes to something subjective or qualitative, are they able to do it? Okay. So that brings us now to this next question about, um, explainable AI, or biases in learning. Can you talk about how that happens?
Rita: Yeah. So there are actually many, many controversies in AI. And I think, again, it comes back from the fact that it’s not black or white. AI is really trying to mimic human nature and animal nature to mimic intelligence and it’s doing that by looking at data.
So if your data is flawed, you’re learning things that are flawed. This applies to bias as well as a lot of other ethical questions related to AI. If I only show you one kind of cat, or one kind of dog, or of human or social issues, then that’s all you’re going to learn. And it’s not going to be easy for you; it’s going to take a learning curve to extrapolate from that data into something else that you haven’t seen before.
Camille: Are you saying there’s no morals?
Rita: Well, it’s not about morals. It’s about a term–unfortunately I don’t like this term a lot, but it’s very accurate–a term that we have in computer science that says “garbage in, garbage out.” If you feed in bias, it’s gonna spit out bias.
And I would even argue that, to a large extent, when you have a child growing in a biased environment, that child will be biased as a child. And it’s going to take them going out of that environment and expanding their horizon–either through reading or experiencing other individuals–to widen that scope, get rid of that bias, and re-examine it. And I think that could happen in AI, too.
Camille: The computer’s mimicking what you’re feeding it. You’re telling it, “this is a cat,” or you’re saying, “to these people, we deny loans.” Okay, now the computer’s learning, and so now it’s going to deny loans to those people. You find out, “Whoops, we actually had unintended bias fed into that model.” Is there a way to correct that or extrapolate, other than, you know, removing all bias, which I’m not even sure is feasible for a human feeding data?
Rita: Yeah. So that’s an active research area, as we like to say. There are a lot of techniques that we can apply, looking at the data and inspecting the data for bias. So, ironically, we can create an AI model to look for bias. Just as, when we create computers that are more secure, there are different techniques, and one of them is trying to launch an attack on the computer, right? We hire someone and say, “Hey, can you attack it?” I think we should do the same.
We should collect a bunch of data and hire the brightest and the smartest, not just to develop the algo (algorithm), but to see if the algo, you know, has all the values that we would hope and expect from it. And does it have bias? Can you test it? Can you crash it and make it act biased so that we can find the issues and fix them?
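The kind of probing Rita describes, testing whether an algorithm “acts biased,” could look something like the following sketch: a crude check that compares a model’s approval rates across groups. The decision records, group names, and tolerance are entirely made up for illustration; real fairness audits use far more careful criteria than this single demographic-parity-style gap.

```python
# Hypothetical loan decisions produced by some trained model.
# Each record is (group, approved). All values are invented.
decisions = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def approval_rates(records):
    """Fraction of approvals per group."""
    rates = {}
    for group in {g for g, _ in records}:
        outcomes = [approved for g, approved in records if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

def flag_bias(records, max_gap=0.2):
    """Flag the model if approval rates across groups differ by more
    than max_gap -- a deliberately crude bias probe."""
    rates = approval_rates(records)
    return max(rates.values()) - min(rates.values()) > max_gap

print(approval_rates(decisions))  # group A approves 75%, group B only 25%
print(flag_bias(decisions))       # the 0.5 gap exceeds the 0.2 tolerance
```

The point of the sketch is Rita’s: you deliberately stress the algorithm with test data and inspect its outputs for disparities, the same way a security team hires someone to attack a system.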
Camille: There’s another way, I mean, I heard this term explainable AI. Is this a means of looking at how it’s making a decision to try to discern whether or not it has bias?
Rita: Well, explainable AI and bias are two different things. And of course, you know, they can come together, like you just said. But they are also two separate things.
If you have a child, and you’ve been training that child to learn that certain areas are hot versus cold so that they don’t get hurt, and then they go and touch something that is very hot, they learn immediately that they shouldn’t touch it again.
Now, why did they touch it? Do you ever ask yourself that? I think as humans, we don’t, you know, stop a toddler and say, “Why did you touch that stovetop? You shouldn’t have touched it!” So that’s what we’re asking AI to do with explainable AI. We’re saying: when you make a mistake, why did you make that mistake? Probably because we fed it some wrong data, but sometimes we’re feeding it millions of data points, and it’s really hard to go back and look at the explainability.
Now, you’re absolutely right. Sometimes we don’t know that bias is happening, because it’s hard to track what’s happening in that AI that we have created. It created its own thing, and it’s very difficult to track what is happening inside it. So again, this is another really active area in academia and industry, to look for solutions and ways to mitigate.
Camille: Well, one of the things that I’ve heard is, “we don’t even know.” So let’s say we have a whole bunch of radiologists, well-trained doctors who’ve had, you know, dozens of years of experience. Um, and then you have a machine looking for, let’s say, early-stage lung cancer. And sometimes the machines are finding it. I think so far, maybe the best we’ve found is a combination or a hybrid between humans and machines. But we don’t even know how the computer is finding it. We don’t know if it’s looking at a certain pixel, or what it’s measuring. We know how we train ourselves to find it, but we don’t know what it’s doing.
We’re only showing it results. Did you, did you detect it? Yes or no? We actually don’t know how it’s doing that.
Rita: Correct. So it is actually finding patterns that we have failed to find over years of training. Which is fascinating, right? Because I see the same in industrial. I had the pleasure of working with a customer recently, and they had a domain expert who spent four years optimizing a certain process. And we came in, and within six months we wrote a piece of AI that outdid him. And he was really humble and really impressed with the fact that the AI was achieving higher accuracy than he was. But we couldn’t tell him why (laughs). It was fascinating.
However, I want to go back to a point you made, that we train certain individuals and experts to find these things based on a process. I’m not a hundred percent sure that is accurate, because sometimes domain experts also develop a gut feeling. I remember working with this particular domain expert–it’s not in the health field, obviously, but it’s also a super-structured environment, which is factories. And I asked him, “What do you think about this plot?” And he’s like, “Yeah, it’s not good.” I’m like, “Why?” And he’s like, “I don’t know. It just doesn’t look good to me.” (So “not good” meaning it’s faulty.)
So even we as humans are not always able to explain our answers. Not just that, our answers are almost never black and white, right? If you ask me something right now, I’ll give you an answer. And, you know, in the back of my head, I’m thinking, “Yeah, I’m about 90% confident that this is true.” And so is AI, by the way.
Camille: Right. Very interesting. So what, uh, what kinds of things should people be asking right now? What are sort of the top questions if you’re meeting with somebody and you’re going to implement AI, maybe in your business, or you’re a consumer and you realize, you know, your phone is listening to your conversation? What should we be thinking about or worrying about or concerned about or interested in?
Rita: I want to dissect this question into multiple things. I think what we should be interested in, or asking ourselves, is this: just like 40 or 50 years ago, we always thought that computer science and everything related to computers was going to be just a job for a bunch of geeks who were going to major in it. And now, fast forward, we’re finding almost every college graduate is taking some form of programming or some form of computer science.
I think we’d better wake up and smell the coffee and start getting trained on what AI is. The more we know about it, the more comfortable we are and, uh, you know, the less nervous.
Um, issues? There are a lot of issues. It’s very new, and I know it has started solving a lot of problems. Hence, many people are putting so much on it. But keep this in mind: it’s really, not an infant, but I would say a toddler. So it’s going to trip a few times. And then, um, the more we engage, the more we can define it.
Now, if you’re in an enterprise and you want to use AI to solve a problem, educate yourself about the AI, a little bit of the AI jargon. What is accuracy versus precision versus recall? What do these mean? These are trade-offs–tuning knobs that you can turn to decide what AI technique you want to use and for what. So educating yourself about some of this jargon so that you can ask the right questions, I think, is very, very important.
Camille: Well, what were those things you just said? Precision versus accuracy? Educate us quickly on those pieces of jargon.

Rita: I actually want to keep accuracy and precision together, because I think those nuances aren’t going to matter as much; but let’s talk about recall, right? Accuracy versus recall. Remember, every answer that AI is going to spit out at you is going to have a confidence level. Keep that in mind. AI is never a hundred percent sure. The trick is, where is your tolerance? Do you want AI to make sure that if it sees something bad, it tells you about it, with the assumption that some of those might actually be good? Or the opposite? Which one matters more, right?
So if you are a medical doctor, would you rather have an AI that says, “Oh, I think this one has lung cancer” more often and asks for further testing, or one that misses a few lung cancer diagnoses? Where do you want that error to wiggle? Do you want it to wiggle on crying wolf? Or do you want it to be very conservative and miss some diagnoses? Those are really important questions. And often, when I work with customers who have not had a lot of experience in AI, this is where we spend a lot of our time.
Do you want me to tell you, “These are bad, bad, bad, bad, bad products,” even though I’m not a hundred percent sure–my accuracy isn’t that high? Because we really never want to send a bad product to the market. If you’re a car manufacturer, by the way, yes, your answer is, “Yes, please do that!” Or do you want me to be more forgiving, because your process is checking for it elsewhere, which is fine; you’re accounting for it there.
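The trade-off Rita describes can be made concrete. A model emits a confidence with every answer, and where you set the decision threshold determines whether the system “cries wolf” (catches everything bad but raises false alarms) or stays conservative (rarely wrong when it flags, but misses real cases). The confidence values below are invented for illustration; this is only a sketch of how precision and recall respond to the threshold knob.

```python
# Each prediction is (confidence_that_defective, actually_defective).
# The numbers are made up to illustrate the threshold trade-off.
predictions = [
    (0.95, True), (0.80, True), (0.60, False),
    (0.55, True), (0.30, False), (0.10, False),
]

def precision_recall(preds, threshold):
    """Flag anything whose confidence meets `threshold` as defective,
    then measure precision (how often a flag is right) and recall
    (how many real defects we caught)."""
    flagged = [(conf >= threshold, actual) for conf, actual in preds]
    tp = sum(1 for f, a in flagged if f and a)       # true positives
    fp = sum(1 for f, a in flagged if f and not a)   # false alarms
    fn = sum(1 for f, a in flagged if not f and a)   # missed defects
    precision = tp / (tp + fp) if tp + fp else 1.0
    recall = tp / (tp + fn) if tp + fn else 1.0
    return precision, recall
```

With a low threshold of 0.5 this toy model cries wolf: it catches every real defect (recall 1.0) but one of its four flags is a false alarm (precision 0.75). Raise the threshold to 0.9 and it never raises a false alarm (precision 1.0) but misses two of the three real defects (recall about 0.33), exactly the “conservative, misses some diagnoses” end of Rita’s spectrum.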
Camille: Are you going to make that differentiation? I mean, clearly medical, that kind of helps; we all understand what we would rather know. But is it a software versus a hardware difference, or how long something’s been out in the market, whether you can correct for it later? How should people base those decisions?
Rita: They should base those decisions based on their use cases. Both software and hardware are able to meet those trade-offs, typically. And there are some exceptions where you don’t have enough data to make that tradeoff. But most of the time, these are tradeoffs. These are turning knobs that you’re doing with the data scientists, that you’re doing with your developer.
Camille: If you were going to tell, um, CEO of a Fortune 100 company one thing let’s say they’re about to walk into a board meeting and the topic is going to be Artificial Intelligence. What is the one thing that they should really be keeping in mind?
Rita: I don’t like to be negative, however, the one thing they should keep in mind–because I often feel like AI these days is over-hyped–just keep in mind that AI is not going to solve all your problems. So pick the problem that makes the most sense. Pick the problem that is really a pain point for you, for AI to solve as well as you have enough data, um, so that people can deliver a good solution for you. And keep in mind that it’s not what we call a “deterministic solution.” It’s always going to have a confidence level with every answer it gives you.
Camille: Okay. So what is everybody agreeing on right now when it comes to AI? And are there sort of major divergences of opinion? (I don’t know if that’s a word or if I just made it up.) But when we’re at the academic and industry conferences, are we seeing disagreement and different solution paths?
Rita: I don’t know that we’re seeing disagreement. I think there are a lot of different techniques. There are people who are locked into different techniques and feel that one technique is going to take over the world, um, you know, whether that technique is, um, deep learning or some other technology. Um, we have seen these techniques, by the way, come and go. Some of them are inspired by nature. Some of them are inspired by neurology. Some of them are inspired by physical things or, uh, social behavior.
I think this is where you see a lot of disagreements, a lot of, like, “I’m going to write a paper and show you how much my algo is going to outperform your algo.” But the good news is that this competition is super healthy, and it’s creating a lot of innovation, and it’s making AI such a fun area to be in, especially if you want to innovate and take advantage of it.
I think the one thing that everyone agrees on–and I’m glad that they do–is that AI is here to stay. This is not hype anymore, right? And, uh, sure, there are some areas that are still hard. There are some open problems, but it’s really here to stay.
Camille: So, um, final thing for you: 290 patents is a lot. Is that more or fewer than the number of tomato varieties you grew this year?
Rita: (laughs) It’s definitely more, and I don’t consider it a lot. There is no such thing.
Camille: So, um, can you tell me about, um, you’ve brought with you, uh, from the old world, some number of heirloom tomato seeds. How did this transpire, and how many rows of tomatoes, how many plants did you do this year?
Rita: Uh, I think this year I went a little conservative. I think I just stayed with 90 (laughs).
Camille: 90.
Rita: Yeah. I’ve had over 140 in the past. Uh, this is a fun activity. This is how I get my hands dirty, and this is how I learn something new. I challenge myself.
Camille: Well, thank you for the starts.
Rita: (laughs) Sure thing.
Camille: Thanks for joining us today on What That Means. We’ll dissect more terms in the weeks ahead. For more discussions about technology and security, be sure and catch the next episode of Cyber Security Inside.

Subscribe and stay tuned for the next episode of Cyber Security Inside. Follow @TomMGarrison on Twitter to continue the conversation. Thank you for listening.
