[00:00:40] Camille Morhardt: Hi, and welcome to this episode of Cyber Security Inside. I’m Camille Morhardt and you have your other cohost here, Tom Garrison. We are happy to be here today with these special episodes of Live from the Green Room, where we are grabbing people out of the middle of the Intel AI Everywhere Conference to chat with them in a bit more detail about what it is they do and what they’re interested in, and anything else we’re curious about right now.
We have Amitai Armon with us. He is Chief Data Scientist for Intel’s internal AI group. Welcome, Amitai.
[00:01:15] Amitai Armon: Thank you. It is great to be here; great to meet you.
[00:01:19] Camille Morhardt: I’m wondering if you can kick us off by explaining what is the role of a chief data scientist for an internal AI group, as opposed to what might be an external AI group?
[00:01:30] Amitai Armon: So I'll start by explaining what the group does. We are a group of over 200 people who use artificial intelligence methods for Intel. AI has become very popular recently–all those models that crunch data and make predictions. But it is mainly used in consumer software, in the various games and the various services that we have from Google and Facebook, right? What we do is different. We use AI in an industrial setting. We use models that make the machines in the factory smarter, for example; instead of manufacturing each processor the same way, they learn what happened during the manufacturing process and personalize the manufacturing of each processor. And we don't just make the machines smarter. We also make the factories smarter. Instead of treating all the machines the same way, they treat them based on what has happened to them in manufacturing so far.
And not only do the machines and factories become smarter; the processors are also smarter. Instead of behaving the same way in every computer, they adapt themselves to the usage of the computer. So what we do is use AI methods to make Intel's products and manufacturing more efficient and more useful for our customers. That's the focus of our group.
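To make that concrete, here is a minimal, hypothetical sketch of the per-unit idea: a model that looks at measurements from earlier process steps and predicts a personalized adjustment for each unit, instead of applying one fixed recipe to all of them. All the names, features, and the model choice are illustrative assumptions, not Intel's actual system.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(42)

# Simulated history: for each past unit, measurements from earlier process
# steps (e.g., film thickness, temperature drift) and the tuning value that
# worked best for it at the current step.
past_measurements = rng.normal(size=(5_000, 8))
true_weights = rng.normal(size=8)
best_adjustment = past_measurements @ true_weights + rng.normal(scale=0.1, size=5_000)

# Learn the mapping from "what happened to this unit so far" to
# "how to tune this unit now".
recipe_model = Ridge(alpha=1.0).fit(past_measurements, best_adjustment)

# A new unit arrives: predict its personalized adjustment instead of
# applying the same fixed recipe to every unit.
new_unit = rng.normal(size=(1, 8))
print("suggested adjustment:", recipe_model.predict(new_unit)[0])
```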
[00:02:56] Tom Garrison: I have like 10 questions queued up after that. Let me start with a mundane one. So you're the Chief Data Scientist, you've got a PhD. Do you focus your time on building the models, or are you focused in some other area of AI?
[00:03:17] Amitai Armon: That's a good question. About a third of our group is data scientists, and the other two thirds are split between developers or machine learning engineers, and product people. The data scientists focus on building the AI models. The machine learning engineers or developers are the people who build the platforms around the models–whatever brings the data, checks the data, monitors the model when it's in production, deploys it–they handle this. And we have the product people, who engage with our internal customers, learn their needs, manage the projects, and make sure that they work correctly.
As the chief data scientist, I wear several hats. One of them is that I am professionally responsible for all the data scientists in the group–everything from hiring, through training, to advising on projects, publications, and patents. That is one part. Another is managing a smaller team that focuses on AI innovation, using cutting-edge methods to innovate, solve new challenges, and enable breakthroughs in solving our business problems.
A third thing is promoting AI across Intel and also externally in the ecosystem. You mentioned the AI Everywhere Conference that we have this week; the same week we also had an external conference, AI Week, which we co-founded with Tel Aviv University. That is done mainly for the Israeli ecosystem, promoting AI and industry-academia relations. So we do a lot both inside Intel and externally.
[00:05:06] Camille Morhardt: Is there a single quality that you look for in a data scientist? You’re obviously responsible for hiring, as you mentioned; but is there something that you look for that people wouldn’t think of–obviously prowess in building models–but is there something else that you look for?
[00:05:26] Amitai Armon: As you say, the basic thing is a passion for data science, modeling, and data in general. But the second question is usually whether the person is passionate about building products or about publishing papers. Many people in our domain were educated in academia, they have Masters degrees and PhDs, and then they feel that publishing a paper is the top achievement. We need people who want to build products. Sometimes we do publish papers in top conferences about our products, and I also review for conferences like ICLR–the leading conferences in the field–but this is not our focus; that's a side effect.
I think we need talented researchers who can make technological breakthroughs but adapt them to reality, rather than just trying to publish a paper. There are tens of thousands of papers published in AI every year. Not many people manage to create an impact of $10 million or $100 million using AI. And our group all together brings an impact of over a billion dollars every year to Intel. So I feel that it's more of a challenge, and more satisfying to me, than just publishing a paper at a conference.
[00:06:51] Tom Garrison: I have so many questions about AI. And specifically since you're a chief data scientist, I just have to ask this question. For example, in the hospital setting, AI is being used to detect things like breast cancer way earlier than the human eye has been able to detect it. And it gets me thinking about how, from a data perspective, you sift through the moral equivalent of this picture that has all kinds of data points all over it and narrow down to the things that actually matter, in a way that is probably counterintuitive to a human being, right? That's why in the hospital setting they can find these cancers sooner–because they've been able to sift through the data in a way that was counterintuitive to the radiologist.
But just in a generic setting, how does a data scientist with AI take the myriad of data that exists out there and figure out how to do something with it?
[00:07:55] Amitai Armon: That’s a good question. Sometimes it’s like magic.
[00:08:01] Tom Garrison: It feels like magic to me. That's why I couldn't let you go without asking this question (laughs).
[00:08:06] Amitai Armon: Yeah, of course. So actually AI works differently than humans; the way that AI learns is different. In the hospital setting that you mentioned, usually AI is able to support the physicians and bring them value by pointing out suspicious x-rays and so on. But I don't think that AI is yet capable of replacing them in deciphering the x-rays. And it will still take a significant amount of time until AI reaches that level; we're not approaching it in a few years or something like that. It's still difficult; the human brain still learns in a more sophisticated way than AI systems do.
The advantage of AI systems is mainly that they have the scale, the amount of working memory, for processing a large amount of data, and they can traverse many potential options for processing the data. For example, in a game of chess or Go, you can traverse many, many paths, right? You can check many options with AI, and you can give an AI system a million examples of x-rays, each with some label of whether it was bad or not, and the system is able to learn from it.
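To illustrate the supervised setup he describes–many labeled examples in, a trained predictor out–here is a minimal sketch on synthetic data. The feature count, model choice, and data are illustrative assumptions, not any real x-ray system.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend each "x-ray" is a vector of 64 extracted features, with a binary
# label supplied by a human expert: 0 = healthy, 1 = suspicious.
X = rng.normal(size=(10_000, 64))
y = (X[:, :4].sum(axis=1) + rng.normal(scale=0.5, size=10_000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# The model learns the pattern purely from the labeled examples.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```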
The advantage of humans is that they are better at what we call "zero-shot learning." They have a lot of prior knowledge about the world that they acquired in advance. So they don't need a million examples–they couldn't process a million examples, but they also don't need them. They can learn just from the five examples in the textbook of how the x-ray would look, and they understand from just those few examples they are shown.
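By contrast, here is a toy illustration of the few-shot idea: a nearest-neighbor classifier given only five labeled "textbook" examples, which can suffice when the feature space already encodes strong prior structure. Again, everything here is synthetic and illustrative.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)

# Five labeled examples, as in a textbook: class 0 clusters near (-1, -1),
# class 1 clusters near (+1, +1).
support_X = np.array([[-1.0, -1.1], [-0.9, -1.0], [-1.1, -0.9],
                      [1.0, 1.1], [0.9, 1.0]])
support_y = np.array([0, 0, 0, 1, 1])

clf = KNeighborsClassifier(n_neighbors=1).fit(support_X, support_y)

# New, unseen cases are classified correctly from those five examples alone.
queries = rng.normal(loc=[[1, 1], [-1, -1]], scale=0.2, size=(2, 2))
print(clf.predict(queries))  # expected: [1 0]
```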
So I think that AI, in a sense, complements people in what it is able to do. At Intel, we believe that AI empowers people. People who use AI are able to do more and focus on what they are good at and what they like to do–not on the tedious things that AI does better, but on the things where we have an advantage.
[00:10:29] Camille Morhardt: It almost sounds like humans extrapolate from very few examples or from the framework that they’ve put together by living and computers are more like processing or interpreting or distilling down from so much information.
[00:10:46] Amitai Armon: Right. Think, for example, about a baby: how does a baby learn? The baby hardly interacts with the world, right? It doesn't perform a lot of actions that have results; it's just starting to move. And it doesn't get millions of examples; it didn't read all of Wikipedia, all of the internet text that is required for language models like GPT-3. It didn't read all that. But still it's able to grasp language and start talking, right?
People who practice AI call this "self-supervised learning." Instead of learning from supervised examples, in which you have answers to questions or samples with labels, you learn from the data itself without any labels–the data, in a sense, supplies its own answers. I'm not sure I'm able to explain it in full, but the bottom line is that humans still have a learning mechanism which is far better than the learning mechanism of neural networks or other AI models. The human learning mechanism evolved over a billion years of evolution. The last step that distinguished us from other primates probably took just a few million years, but still, it took a long time to evolve, and we still don't understand how humans learn. AI learns in a much less efficient way, but it still has some advantages.
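A minimal sketch of the self-supervised idea, under the assumption that a toy next-word predictor is a fair stand-in: no human labels are given; each word's "label" is simply the word that follows it in the raw text. The same idea, at vastly larger scale, underlies language models like GPT-3.

```python
from collections import Counter, defaultdict

corpus = "the baby learns to talk the baby learns to walk the baby learns fast".split()

# Build (word -> counts of next word) from raw text alone:
# the data labels itself, no human annotation required.
next_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_counts[current][following] += 1

def predict_next(word):
    """Return the most frequently observed next word."""
    return next_counts[word].most_common(1)[0][0]

print(predict_next("baby"))    # -> "learns"
print(predict_next("learns"))  # -> "to"
```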
[00:12:22] Tom Garrison: What do you see as the future? You understand, obviously, what Intel's doing, and maybe the state of the industry. In three to five years, what do you think the future looks like?
[00:12:33] Amitai Armon: Yeah, I just asked that at the conference. I interviewed Professor Yann LeCun, who is considered one of the AI godfathers. He's the Chief AI Scientist of Facebook/Meta and a Turing Award winner for 2018. Back in the '80s, he invented the neural networks that banks used to read checks. And I asked him what he thinks the future will be: is AI going to be able to do anything that humans do? He said that it will probably take several decades until this happens. I'm more skeptical than he is, because I think there are some inherent things that AI will not be able to do the same as humans. Robots will not love their children, right? Robots will not be hungry. There will be differences between humans and robots in emotions, in understanding other humans, and in communicating with other humans.
So I think AI is going to continue evolving, continue to solve specific tasks better than humans, but we are not close to having a general intelligence which is able to do everything that humans do and even better than that.
I think it's important for us as a society to see how to best leverage these tasks that AI does increasingly well for social good purposes. I think it's important for people to be educated about AI, right? It's all around us. It's approving our credit transactions; it decides what we see on the web. So it's important for people to know more about it. I actually published an article today in one of the Israeli newspapers about the importance of doing more in AI education. Intel has some initiatives around that in high schools and universities, but I think there should be more. And once we understand it better, I think people will also be able to increase the usage of AI for social good purposes, like medicine and other sciences. Again, we have several activities like that at Intel as well, and we do lots of volunteering, but I think more can be done by society in general. We should learn more to do more for the good of mankind.
[00:15:03] Camille Morhardt: If you're going to do social good, who's defining what is good for society? And are you letting the machine decide that, or does that remain always a human decision?
[00:15:13] Amitai Armon: That's a good question. Currently it's humans who decide what's social good. I don't think we're close to the time when machines will rule the world or do all of these things themselves. I'll just quote what Yann LeCun told me a couple of days ago in the interview. He said, "The smartest people I know don't want to rule the world; they don't want to conquer the world." So the smartest machines will probably also have no desire to conquer the world. They will just play chess or play Go. We shouldn't be afraid of those apocalyptic scenarios of robots waking up and conquering us.
[00:15:56] Tom Garrison: So, no Skynet. You heard it here.
[00:15:57] Amitai Armon: That’s what he said. I asked him.
[00:16:00] Tom Garrison: Yeah, there you go. My last question has to do with a bit more about the future. What's stopping us from going even faster? What is the limit on the pace of innovation in AI?
[00:16:16] Amitai Armon: I think one key limit is that we don't have enough AI professionals. More people should study AI; it's not a compulsory course even in computer science degrees at many universities. That's one thing. The other thing is compute. Of course, computing is progressing, with bigger and bigger models and bigger computers, but it's still an obstacle. And the third thing is that it takes time to understand the secrets of nature–how should learning really work? AI now tries to imitate the brain's neural networks, and maybe that's similar to the mechanism of the way we see things, but it's probably still very different from the way we reason or do deep logical thinking. It just takes time for science to evolve; I'm not sure we can do this in one year even if we invest more. So it takes time.
[00:17:11] Camille Morhardt: Do you feel like when you're working with AI, you're working with a tool, a machine, a tactic, an algorithm; or do you feel like it's its own entity, and you're also trying to figure out how it's working and how it's achieving its results?
[00:17:30] Amitai Armon: No, I don't think it's its own entity; we understand the models that we build–it's not a black box. Also, most of what we do at Intel, most of the models, are just machine to machine. We don't handle people's data; we just tell the machines how to treat the processors or tell the factories how to treat the machines. I think currently AI is comprehensible, to data scientists at least. So we understand what we do at this stage. Maybe later on it will become too complicated for us as well (laughs).
[00:18:02] Camille Morhardt: Amitai Armon, thank you so much for letting us grab you away from this AI Everywhere Conference and AI Week Conference that you’ve also been involved in. It’s been really fascinating hearing from the Chief Data Scientist for Intel’s Internal AI Division. Thank you so much.
[00:18:24] Amitai Armon: Thank you for inviting me, a pleasure.