InTechnology Podcast

Our Predictions on AI, Sustainability, Machine Consciousness, and More (132)

In this episode of InTechnology, Camille and Tom reflect on their predictions for the hot topics of 2022 and make new predictions for what conversations will be leading the tech world in 2023. They revisit their correctly predicted trend of AI playing a bigger part in computing and cybersecurity, as well as Camille’s conversations on topics like Indigenous data sovereignty and AI bias with What That Means guests. Camille and Tom also make predictions for next year’s hot topics: sustainable or “green” software, quantum computing, and machine consciousness.

To find the transcription of this podcast, scroll to the bottom of the page.

The views and opinions expressed are those of the guests and author and do not necessarily reflect the official policy or position of Intel Corporation.

Follow our hosts Tom Garrison @tommgarrison and Camille @morhardt.

Learn more about Intel Cybersecurity and Intel Compute Lifecycle Assurance (CLA).

AI in 2022: Security and Ethics

One trend Tom and Camille accurately predicted at the end of last year for 2022 was how AI would move beyond the hype and play a bigger part in computing and cybersecurity. In their episodes of InTechnology and What That Means this year, they really got into the intricacies of AI, how it works, and some specific applications of it. Conversation topics around AI included AI for threat detection and security with Google’s Ashwin Ram, Indigenous data sovereignty, and AI bias with Intel’s AI Ethics Lead Architect Ria Cheruvu.

Predictions for 2023: “Green” Software, Quantum Computing, and Machine Consciousness

Tom and Camille also make their predictions for what conversations will lead the world of technology, sustainability, and security in 2023. Unsurprisingly, they believe there will be a growing intersection of all these topics; this past year alone has shown how interdependent they already are. Camille predicts that sustainable software, or “green” software, will be a very important topic in sustainability next year, while Tom predicts there will be growing interest in preventative security for quantum computing as well as in tackling the many questions about machine consciousness.

Featured Episodes

The previous episodes featured and mentioned in this episode are listed below. Check them out!

  1. What That Means with Camille: Interactive AI (NLP)
  2. What That Means with Camille: Machine Consciousness
  3. What That Means with Camille: Scaling AI at the Edge
  4. What That Means with Camille: How Robots Learn
  5. Yes, No, and Everything in Between: Quantum Computing and Security
  6. Live From the Greenroom – Ethics in AI: Who Decides?
  7. What That Means with Camille: Indigenous Data Sovereignty


[00:00:12] Tom Garrison:  Hi and welcome to InTechnology. I’m your host Tom Garrison.  And I’m joined by my co-host Camille Morhardt.  And Camille, I’ve been in a reflective mood lately.  I guess that’s a feeling that hits most of us at the end of the year.  But I was thinking back on an article you and I wrote just about a year ago, anticipating the trends of 2022.

[00:00:34] Camille Morhardt:  In the tech world, that can be a tricky proposition.

[00:00:36] Tom Garrison:  Yeah, exactly (laughs).  But it’s also a useful exercise to see where we’ve come from and where we’re headed.  And I’d like to do that in this episode of InTechnology.  And as far as the looking back part goes, on at least one 2022 trend we nailed it.  And that was predicting that artificial intelligence, or AI, would move beyond hype and play a bigger part in computing and cyber security.

[00:01:01] Camille Morhardt:  It’s good to know that the crystal ball was working in one respect.  Although I have to say, I think it’s cheating a little bit to predict that AI is gonna be playing a bigger role.  I think that we ended up having a lot of conversations that got into the intricacies of AI and how it works and broke it down, rather than treating it as this monolithic term that encompasses almost everything right now.  We looked at specific things and applications of it.  One that I remember is speaking with our colleague Rita Wouhaybi about how her team here at Intel worked in a partnership with Audi to look at improving manufacturing processes in their factories.

[00:01:42] Tom Garrison: Yeah, I remember that.  And you’re right, it’s kind of unfair to say we nailed AI, like we were the first ones to really call that one (Camille laughs).  But we did nail a few things with regards to the details.  And to your point, we’ve spoken with a number of researchers in this area around AI for threat detection and security.  And, you know, one episode that stuck with me was when you spoke with Ashwin Ram from Google about what people call Interactive AI.  Ashwin is the Director of AI in the Office of the CTO at Google.  His work focuses on Natural Language Processing, which, for those unfamiliar with the term, is basically teaching AI to mimic conversation and language use.  A good example of this is how text apps on your cell phone and other programs can predict what we might want to type.  We’ve probably all experienced this: you’re typing and all of a sudden your device fills in what you were about to say.  Well, that’s what we’re talking about here.  And I think that topic on its own is interesting, but what especially caught my attention in your conversation was that a lot of AI systems interact to make this happen.
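(For readers who want to see the core idea concretely, here is a minimal sketch of next-word prediction using simple bigram counts. The toy corpus below is invented for illustration; the systems Ashwin describes use large neural language models, but the goal of guessing the most likely next word is the same.)

```python
from collections import Counter, defaultdict

# Toy bigram model: predict the next word from counts of word pairs.
# Real predictive-text systems use neural networks, but the idea is
# the same: estimate which word most likely follows what you typed.
corpus = "see you soon . see you later . talk to you soon".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most frequently observed after `word`."""
    candidates = bigrams.get(word)
    return candidates.most_common(1)[0][0] if candidates else ""

print(predict_next("you"))  # -> "soon" (seen twice vs. "later" once)
```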

[00:02:50] Camille Morhardt: Yeah, it’s true.  So teams like Ashwin’s are feeding a bunch of different AI systems with data from books, videos, phone conversations.  And of course it extends beyond language.  It goes to gathering information from smart home devices, which can even figure out which light to turn on or off based on what room you’re in or what time of day it is.  Similarly, autonomous cars have sensors that provide the ability to make decisions.  A lot of those adjustments around the edges, the customizations around how you would reply, remain private, specific to your own phone, and customized to you, but of course they’re also sending learnings back to a central model.  And some of these topics really get into that intersection and delicate balance of privacy and utility.  And so he talks about that.

[00:03:40] Ashwin Ram: So that’s an example of a larger problem that is sometimes called a “filter bubble.” When you read news on a newsfeed online, for example, when you listen to radio, when you’re typing on the phone and other things, these personalization models get better and better at modeling you. They also get better and better at filtering out things that you wouldn’t want to see, but in doing so they’re also restricting and, in some sense, narrowing you into a filter bubble. You’re living in a little bubble world of your own, where there’s very little peripheral vision into what else is going on.

So to avoid that, algorithms need to be designed in ways that do allow a little bit of what in machine learning we call exploration, in addition to exploitation. Exploitation means building on what we already know about you and following the tried-and-tested route; exploration means experimenting a little bit and trying other alternatives.

How much you explore vs. exploit depends on the use case. If you’re typing and your job is to get this thing typed and move on to other things, it’s not important enough; maybe more exploitation is fine, and once in a while you might type something different, but most of the time it’s right to just move on to something else. If you are reading news, you sure as hell do want a broader viewpoint, because otherwise we just end up with more and more segmented viewpoints of people that never talk to each other. People have a confirmation bias; they like to read what they already believe. So depending on the application, we can tweak these trade-offs and give you the kind of broader worldview that you would like, while still helping you expeditiously on the path that you probably are going to take.
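(The trade-off Ashwin describes is what recommender systems often implement as an epsilon-greedy policy. A minimal sketch is below; the item list and the epsilon values are illustrative assumptions, not anything from a real product.)

```python
import random

def recommend(ranked_items: list[str], epsilon: float = 0.1) -> str:
    """Epsilon-greedy selection: mostly exploit the top-ranked item,
    but with probability `epsilon` explore a random alternative so
    the user's filter bubble never fully closes."""
    if random.random() < epsilon:
        return random.choice(ranked_items)  # explore
    return ranked_items[0]                  # exploit

# Illustrative ranking; per Ashwin's point, a news feed might tune
# epsilon higher than a keyboard app, where exploitation is fine.
ranked = ["article you'll probably like", "adjacent topic", "something new entirely"]
print(recommend(ranked, epsilon=0.2))
```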

[00:05:27] Tom Garrison:  That was Ashwin Ram from Google.  And you know, Camille, what Ashwin is talking about is obviously more consumer-based.  But “filter bubbles” are an issue for companies using AI, too, and it relates to the issue of bias.  And that’s something AI data scientists have told us they’re trying to address.  They’re thinking more about what data they’re gathering and from whom, what conclusions the AI is reaching, and whether there could be bias there.

[00:05:55] Camille Morhardt:  Yeah, and we’ve actually delved into some leading-edge conversations with respect to bias.  One of them was a conversation on Indigenous data sovereignty that I really think everybody should take a listen to.  For another, I spoke with Ria Cheruvu after she appeared in the keynote at the Intel Innovation Summit in September.  She’s 18 years old and she’s Intel’s AI Ethics Lead Architect, so a very impressive person.  In fact, our CEO introduced her as, I think, the future CEO.  I asked her about a lot of topics in AI, but also whether discussions about bias are having a real-world impact.

[00:06:37] Ria Cheruvu: With AI, we have a sense of the different overarching disciplines within AI. Reinforcement learning, supervised learning, unsupervised, right? We’re able to categorize it fairly nicely. For responsible AI and AI ethics, we’re just getting there. We do want to have hierarchies of prioritization, levels we can use to decide which AI model needs to have more stringent ethical AI guardrails put on it as compared to another. And that is really based off of the risks and harms, in terms of analyzing the ethical implications of the system on society. Then again, in and of itself, those methodologies and the definitions and frameworks that we use to figure that out, that’s still under debate. If you use one metric or definition in order to, for example, identify the fairness or bias of a system, you could, if you’re optimizing for that fairness metric, accidentally exacerbate another. So you start to see a lot of different metrics that you need to look at, some of which may not be relevant at all, and you have to tailor it accordingly.

But putting aside those problems, yes, there is definitely a prioritization level or a risk level. I personally think the European Commission’s proposal on AI does a great job of doing the categorization.  Having that delineation based on the use case of AI systems and how it builds up over time, that’s definitely very useful. For example, AI being used for determining access to employment or to education definitely has very, very big ethical implications and probably should be constrained very much, whereas for the use case of AI in games or for Instagram filters, you probably don’t need that much of a constrained system. For AI in healthcare, we can start to think about the different obligations that we might need for chatbots or similar types of use cases. They definitely have their own risks and harms associated with them, and we want to treat them differently depending on the types of implications and harms that they can bring up.
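(The tiering Ria describes could be expressed as a simple lookup from use case to required guardrails. This sketch is loosely inspired by the European Commission’s proposal she mentions; the categories, labels, and obligations below are illustrative placeholders, not the official taxonomy.)

```python
# Illustrative risk tiers; the actual regulation defines its own
# legal categories and obligations.
RISK_TIERS = {
    "employment screening": "high",     # access to jobs: stringent guardrails
    "education admissions": "high",     # access to education: stringent guardrails
    "healthcare chatbot":   "limited",  # transparency obligations
    "photo filters":        "minimal",  # few constraints needed
}

def required_guardrails(use_case: str) -> str:
    """Map a use case to a hypothetical set of ethical-AI obligations."""
    tier = RISK_TIERS.get(use_case, "unclassified")
    return {
        "high": "conformity assessment, bias audits, human oversight",
        "limited": "disclosure that users are interacting with AI",
        "minimal": "voluntary codes of conduct",
    }.get(tier, "assess risks and harms before deployment")

print(required_guardrails("employment screening"))
```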

[00:08:20] Camille Morhardt: That was Ria Cheruvu, AI Ethics Lead Architect at Intel.

[00:08:32] Tom Garrison:  Well, Camille, we’ve reveled in our trend prediction success so far on this episode, but I’d like to switch gears and look forward to what might be in store for 2023.  Do you have anything in technology, security, or sustainability that you’re keeping an eye on for next year?

[00:08:50] Camille Morhardt:  Yeah, actually, what I’m really interested in is that, as we’ve progressed through the year, more and more commonly those three topics turn out to be completely interdependent.  Very difficult to separate them, ultimately.  As some of our regular listeners have noticed, we’ve actually specifically added sustainability and technology to the mix of topics on the podcast.  So we’re really exploring that intersection.  And one of the things we talked about is this movement around sustainable software, sometimes known as “green” software.

[00:09:26] Tom Garrison: When you say “green” as it relates to software, what exactly does that mean?  I’m familiar with the hardware and the chip world, and I know part of the focus there has been on using less energy to do the same amount of work at the same rate.  Is that what you mean?

[00:09:42] Camille Morhardt:  Well, to be honest, I wasn’t 100% sure what it meant, either, because software in and of itself is not a physical product, so I was like, how does it go green?  But I also had some guesses.  So I reached out to Asim Hussain, who is Director of Green Software Engineering at Intel.  He’s also the co-founder of the Green Software Foundation.  And I asked him to define the term.  We will be airing that “What That Means” episode in the next few months, so here’s a little preview.

[00:10:12] Asim Hussain: So there are multiple different ways you can think about being green when it comes to software. One way you can think about it is building software to make the world more sustainable. For instance, you could build software which does farming in a more environmentally friendly way. Or you can acknowledge that software itself is an emitter of carbon emissions into the atmosphere and ask how you actually reduce the emissions that software itself is responsible for. And that’s how we define green software: software which really takes responsibility for its own emissions and tries to minimize or eliminate as much of them as possible.

[00:10:50] Tom Garrison: Emissions, huh… that’s interesting.  And I guess, considering all the software we each use in our daily lives, changes like green software could have a huge impact.

[00:11:01] Camille Morhardt:  Asim did give me some context for that.  For example, if we look at software used on machines that are running on electricity, he points out that much of the electricity created in the world still comes from burning coal.  So any kind of reduction you can create in the use of electricity is gonna help reduce production of that electricity.  So I am interested to see what kind of sustainable software, or “green software,” comes around in the coming year.
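(As a back-of-the-envelope illustration of why less compute means less carbon: in green-software discussions, a workload’s footprint is often approximated as energy used times the carbon intensity of the grid supplying it. The power draw and intensity figures below are placeholder assumptions, not measured values.)

```python
# Rough model commonly used in green-software discussions:
#   carbon (gCO2e) = energy (kWh) x grid carbon intensity (gCO2e/kWh)
# All numbers below are illustrative placeholders.
avg_power_watts = 200   # hypothetical server power draw
hours_per_year = 24 * 365
grid_intensity = 475    # gCO2e per kWh, a commonly cited global average

energy_kwh = avg_power_watts / 1000 * hours_per_year
carbon_kg = energy_kwh * grid_intensity / 1000

print(f"{energy_kwh:.0f} kWh/year ~ {carbon_kg:.0f} kg CO2e/year")
# A software change that cuts CPU time 20% cuts this roughly proportionally.
```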

[00:11:34] Tom Garrison: That’s a good one, Camille.  I’m gonna be looking forward to that one.  Ok… well for my 2023 prediction I’m gonna put my neck out there and say, I think quantum computing is something to keep an eye on.

[00:11:48] Camille Morhardt:  I’m gonna challenge you Tom because I think that sounds like saying AI is moving past the hype, sort of (laughs).

[00:11:56] Tom Garrison:  You know I’m really big on being right on these prediction shows.  So yeah.

[00:12:01] Camille Morhardt:  Just broaden that focus.

[00:12:03] Tom Garrison:  It’s so simple to call.  But I have to clarify what I’m saying.  It’s not because I think we’re going to have quantum computers next year.  Great minds are working on creating computers that can solve what we currently can’t solve, and doing it in less time.  But most predict we’re still years away on that.  What I’m thinking about is quantum attacks.

[00:12:26] Camille Morhardt:  Right.  So if someone designs a quantum computer that can figure out how to decipher encryption and access all of our sensitive data that’s currently protected by secure protocols.

[00:12:36] Tom Garrison:  Exactly.  Now, people may be saying, “why worry about something that is years away?”  Well, to that, I’ll get on my cybersecurity soapbox and say you can’t undo an attack.  Once it happens, you’re stuck.  And I’m not the only one who feels that way, particularly when it comes to quantum computing.  Earlier this year we spoke with Michele Mosca, co-founder, President, and CEO of evolutionQ.  I asked him about this issue of don’t-worry-about-it-till-it-comes, and here’s what he had to say.

[00:13:12] Michele Mosca: That’s not the right analysis. At the very least, you need some mechanism for updating the cryptography to be resilient to these emerging quantum attacks. And really, do I need to worry? In most cases, the answer now is yes. That doesn’t mean panic. It doesn’t mean you have to deploy something or ship the crypto tomorrow, but it means you had better be well on your way through those four stages to quantum readiness. The first is understanding what it means. The second one is, what does it mean to you? The third phase is plan, right? And the fourth phase is deployment; that’s when you’re shipping new product which has these quantum-resistant methods baked in. I think with any moderately important system, you really have to be well on your way and entering that third phase of planning and readiness.

And furthermore, a lot of the hacking we see today is people exploiting generic software bugs, and if you mess up the cryptography, that’s a really bad piece of software to mess up. If you rush that out the door, you don’t need some sophisticated criminal service that hacks into quantum computers; mundane attack vectors can get in. That is perhaps at least as worrisome as the risk of quantum-enabled attacks.
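(Mosca’s “mechanism for updating the cryptography” is often called crypto agility. Here is a minimal sketch of the idea, with hypothetical stub algorithms standing in for real implementations; the point is that the cipher choice becomes swappable configuration rather than something hard-coded at every call site.)

```python
from typing import Callable

# Crypto-agility sketch: register key-exchange implementations behind
# one interface so a quantum-resistant algorithm can be swapped in by
# configuration. Both functions below are hypothetical stubs, not real
# cryptography.
KEY_EXCHANGE_REGISTRY: dict[str, Callable[[], bytes]] = {}

def register(name: str):
    def wrap(fn: Callable[[], bytes]):
        KEY_EXCHANGE_REGISTRY[name] = fn
        return fn
    return wrap

@register("classical-ecdh")
def ecdh_stub() -> bytes:
    return b"shared-secret-from-elliptic-curves"  # breakable by a quantum computer

@register("pq-kem")
def pq_kem_stub() -> bytes:
    return b"shared-secret-from-lattice-kem"      # quantum-resistant candidate

def establish_session(algorithm: str = "classical-ecdh") -> bytes:
    return KEY_EXCHANGE_REGISTRY[algorithm]()

# Migrating is a one-line configuration change once the new algorithm is vetted:
print(establish_session("pq-kem"))
```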

[00:14:24] Camille Morhardt:  Again, that was Michele Mosca, CEO of evolutionQ.  So Tom, is your hope that this issue gets on more people’s radars in 2023?

[00:14:34] Tom Garrison:  Yeah. I think we’ll be hearing more in the coming year about innovations and advances in quantum.  So I expect that, and along with that, discussions and maybe even some tools to address security.

[00:14:48] Camille Morhardt:  I’d also like to add that if people want to learn more about quantum computing in general, because it is a term that gets thrown around a lot without people taking the time to define it and describe the implications, this was a great episode to get up to speed on the topic.  We’ll put links to the episodes we’ve highlighted today in the show notes.

[00:15:10] Tom Garrison:  Yeah, there was a lot of conversation about forests and trees and other things that weren’t technical in any way, but that help you understand the concept of quantum computing.  You know, and by the way, while we’re talking about it, we didn’t have it here in this clip show, but I did want to highlight what I thought, at the end of the year here, was the coolest episode of the entire year.  And that was the interview you did, Camille, with Joscha Bach, where you both talked about machine consciousness.  And I just want to throw that out there to all the listeners: the idea of machines having consciousness, or not having consciousness in this case, and what ethical questions would arise out of that whole dilemma.

[00:16:04] Camille Morhardt:  Subsequent to that interview (well, we just mentioned it in this episode), I spoke with Ria Cheruvu on AI and ethics and asked her about her perspective on machine consciousness; and I also asked Yulia Sandamirskaya, who is a colleague and peer of Joscha Bach’s, also in Intel Labs doing robotics, for her perspective on machine consciousness.  They each have a little bit different perspective on it; they all have an interesting approach to how you would think about it, right?  None of them says, “okay, this is an absolute no way” or “this is an absolute yes.”  Each one comes with an interesting perspective on how you would come to determine that, right?  It’s not about proving it or disproving it in the very specific case.

[00:16:53] Tom Garrison:  Right.

[00:16:54] Camille Morhardt:  It’s like we have to think about things like that.  It’s just a crazy concept that we actually have to think about for real.

[00:17:01] Tom Garrison: Which is why I think it was such a fascinating episode, because it’s not just a simple black and white, right or wrong.  It’s not at all clear how you would even know, or how you would go about trying to answer that question.  So I just wanted to put a plug in there; I thought you did a great job on it, so there you go.

[00:17:20] Camille Morhardt:  Thank you Tom.

[00:17:22] Tom Garrison: You bet.  Alright Camille, so bringing it back to the predictions that we have for this show, we’ve put our flags in the ground with green software and quantum computing, and I’ll add to that now machine consciousness.  And I guess we’ll just have to wait ‘til next year to see if we’re right.

[00:17:41] Camille Morhardt: Yeah, and I think we’ll just continue rolling out the conversations on getting into the details of AI and some of the intersection points.  So we have stuff coming up on deep fakes, stable diffusion, synthetic data; and then we’re going to be looking at this intersection of, for example, sustainability and AI.  One thing is looking at how we use AI to locate mineral deposits that can be used for the batteries we need for electric cars and things like that.

[00:18:11] Tom Garrison:  Yeah, there’s so many incredible topics that we’re gonna be bringing in next year.  I think it’s super, super exciting.  So, thank you listeners for tuning in every week and we wish you all a happy and healthy 2023!
