Camille Morhardt 00:28
I’m Camille Morhardt, host of InTechnology podcast and today I’m going to talk with the CTO of Vianai, Navin Budhiraja. Welcome to the podcast, Navin.
Navin Budhiraja 00:40
Hi Camille. Thanks for having me.
Camille Morhardt 00:41
Navin has a PhD in computer science from Cornell and also graduated from IIT Kanpur, the Indian Institute of Technology. So we’re going to talk about the most important unrecognized and undiscussed trends in AI today. What’s happening right now, across kind of all media, is we’re hearing a lot about generative AI, large language models, Artificial General Intelligence. I’m kind of wondering your perspective on what else. Are we now so focused on these items that we’re missing other important things that are happening right now?
Navin Budhiraja 01:23
Yeah, that’s a really important conversation. I think there are a few other topics, and the one that really stands out, and that touches many of the things we work on at Vianai, is this challenge we have with the AI skillset, the lack of it, particularly in the large companies, the enterprises that we work with. And this has been happening, as you might know, in the high-tech world for some time. But AI really brings it to the forefront.
There are actually two things going on here. One, this is expertise that is in demand, but there is not much available, so one typically needs to pay a premium to get it. And guess which companies are able to pay that premium? The large tech companies, who already have many of the best folks, right. So this creates a challenge for others who need that same talent to actually make AI relevant for the use cases in their companies. But there is a more fundamental thing going on, and I think it has actually been going on for some time now, but again comes out very starkly with AI, especially in universities, where a lot of this talent needs to be trained. The amount of money and resources that is now needed to do leading-edge work in AI is really, really hard for even the best universities to get a handle on. So a lot of this work, and the training that needs to happen, is now happening, again, back in the same companies, right.
So we have this dual issue going on with talent. President Biden’s executive order that came out a couple of weeks back actually mentioned this as one of the challenges, with some of the solutions being national infrastructure made accessible to universities for training, more focus on bringing in talent from outside the US, and immigration policies to support that. So I think that’s one area that really needs to get a lot of attention.
Camille Morhardt 03:32
With that comes, I suppose, this conversation about the democratization of AI, or the concept that you need access to infrastructure as well as data, and perhaps even clean data or traceable data, data where we understand its provenance. How do you think we’re going to be able to achieve that? And I’ll also just ask you, do you think open source is going to be playing a role in this?
Navin Budhiraja 03:59
I hope open source continues to play a role, for multiple reasons, right? If you go back and look at the history of some of the key technologies out there that have played a fundamental role in making new capabilities available to a broader audience, across geographies, across different socio-economic backgrounds: things like the internet, which is open. The dominant server operating system, Linux, which is open. And last but not least, the dominant mobile operating system, which happens to be Android, and it’s open as well. And the really interesting fact is that all of these were built in slightly different ways: the internet with the help of the federal government, Linux organically through a community of developers, and Android through Google, right.
So it is possible to build these fundamental technologies in the open, and I think absolutely the same needs to happen here. But it is becoming a little difficult to understand exactly how to do that, and I think the reasons are complex. One is, as you mentioned, who can do this, because of the challenges with infrastructure. But again, as we said, we have examples of a lot of open source not coming out of the big companies, so that needs to continue. There are still questions that need to be answered, given the challenges and some of the safety issues that have been highlighted about this technology: the possibility, like with any other tool, that it gets into the hands of people who shouldn’t have access to it, whether nation states or individuals. How do we manage that? That is definitely a question, and there are big discussions happening around it. But I do think, if we need to make this available broadly, in the right manner, to the largest number of people, the democratization you mentioned, then open and open source will have to play a big role; it cannot be locked up in a few big tech companies.
Camille Morhardt 06:19
And the CEO of Vianai, Vishal Sikka, has said he thinks the danger with artificial intelligence isn’t so much the intelligence level of AI, but that, as he puts it, extraordinarily harmful things can be done very, very quickly. I’m interested in your take on that.
Navin Budhiraja 06:40
That’s definitely true, and we have seen that. Actually, I don’t know whether you read about it, but there was a model released that, with just a few images, can create a video of you doing something that you obviously never did. And we have seen that models exist that can take samples of your voice and create something completely new that you never said. So now you have video from a few pictures, and I’m sure there are samples of your audio on the internet, and guess what? You have deepfakes plus bots. So I think that is a huge challenge, and it kind of goes back to what I said about open source: the ability for pretty much anybody with a laptop to do this at very, very large scale, with a quality where most of us will not be able to tell the difference. But even beyond that, these models have become really, really good at creating, we’ve talked about video and audio, but obviously text as well, that can sound very authentic. So instead of just having somebody post a few messages, maybe posted by real people, now you can have a bot that carries on an extended conversation with you in your own echo chamber. And that has created problems; we know about that, I don’t want to go into it.
But yes, these technologies are there, and they are easy enough to use already. So without the right kind of guardrails in place, these are questions that are still very open, and yes, they can do the things that Vishal talked about. This technology, just by its nature, doesn’t know when it makes things up, doesn’t know when it lies, and it can be jailbroken even if you have a really great model. You saw what happened with the release of some models very recently: they can still be jailbroken, right. So I think those are very significant challenges that still need to be addressed.
Camille Morhardt 08:45
How is the role of the developer going to change? Is it a fundamental change that we’re going to see now that we’re dealing with AI? I know AI itself writes a lot of code pretty competently, based on instructions in English.
Navin Budhiraja 09:02
I’ve been a developer for a very long time, and it just amazed me when I first saw ChatGPT when it came out, the quality of code that it started writing. And things have gotten significantly better since. So yes, there is going to be a change, but I think of it as two parts. The role of the developer, no question, is going to change. But what we have seen, actually within our company as well with the use of Copilot, and there have been studies released on this, is that if you take some of your best developers, we have not seen great improvement in the quality of the code that they write. What we definitely do see is productivity going up, because they still do a lot of pretty boilerplate stuff, and it helps them not do that, saves time, and so on.
But if you look at somebody who’s not the best developer, somewhat of an average developer, there has actually been an improvement in the quality of the code that they write, right. So that’s definitely a plus. So one can imagine that people who for whatever reason are not as good as others can be lifted up. And this is what we talk about with AI: gen AI in particular has the ability to make all of us better. And I think we’re clearly seeing that in software development.
But if you extend that analogy a little bit, there are a lot of people who do work that could benefit from having a coding assistant sitting next to them. One of my two daughters is an economics major, and she needs to do a lot of data analysis. That includes things like collecting data from public sources, and in many cases the data is not clean, so she spends a lot of tedious time trying to clean the data before she can actually do the analysis. She knows programming, but she hasn’t taken computer science classes; she takes economics and math classes. Imagine, for somebody like that, if you could have a very high-quality assistant for cheap that produces decent-quality code. I think that’s a big win. And that analogy can apply to anybody in any profession you like, be it a finance professional or a marketing person. If they have this available for much cheaper than what it would take for them to learn it, or for somebody else to do it for them, I think that’s a huge plus.
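The tedious cleanup Navin describes is exactly the kind of code a coding assistant typically generates for a non-programmer. A minimal sketch in Python, assuming hypothetical messy survey data pulled from a public source (the column names and cleaning rules here are illustrative, not from the conversation):

```python
import pandas as pd

# Hypothetical messy data as it might arrive from a public source.
raw = pd.DataFrame({
    "region": [" North", "north", "South ", "South", None],
    "income": ["52,000", "48000", "n/a", "61,500", "55000"],
})

def clean(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    # Normalize text labels: trim whitespace, unify capitalization.
    out["region"] = out["region"].str.strip().str.title()
    # Strip thousands separators and coerce to numbers;
    # unparseable entries like "n/a" become NaN.
    out["income"] = pd.to_numeric(
        out["income"].str.replace(",", "", regex=False), errors="coerce"
    )
    # Drop rows that are unusable for analysis.
    return out.dropna().reset_index(drop=True)

cleaned = clean(raw)
print(cleaned["income"].mean())  # average income over the usable rows
```

The payoff is that the analysis step, the part the economist actually cares about, runs on consistent data, while the boilerplate normalization is delegated to the assistant.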
Camille Morhardt 11:44
It’s almost like you’re talking about the PC or the smartphone, and a future where everybody’s going to have sort of their appropriate level of AI assistant or helper. Can you maybe paint a picture for us? I know Vianai specifically looks at helping enterprises adopt and/or adapt artificial intelligence systems. Take whatever horizon you want, three years, five years, seven, something like that, and let’s take the Fortune 100. How are the Fortune 100 realistically going to be deploying AI?
Navin Budhiraja 12:22
You mentioned the analogy about computers. If I look at something like the internet, it went a huge way in democratizing information access. But if you go back further, and I think this is an analogy somebody else used, there will be many analogies for gen AI, and the one that I like best is the invention of the steam engine. In some sense it made available tooling, and over time other things, that democratized human labor to some extent: I could have a tool that helped me do things I may not be able to do on my own with my hands and feet. What gen AI is definitely doing is democratizing expertise, I believe within the next three, five, seven years, I don’t know exactly when, but within that timeframe, across a very large number of industries.
We just talked about software development, but look at the legal profession, the medical profession. That doesn’t mean it’s going to take away jobs, replace jobs; that is a much more nuanced conversation. But a lot of people don’t necessarily have access to good legal advice, and a lot of people don’t have access to the right medical expertise that they need. So I think that’s really the future we want to build towards, where from a personal standpoint we have that access. And the same applies to large companies, in the sense that if you’re a company in a service business, where today you’re providing legal help, research help, help in any of these areas I mentioned, I do think these companies are going to be significantly transformed.
You and I will come to expect that gen AI is part of the offering, in addition to the services the way they offer them today. So we have to support that transition for these companies. The really hard part is figuring out how they get there, because, as we talked about earlier, there’s this issue with talent. Given the large amount of excitement out there, there is a sense of urgency in all of these large companies, and this is not common. I have not seen, in my 30-plus-year career, this amount of urgency and interest, in this case in gen AI, in trying to do something that provides real value, in pretty much every company that we talk to; and we’ve talked to hundreds of companies, if not more, over the last year or so. So one really has to figure out how we get past this talent issue, and how we separate the reality from what I would call the hype. And the hype includes not really understanding what the technology can do for a particular company, be it in the healthcare space, the financial industry, or insurance. The way we do it, at least at Vianai, is to deeply understand the domain, the user, and the use case or business process that they need, right. As an example, we have an offering in the finance space. You go to finance professionals, the CFO or their FP&A professionals, and they spend a lot of time doing financial planning, but they don’t have easy access to a lot of data because it’s spread across multiple systems in the enterprise. And gen AI is, for the first time that I’ve seen, a technology that can go across these different systems, the different data formats, the different user interfaces that they have, and gather data in a way that most people can easily consume.
Camille Morhardt 16:17
If the last race, which I think has not ended, is collecting large quantities of data, because so far it seems we don’t have a great alternative for building accurate models other than providing them with a lot of data, and it might be a really interesting transformation if that changes. But for now, is the next race going to be creating models that are either general or vertical-specific, vertical as in financial services or point of sale or agriculture, something like that, models that can then be customized or adapted by, let’s say, the customer? Or even software vendors that are purchasing, or purchasing access to, some model that gets you 80% there, and they’re doing the last mile to bring it into an enterprise? Or do you think it’s going to evolve where every enterprise is using their own data, the data that they’ve captured, to build their own models? Do we know which way it’s gonna fall out?
Navin Budhiraja 17:28
We actually subscribe to the belief that there’s not going to be a few models, there’s not gonna be dozens of models, there’s going to be thousands of models. Yes, there will be a few of these frontier models from companies like OpenAI, Google, Anthropic, and others; we’ve already seen that. But in addition, there will be open source models in whatever shape or form, with some of the challenges that we discussed earlier. And then companies will take these models and use the right model depending on the use case that they have; in pretty much every use case that we run into, it’s never the case that a single model is the only model used. There’s typically an orchestration involved. And I see a world, at least for the enterprises, where there is an extreme need for accuracy, privacy, and security. These are hard problems for the general-purpose models, just because of the size of those models and the kind of data they have been trained on; a lot of the challenges we see around toxicity and hallucination start with the data these models are trained on, right. So you limit the use of those models to cases where those things are workable, and then have additional models, typically much smaller models, that have none or fewer of those challenges. And there are both possibilities. There are cases where you start with a smaller open source model and then fine-tune it on the company’s proprietary data, so you have a model that works really, really well for the use case.
And there are going to be some companies, though I think somewhat fewer, that build a model completely from scratch. I think it’s going to settle down where you have a few large models, a few models built completely from scratch, but the dominant pattern is going to be small models that are subsequently fine-tuned on the company’s proprietary data.
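The orchestration Navin describes, routing each request to a frontier model or a smaller fine-tuned one depending on the use case, can be sketched roughly like this in Python. The model names, the routing rule, and the stand-in `generate` callables are all hypothetical; a real system would call actual inference APIs and use far richer routing criteria:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Model:
    name: str
    # Stand-in for a real inference call; here it just tags the prompt
    # with which model handled it.
    generate: Callable[[str], str]

# Hypothetical registry: one large general model, one small fine-tuned one.
frontier = Model("frontier-llm", lambda p: f"[frontier-llm] {p}")
finance_ft = Model("small-finance-ft", lambda p: f"[small-finance-ft] {p}")

def route(prompt: str, contains_private_data: bool) -> Model:
    """Pick a model per request: private or domain-specific traffic goes
    to the small in-house model; open-ended requests go to the frontier model."""
    if contains_private_data or "revenue" in prompt.lower():
        return finance_ft
    return frontier

model = route("Summarize Q3 revenue by region", contains_private_data=True)
print(model.generate("Summarize Q3 revenue by region"))
```

The design choice mirrors the point in the conversation: sensitive, accuracy-critical requests never leave the smaller fine-tuned model, while the frontier model handles everything else.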
Camille Morhardt 19:43
You’re Chief Technology Officer at what seems to me to be a very interesting, very forward-thinking AI company. Just out of curiosity, because I think there are going to be a lot of companies cropping up that offer to help small, medium, and large businesses adopt and adapt AI: what questions would you recommend people who aren’t experts ask the CTO of a company when they’re evaluating whether they want to work with that company to help them?
Navin Budhiraja 20:15
At least for the kinds of companies that we work with, and these are, as I said, typically larger companies across verticals like insurance, finance, and manufacturing, they have certain challenges in adopting technology. It’s not just about the scale; many of these are in regulated industries, as an example. They have a lot of different kinds of applications which they have developed and been using over years and decades, and which they have to continue using, at least for the foreseeable future. So within the scope of those types of companies, I think the question is for them to really understand the real value that gen AI can deliver to them at this time. People are clear that they just cannot wait for this technology to mature; they cannot wait for some of the questions you asked about, around regulation or open source, to fully settle. What is really needed is for them to get started. Yes, it might take a little bit of time to get some of these use cases live in production, but they want to use that journey to get to a point where they better understand what works for them and what doesn’t. It’s important for them to figure out, whether it’s a company like ours or whoever they’re talking to, how that partner is going to take them on a journey like that.
And it starts, obviously, with having the right technical expertise, but also, I believe, the right domain expertise in the company that needs to deliver. Because, as I said, in pretty much everything that we’re doing today, you don’t start with the technology. You start with the user problem, the problem that they have, and you understand their business process. In many cases they actually know where the challenges are in the current process, but they need help identifying where to start, given where the technology is today. A company like ours, or somebody else, can help them with that, both on the domain side as well as the technology side, and I think that gets you a long way.
Camille Morhardt 22:29
I want to pivot just to something a little bit more personal. I know you just got back from going to base camp at Everest, can you tell me a little bit about what that was like?
Navin Budhiraja 22:39
It’s hard to describe, either in words or in pictures; it is just such an amazing place. The vastness of it: all around you are these dozens of peaks, most of them 7,000 meters or more. And it’s not just that. This is about a ten-day hike, where you start from a place called Lukla and then go all the way to base camp, about 60 miles over an eight- or nine-day timeframe, climbing about 10,000 feet. So this is a perfect case of the saying that the journey is really a big part of the destination, because while base camp itself is great, you actually cannot see Everest from base camp, because there is another 7,000-meter peak in front of it.
But yeah, as you get there, the landscape dramatically changes. When you’re below the tree line, there’s a diversity in what you see out there: fast-flowing rivers, these tall, high hanging bridges, and the small villages, each village like five houses, that you only see along the way. And then once you go above the tree line, it’s just rocks, the remnants of the glacier. So for anybody who is fit enough and can walk for ten hours, and most of us can do that with some practice, I would definitely recommend doing it.
Camille Morhardt 24:15
Well, thank you, Navin Budhiraja, Chief Technology Officer of Vianai; very interesting. Thank you so much for your time today. Appreciate it.
Navin Budhiraja 24:23
Thank you. Thanks for having me.
The views and opinions expressed are those of the guests and author and do not necessarily reflect the official policy or position of Intel Corporation.