InTechnology Podcast

Makers, Shapers, and Takers: McKinsey Sr. Partner on the Future of GenAI (209)

In this episode of InTechnology, Camille gets into generative AI with Lareina Yee, Senior Partner at McKinsey & Company. The conversation covers the rapid emergence of gen AI, enterprise adoption, and Lareina’s optimistic outlook on the future.

To find the transcription of this podcast, scroll to the bottom of the page.

To find more episodes of InTechnology, visit our homepage. To read more about cybersecurity, sustainability, and technology topics, visit our blog.

The views and opinions expressed are those of the guests and author and do not necessarily reflect the official policy or position of Intel Corporation.

Follow our host Camille @morhardt.

Learn more about Intel Cybersecurity and the Intel Compute Life Cycle (CLA).

The Rapid Emergence of Generative AI

Lareina compares the adoption of generative AI to other rapidly adopted technologies like social media, noting that ChatGPT surpassed even the top social media platforms' adoption benchmarks within days of its release. What makes gen AI so remarkable, she says, is its accessibility: natural language processing lets anyone interact with it, and consumer applications of the models have been distributed for free or for a small monthly fee. As a result, it is now easy for anyone to become what Lareina calls a power user. This rapid emergence has led many to believe we're on the cusp of a major technical transition, particularly in the workplace. She notes this is the first time a significant share of knowledge workers' activities could be augmented with AI. That doesn't mean the technology will take away entire jobs, however. Lareina believes that while generative AI could take over some tasks, humans remain a necessary part of the process, and the more likely outcome is a gradual change in our relationship with machines. She also points out that gen AI is not always the best tool for a given task, but it can certainly help as one tool among many.

Enterprise Adoption of Gen AI

Camille then asks Lareina how she's seeing enterprises adopt generative AI. Lareina explains that most are still in the early stages of awareness and deployment. She foresees enterprise use cases falling into three categories: Maker, Shaper, and Taker. Makers develop their own large language models (LLMs), typically because their business is data itself or is centered on building models. Shapers take a base model and combine it with their proprietary data to create custom applications for their business needs. Takers use off-the-shelf models or gen AI features built into existing workflow software and leverage those features for value in their business.
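
The episode doesn't prescribe how a Shaper wires a base model to proprietary data, but one common pattern is retrieval-augmented prompting: fetch relevant internal documents, fold them into the prompt, and send that to an off-the-shelf model. The sketch below is a minimal illustration under that assumption; the in-memory document store, the keyword-overlap retrieval, and the call_base_model stub are hypothetical placeholders, not any specific vendor API.

```python
# Minimal sketch of the "Shaper" pattern: augment a general-purpose base model
# with proprietary company data via retrieval-augmented prompting.
# The base-model call is a stub; any hosted or on-premise LLM could sit behind it.

from dataclasses import dataclass


@dataclass
class InternalDoc:
    title: str
    text: str


# Hypothetical proprietary knowledge base (in practice: a vector store or search index).
KNOWLEDGE_BASE = [
    InternalDoc("Refund policy", "Refunds are issued within 14 days for enterprise accounts."),
    InternalDoc("Call center playbook", "Loyal customers calling more than twice a month get a follow-up."),
]


def retrieve(question: str, docs: list[InternalDoc], top_k: int = 2) -> list[InternalDoc]:
    """Rank internal documents by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(docs, key=lambda d: len(q_words & set(d.text.lower().split())), reverse=True)
    return scored[:top_k]


def build_prompt(question: str, docs: list[InternalDoc]) -> str:
    """Combine the base model's general ability with company-specific context."""
    context = "\n".join(f"- {d.title}: {d.text}" for d in docs)
    return (
        "Answer using only the internal context below.\n"
        f"Internal context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )


def call_base_model(prompt: str) -> str:
    """Stub standing in for an off-the-shelf LLM (GPT, Gemini, Claude, etc.)."""
    return f"[model response to a {len(prompt)}-character prompt]"


if __name__ == "__main__":
    question = "What is our refund window for enterprise accounts?"
    prompt = build_prompt(question, retrieve(question, KNOWLEDGE_BASE))
    print(call_base_model(prompt))
```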

Above all, Lareina stresses the importance of having a clear business purpose for implementing new technology like generative AI. She adds that companies see the most successful gen AI adoption when they choose two or three specific use cases, invest in them deeply, and then deploy at scale. One example she gives is the reported positive outcomes of giving call center agents AI support. The overall message is that gen AI has the potential to change how work is done. At the same time, Lareina highlights how crucial it is to have a systematic strategy, prioritize the highest-value use cases, and consider the risks before jumping in.

Optimistic Outlook on the Future

The conversation wraps up by touching on Lareina's optimistic outlook on the future of gen AI based on the analysis and research she's done so far. She says it's certainly not too late for companies to start learning more about gen AI and how they can leverage it. At the same time, she doesn't shy away from the fact that leaders need to be thoughtful in deploying AI, considering safeguards and risks at both the enterprise and societal levels. Lareina also comments on the fun side of playing with gen AI models as a consumer. On a personal level, she gives examples of how she uses generative AI in her work, and she shares how she teaches her kids to use AI tools responsibly.

Lareina Yee, Senior Partner at McKinsey & Company


Lareina has been a Senior Partner at McKinsey since 2000. She chairs McKinsey's Global Technology Council and previously served as the firm's first Chief Inclusion, Equity, and Diversity Officer. Lareina is also co-founder of Women in the Workplace, a research partnership between McKinsey and LeanIn.org, and she contributed to McKinsey's Race in the Workplace research study. Outside of McKinsey, Lareina is a board member for Safe & Sound and the San Francisco Ballet. She holds a Master's in International Economics from Columbia University's School of International and Public Affairs.


Lareina Yee  00:12

If Generative AI, generously, could do 60% of what I do, there's still 30% needed to complete the job, and that is me, the human; it's not a single replacement. And so our relationship with machines may change.

Camille Morhardt  00:29

Hi, and welcome to today's episode of InTechnology. I'm your host, Camille Morhardt. Today I have with me Lareina Yee, who is a Senior Partner at McKinsey. She is also chair of the McKinsey Technology Council. She's an expert on Generative AI, and she also served as the first Chief Inclusion, Equity and Diversity Officer at McKinsey. Very happy to have you on the podcast to talk about all things Generative AI and the future of work as it may be, if it still exists. (laughs)

Lareina Yee  01:04

It definitely exists. And thank you so much for inviting me to join you.

Camille Morhardt  01:08

Yeah, so I mean, I just have a phenomenal number of questions. You wrote a report about a year ago that kind of made some predictions for Generative AI, back when not so many people had really dived in and used it. And now I think it seems like almost everybody has, at least in tech circles. And my son just wrote a poem with it the other day; my dad in his 80s is calling me and telling me the differences between the models and whatnot. So it seems like almost everybody can gain access to it. Can you just tell us a little bit about why that is before we kind of dive into a lot more technical nuances?

Lareina Yee  01:48

Absolutely. So one of the things is it has been the fastest-adopted technology we've seen in multiple decades; compared to Facebook, WhatsApp, TikTok, or whatever benchmarks you might have used for things that consumers adopted quickly, ChatGPT kind of hit it out of the park within days. So the question is, why is that? And even 18 months later, how is it that your father, your kids, your neighbors are all telling you about their experiences and using it in very substantive ways?

I think one of the most remarkable things about this technology is that it is truly accessible for everybody, of any age and any language, to be a power user. Essentially, using the technology (we can talk about creating it separately) means that you have a core skill around asking questions, and that is something everybody can do. Because of the natural language processing, you can do that in any language (French, Spanish, English, Hindi), in any accent, and, obviously, in programming languages as well; we can talk about some of those use cases later. And the fact is that right now, a consumer application of this has been distributed for free, or, you know, for a small monthly fee for some of the more advanced models. That is just extraordinary. And I think the on-ramping, so to speak, takes two or three questions. They can be things like, "help me plan a trip to New York," "write a poem," "draw a picture." You can become a power user very quickly. So I think that's part of what has been so spectacular.

Camille Morhardt  03:36

So another thing you mentioned in your report that's a bit unique, and I don't know if you phrased it exactly like this, is that this is really the first kind of AI that's targeting knowledge workers, as a tool or as a replacement (we can discuss that later). For many years we've all been thinking about, arguing over, and making policy decisions around manufacturing, assembly lines, and jobs that could be automated with machines or AI. Now we're talking about it entering a different realm.

Lareina Yee  04:17

Absolutely. So there’s a lot of different ways people have framed this–the new cognitive revolution, the fourth industrial revolution. But if I were to simplify that out, if we go all the way back to the steam engine, or if we go back to the Model T/Henry Ford, there have been moments in history, where technology has allowed us to change how we work, and it has addressed parts of labor, and that has led to significant amount of productivity and growth in the economy. And this is one of those technology moments.

In our history, if we look at the Industrial Revolution, or at the digital technologies we've been working with over the last 20 years, all of those have addressed transactional labor or production labor, like, you know, farming and physical labor. This is a technology that addresses tacit knowledge, knowledge worker type things; and "knowledge worker" is fancy for: you are using judgment, you're using information, and you're putting that together in service of your job. Those can be manager jobs, service-oriented jobs, lawyers, physicians, consultants. It's a portion of the economy where we've certainly seen technology support knowledge workers, but this is the first one where a significant number of their activities could be worked, could be exchanged, could be augmented with machines.

It's also a technology that is not taking away whole jobs. And this is probably something we've talked about: the way that we did the research is we looked at what activities, in its largest possibility, Generative AI could support or replace, but that doesn't mean replacing a job. So if Generative AI, generously, could do 60% of what I do each day, there's still 30%, in order to complete the job, that is me, the human. And I think there's a fascinating debate about how this is a tool very different from other types of automation technology, where at its best it's actually something that is working together with humans. And that's also very different. It's not a single replacement, or a single shift. It's actually thinking about work differently with machines. And so our relationship with machines may change.

Camille Morhardt  06:50

I have many more questions right there, but I’m gonna break and just go back to the basics for just a second of how it operates. So how does Gen AI actually work? And what can it do that’s new and different from other AI that exists already?

Lareina Yee  07:08

AI has been in the works for the last 40 years; AI is not new. But what's new is that, prior to this, it was something we thought of as analytical AI. For every piece of information, you would have to tag it; it's very structured. And when we put it in a structure, we're able to do amazing things with data. The really cool thing is that these large language models are able to take structured and unstructured data, which means they can take everything on the internet, as it's described (things on Reddit, things in Google search), and very quickly process that information. So that in and of itself is the first thing that's pretty cool. It's able to get to unstructured data and information and images, things that were very difficult for us to organize and put into a model. So that's the first thing.

The second thing is that what it's doing is anticipating what comes next, and that's the training it goes through. So if I have the phrase "it's raining cats and…," we all know that the phrase is "it's raining cats and dogs." What the large language models have been trained on, over time, is that it's not raining cats and cats, it's not raining cats and birds, it's raining cats and dogs, and they know that "dog" is the most likely next word. So it's not saying, "I predict that the phrase is 'it's raining cats and dogs' with a 99% probability"; it's saying that, knowing the sequence of the words and being trained on what to expect, it's likely that the next word will be "dog." It can also make some mistakes, and we've seen a lot of cases where it can, you know, the word would be "hallucinate." But that's kind of the core model piece of it.
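
As a rough illustration of that "most likely next word" idea, and only an illustration (production models use neural networks over tokens, not simple word counts), the toy Python sketch below shows the same intuition with frequency counting; the training sentences are made up.

```python
# Toy illustration of next-word prediction: count which word follows each
# two-word context in some training text, then pick the most frequent continuation.
# Real LLMs learn these patterns with neural networks over tokens, but the
# "most likely next word" intuition is the same.

from collections import Counter, defaultdict

training_text = (
    "it's raining cats and dogs . "
    "it's raining cats and dogs again . "
    "she has cats and birds at home ."
)

tokens = training_text.split()
counts: dict[tuple[str, str], Counter] = defaultdict(Counter)

# Record how often each word follows a two-word context.
for a, b, nxt in zip(tokens, tokens[1:], tokens[2:]):
    counts[(a, b)][nxt] += 1


def predict_next(a: str, b: str) -> str:
    """Return the most frequent word seen after the context (a, b)."""
    following = counts.get((a, b))
    return following.most_common(1)[0][0] if following else "<unknown>"


print(predict_next("cats", "and"))  # -> "dogs" (seen twice vs. "birds" once)
```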

And then what we talked about earlier is how we as humans interface with it: we give it prompts or questions. You can have a dialogue; you can say, "Help me take these words and write a poem about this." "Now, let me see what that poem would be like 50% shorter." "Let me see what that poem would be if I wrote it in Shakespearean English." "Let me see what that would be like if it were a haiku." And the reason you'd want to do this is maybe, as the person, you know, as the high school student, you want to play with different ways to express an idea. You could have technology help you with that and prototype things and make things shorter and longer. But, by the way, in all of those cases you're still in the driver's seat, because you're asking the questions and you're deciding what you want to push forward as the thing you share as your poem of choice.

Camille Morhardt  09:57

So to push on that just a little bit: it's pulling from known or historically written information to know that it's "cats and dogs;" like you said, it's not predicting, it knows that that is, most of the time, what comes next in a sentence, and it's then delivering that to you. How does it fare in unknown situations? Like when we're developing something new; and I don't just mean a poem, where, you know, there isn't really such a thing as accuracy (I mean, there is in terms of following the format of a haiku). But what if you're trying to look at a weather pattern, or you're doing drug development or something, and it's looking for something that it doesn't have a next word for?

Lareina Yee  10:42

Yeah. So in those cases, it depends. You know, in drug discovery, because that's one of the hero use cases you hear a lot about, I think you have to break it down a little bit further; it depends what you're trying to do. There may be cases where analytical AI is absolutely the thing you need. So I think it's important to know Gen AI is not the answer to all questions; it just opens the aperture of what we can do. In drug discovery, to take that example, one of the things in the early stages of research is that you need to look at tons and tons of global research. Let's say you want to canvass 500 academic articles, structure that process, see what some of the key points are, and test different ideas. Generative AI can take that known information; the idea is you could take those 500 research papers and start to parse through them very quickly. And still, you're in the driver's seat to decide, "Okay, if these were the most salient points, how do I want to do that?" But the idea is that you would have a 24/7 research team that's helping you. You may not, as a human, have been able to get to that many papers. I'll make it up, maybe you were only—

Camille Morhardt  11:54

In all different languages, too.

Lareina Yee  11:56

Exactly, in all different languages, at the speed of an hour. So in that moment, Generative AI would be the right tool. There may be other things across that process of developing and testing drug solutions for disease where you may need analytical AI, or other types of technology altogether, like quantum computing, for the processing. But if we step back, some of the ways you can use generative AI make things happen much faster and at a scale we haven't had the ability to reach. It's not the only tool that you would use, though. If my question is "how do I cut years off of an average 10-year cycle to have FDA-regulated and approved drugs available for major diseases?" then to take years off, I'd probably use multiple technologies, and this might be one that, at certain stages, helps a lot.

Camille Morhardt  12:55

So I should ask you: how are enterprises adopting it? And what would be your advice for, let's just say, a Fortune 100 company? Should they be allowing people to use any model they find out there to feed data into? What are some of those concerns? Should they be adapting a model and then restricting it to their own information?

Lareina Yee  13:20

Yeah. So we're in the very early innings of enterprise adoption. I think what we saw over the last year was just a lightning rod of awareness; it was pretty exciting to see how curious, open, and innovative the spirit was in understanding Generative AI. In deploying Generative AI, we all need to be incredibly judicious and thoughtful about how we do it. Maybe just to share, there are a couple of things. One is, we do think, over time, companies' use cases will fall into three categories: Maker, Shaper, and Taker. A Maker is someone who develops their own model, their own LLM, and oftentimes the scale and the expense of doing that will be prohibitive. But if your entire business is data, or you are building these models, those are the cases where you create a model.

Then there is Shaper. Oftentimes what you're doing there is you're taking ChatGPT 4.0 or Gemini, you're using that base of public knowledge in the enterprise, and you're combining it with your own proprietary data to create different types of applications that help your business. I think that's the one we're gonna see a lot of energy around. Then the last one is Taker, which is almost like off-the-shelf-ready software. It may be that you want some basic, or even just somewhat more advanced, searching capabilities over external information, so you provide access to one of the large language models, like Claude, to your workforce. Or it may be that there are software providers (and we've already seen a lot of this happen) that have incorporated generative AI capabilities into their workflow software. Think of Salesforce or Adobe or Workday; the list goes on. This is software you're already using, generative AI capabilities have been built in already, and you want to take advantage of that. You want a faster email; you want faster marketing campaign development; you would like to be able to look at your performance review data in a different way; you would like to be able to summarize a meeting. Like, if we're videoing this, I would like summary notes that I'm going to send back to you. These types of capabilities would be embedded in software, and you would say that's a Taker model: I take that as is and I leverage it for value. So that's a little bit of how we see the different cases.

But I think what's really important to know is that traditional business principles never go out of style. The first thing that's really important is that we're not going to deploy technology just for technology's sake; we're gonna deploy it for a business outcome. So having a very clear business case and purpose for why you're implementing the technology is a really important question. For example: we would like to see a 20% productivity lift in our software development, because we think that if we do application modernization with Generative AI, and if we do QA testing with it, that's going to make a huge difference. So I think the businesses that are doing really well have a really clear view of what they are trying to achieve and what the business outcome is. That also means that, over time, those projects will have accountability.

And then the next thing that's really critical is that, instead of letting 100 flowers bloom, we are observing that the companies that are doing better are picking two or three use cases and deeply investing in them to deploy them at scale. That's really important, because it's easy to pilot things and to sort of show the shiny object, but what's more valuable for companies is reaching in deeper so that you actually have an economic outcome. And that relates a lot to the report that we published last year, where we see the frontier of this being $4.4 trillion of economic potential for the global economy. That's not this year; that's in 10 or 20 years. It takes a while for economic productivity to get realized. But if I'm a business, I look at that $4.4 trillion of economic value and I say, "How do I start to capture a sliver of that today? How do I use this to modernize and innovate how I operate?"

Camille Morhardt  17:50

So one way to kind of go about this is to pick an area, be it software development within your company, or let's say marketing, and you say, "Okay, let me give either the software developers or the marketers access, and maybe it's pre-trained or customized to know all publicly known vulnerabilities, so that automatically that's already going through in the development process; or I'm going to let my marketing people do their first draft of any kind of comms they're sending out using this thing." And you're gonna see an improvement of, I don't know, between 3% and 60%, I guess, you know, depending on the person. You tell me?

Lareina Yee  18:35

Well, personally, I think we want to narrow that down a little bit. If you take call centers, for example, there's been some great work by Stanford, and also terrific work by our own teams, that's looked at the productivity uplift. And I would say what we see in the call center is this: when you call and you have a question, you talk to a person, you talk to an agent, and what the AI solution is doing is providing that agent the scripts and the support to personalize the response to that person, or to have more contextual information processed super quickly. So, "ah, this is the sixth time this person is calling, this person is a very loyal customer, and this person had this problem last month." So that type of solution for customer care.

We find a couple of things in the research we've seen, and others have seen as well. One, the call resolution time is faster. Two, the customer satisfaction, or NPS, is higher. And the third one, which is really interesting, is that the call center agents are happier and embrace the technology. The reason is that they believe that with this, they can do their jobs better. They're doing better each week. They're getting coaching, they're getting support from AI, and they're getting through their calls; their win rate on the calls, or their success rates, are much higher. And if you think about that, if I can do my job better, I would embrace it. In particular, we found lower-tenured call center agents really liked it, because it helped them gain more expertise quickly. So what might have taken them a couple of years, they were doing over the course of a quarter.

You know, and that’s not going to be the case for all AI use cases. So I want to be realistic about it. But in this case, it’s an interesting example of how practical the solution can be. And by the way, in all these cases, this is humans working with machines versus humans being replaced by machines, which is a really important distinction of how companies are using the technology in this early stage and with the capabilities that are available now.

Camille Morhardt  20:43

Is this going to create more of a fundamental or infrastructural-level change in how work is done? Because what you're talking about with the call center example is sort of like giving somebody in finance Excel. You know, now they have a spreadsheet, and they can do everything faster and better and run all kinds of– So it can be a huge boost for the individual. But is there anything that's going to– I'm trying to think of something like the Internet, where, yeah, it makes every individual smarter, but it also rearranged how supply chain logistics happen globally. You know, is ChatGPT kind of in that category? And how will it do that?

Lareina Yee  21:23

So I think whether it's in that category is yet to be seen; that is the work that we will see over the next couple of years. But does the technology have the potential? It does. And so you move from thinking about tasks and how tasks change, to thinking about how my job as a call center agent or my job as a manager changes, to thinking about how the system can change: my interactions with you and other people. And then you start to say, "Wait, I could actually change the workflow or the business process for this." That's not all going to happen in one step. It's going to take a couple of steps to get there. But I think that over time, we would be able to see some of that.

Camille Morhardt  22:08

Are companies, you know, giving playgrounds to their employees and then looking at– are they analyzing what people are putting in and going, "Oh, it's all our marketing people," or "all our coders," like, "we should focus here or there"?

Lareina Yee  22:21

Companies are taking different approaches. Those who have deployed, like, ChatGPT or something to their workforce may be looking at that just as you said: "How is it being used? Let's learn together." Some companies are deploying specific solutions as opposed to just giving the tool out as an all-purpose thing. So it does depend.

It also depends on whether it's a regulated or non-regulated industry. It's actually very complicated, if you're in a regulated industry, to just give every employee access to the tools; you have to make sure that the data and the prompts and everything stay in your environment. So, as you know, it's complicated. That's why I said it's still systematic work, right? You still have to do all the steps to make sure that when you deploy these tools, it's safe in the business environment for your users.

Camille Morhardt  23:05

You mentioned you think focusing efforts within an enterprise makes more sense than just kind of willy-nilly. But do you have more advice or more specifics within that kind of focus? Like, even just to figure out what you should focus on, is it just a guess at ROI before you deploy? Or is it–

Lareina Yee  23:22

I would do a systematic strategy and a business prioritization to get to the use cases of value. And I would make sure that your business leader and your CTO have signed up to build it together. I think picking the right use cases is a strategic decision. For example, given the risks of generative AI, you may not want to do something that's customer-facing; you may want to do something that's an employee application first. So there are things that are not just about what has the highest ROI; it will also be about your risk and your underlying infrastructure and your data. So I think it varies, but I do think it's better to do a couple of things well than 100 things. I just don't think this is the type of technology, at this maturity, that you just kind of throw out there.

Camille Morhardt  24:15

You were kind of a leader in writing about this a year ago and doing a ton of analysis on it. What have you learned in the last year?

Lareina Yee  24:22

Well, the first thing I've learned is that I'm always learning, and we're all learning together. I learn as much from working with technologists as from our clients. So first, we're all in it together. If anyone feels a sense of FOMO, fear of missing out, or, you know, too late to the party, I would say don't worry; you can still come in and, frankly, learn off the back of what we've all learned in the first year or so.

The other thing I've learned is humility. The technology is incredibly powerful, and we have to be very thoughtful as leaders in terms of how we deploy it, how we use it, how we safeguard it. As much as I'm an optimist and a pragmatist on technology, there is an underbelly of risk. That can be at the enterprise level; that can also be at the societal level. There's been some good global discussion on this, and we need to continue to support that.

There are also societal questions in terms of access: who has access to the technology? I mean, I'm so thrilled that some of the most amazing minds sit in Silicon Valley, but that's one part of the world; we have to think about how it's able to reach all corners of the world and all populations. There are a lot of really important questions that are human questions that have to come alongside, and we need really smart and experienced people to be working on that.

The other thing I think of is fun. You mentioned at the beginning your father, who's 80, your high schooler, it sounded like, yourself; you're all using it, probably your colleagues too. Because it's something that so many can use, the technology is fun. It's fun to create a poem. It's fun to gain expertise. It's pretty cool that we're testing how the technology does against standardized tests in the United States and, you know, what types of knowledge it is good at and what types it is not. So I do think there is an aspect of playing with the technology as a consumer.

And the last thing I feel is very optimistic, because there are a lot of doors we don't know will open with this technology. We've talked about it like the internet. Some people talk about it like the iPhone moment. I had no idea, with my first iPhone, how many applications there would be; how much (you can think of that as positive or negative) I would rely on the device; and how many different things, even simple human things, like the fact that when I'm traveling I can see pictures of my kids. And we also had no idea about the surrounding economics. You know, you think about the cost of storage; we couldn't afford any of that before. So there are things we may not see in our experience as consumers: so much of the ecosystem and the technology around this has also developed, has become more affordable, has become faster.

And so it’s hard to foresee all of those types of things that will happen. But I think, being at the first inning of it, it’s pretty exciting to be part of it. And I think it’s something that will be with our kids and our grandkids. And so we have a responsibility as business leaders and society leaders to be very thoughtful about it.

Camille Morhardt  27:58

How do you use it?

Lareina Yee  27:56

I use it at work. We were very early to develop a model. McKinsey is almost 100 years old; we have all of our unstructured and structured information in one place. And so I use it as a tool to help me problem-solve and think. I also use it on a personal level; I'll play with all the different models on things like organizing things at home or preparing for a trip. And I also use it to play with some of the creative kinds of things. I'm not very capable as an artist, but it's really cool to use the tools to say, what kind of picture would this be? Or how would I express something differently?

Camille Morhardt  28:40

Yeah, and because you mentioned you’re a parent, is there anything that you would want your kids to not use it for?

Lareina Yee  28:48

For my kids, who are all in school, academic integrity is the number one value. So I expect that they are using it as a tool, not as an answer. 100%. Those are our values. And I think there's also learning, at kind of a finer-tuned level, how to use tools responsibly. Just like you use the internet responsibly, I think there are important things about that that we will have to learn together. You know, there's always a debate for parents about technology and how much, and are there downside effects. At its simplest level, I do want my children to be literate in technology so that they understand how to use it. And that's actually really important for them.

Camille Morhardt  29:36

Thank you, Lareina Yee. Very interesting, thought-provoking, and personal. Thank you for sharing how you think about generative AI at work, in your personal life, and in the lives of your kids, too. I appreciate that.

Lareina Yee  29:48

Thank you for having me. It’s great to talk.
