
FEMI OKE: Technology is embedded in every aspect of our daily lives, from the way our cities run to how we live at home and work.
LIZZIE O’LEARY: One of those major shifts includes the growing use of AI-powered agents.
FEMI: But is the world ready for it? And how can we trust what these agentic AI tools do?
LIZZIE: From PwC’s management publication, strategy+business, this is Take on Tomorrow. I’m Lizzie O’Leary, a podcaster and journalist…
FEMI: …and I’m Femi Oke, a broadcaster and journalist.
LIZZIE: Today, we have a very special episode for you. We’re on the ground at the annual World Economic Forum meeting in Davos, a gathering of all the biggest thought leaders and business leaders from around the world, where we discuss trust—and responsible AI in the age of agentic AI.
FEMI: So what exactly does that mean? To find out, let’s join our host for the day, PwC’s Sarah Von Fischer.
SARAH VON FISCHER: We’re here in Davos for a special episode of Take on Tomorrow. Today, we’re discussing all things agentic AI, including the importance, of course, of building trust in these systems. And to do that, today I’m joined by PwC’s Matt Wood, Global and US Commercial Technology and Innovation Officer. Matt, thanks so much for being on with us.
MATT WOOD: Thanks for having me. Appreciate it.
SARAH: So, Matt, we’re here in Davos, and you can’t get away from this year’s theme, which is “Collaboration for the Intelligent Age.” So tell me, what does that mean to you when we’re thinking about it in the context of all things AI?
MATT: One of the really unique and interesting parts of Davos is, number one, we’re in the middle of nowhere. I think that’s very deliberate, because it really does allow you to, kind of, take a step back and think about where technology and society are going, and what life looks like in a world that is uniquely enabled through an abundance of intelligence. How does that impact individuals, organizations, and society? I probably came to Davos expecting there to be just a little bit of AI fatigue. That has not been the case at all. A lot of the folks attending feel an increased level of urgency to deliver on the opportunity of artificial intelligence. And the benefits here are going to be things like delivering robust new models of healthcare and wellness. How can we use artificial intelligence to deliver personalized care? How do we ignite new levels of human knowledge and creativity? How do we channel artificial intelligence into some of the hardest problems that we are facing from a scientific and engineering perspective? The models, as we’ll talk about, are getting better and better at the fundamentals of mathematics. So how do we channel that to drive new, open-ended scientific research? And ultimately, how do we use this to improve the world for all of us? Those are a lot of the conversations we’ve been having here at Davos.
SARAH: And something that has come up, which I know you’ve been working on for a few years now, is agentic AI, or AI agents. So, for those who might not know what that is, can you explain how it’s different from, say, GenAI? Is it different from GenAI? Can you talk us through that?
MATT: Sure. So you can kind of think of it like this: the first time you start playing around with artificial intelligence, you may have used ChatGPT, and you’ll use it a lot like a Google search. You ask a question, and you get an answer. Over time, you start using it more for back-and-forth conversations. So: ask a question, get an answer, discuss the results. And that’s really useful for a lot of different things. I personally use it a lot for brainstorming. I’ve got an idea; the AI doesn’t care. I can send it all of my terrible and good ideas, and it can work with me to, kind of, improve them and find the ones that are good and the ones that are bad. What agentic AI does is take it a step further. Same underlying technology, but instead of just asking questions or chatting, you’re giving the AI an objective. So you say, “Hey, book me a flight to Davos.” The AI takes a look at that, it understands it, and it’ll create, basically, just like a human might, a to-do list. Then it studiously works through that to-do list to meet its objective. Things like booking a flight, it’ll be very good at. But there are other, more open-ended problems it can approach, because it can just keep running. It can go round and round and round, adjusting its strategy based on what it wants to achieve. So, for example, you could say, “Hey, I’ve got $100, turn it into $1,000,” and just let the AI agent do its thing and see what the results are.
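The loop Matt describes, an objective turned into a to-do list that the agent works through and revises, can be sketched in a few lines of Python. Everything here is illustrative: call_llm stands in for whatever model API you use, and the prompts are reduced to the bare plan-execute-adjust pattern, not any particular product.

```python
def call_llm(prompt: str) -> str:
    """Placeholder for a real model API call (hypothetical)."""
    raise NotImplementedError

def run_agent(objective: str, max_rounds: int = 10) -> str:
    # 1. Turn the objective into a to-do list, just like a human might.
    plan = call_llm(f"Objective: {objective}\nWrite a step-by-step to-do list.")
    history: list[str] = []
    for _ in range(max_rounds):
        # 2. Work through the next step of the plan and record the result.
        result = call_llm(
            f"Plan:\n{plan}\nDone so far:\n{history}\n"
            "Carry out the next step and report the result."
        )
        history.append(result)
        # 3. Check progress against the objective; revise strategy if needed.
        verdict = call_llm(
            f"Objective: {objective}\nProgress:\n{history}\n"
            "Reply DONE if the objective is met, or REVISE: <new plan>."
        )
        if verdict.startswith("DONE"):
            break
        plan = verdict  # the agent keeps running with an adjusted plan
    return history[-1]
```

In practice the loop would also need tools (search, booking APIs, code execution) and guardrails on spend and retries; the point is just that the same underlying model, given an objective instead of a question, keeps iterating until it meets it.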
SARAH: It seems like there’s a world of possibilities, I guess, with agentic AI. So, what are some of the examples of what it could do for your business, perhaps, or maybe even, you know, society?
MATT: One area where agents are used today that’s probably the most compelling, and where they have the most capability, is in developing software applications. What’s interesting is that, today, AI agents make a really good entry-level developer. So you can set a task for your AI developer agent. You can say, “Hey, I want to build a new software application for my team.” And the AI will run around and build your software application. It’ll put the UI [user interface] together. It’ll write all the code. It’ll deploy it into your internal systems. It’ll write the documentation and the tests. It’ll write all the integration pieces. It’ll generate training material to help you use it and spread the word inside your organization. That’s a very, very different world for most organizations. And over the next 24 to 36 months, those AI agents are going to be about as good as a mid-level engineer, and after that, about as good as a principal engineer. The way that you actually use them amplifies the expertise of the software developers you already have. But it gives a very convenient shortcut to a lot of industries that are going through transformation, letting them speed up and leapfrog using software.
SARAH: And I think then the next part of the conversation—and we’re hearing this in Davos as well—is what about trust? What about trust in these systems as they’re, kind of, off and running? Do you think about trust a little bit differently with agentic AI? And how do you make sure that those principles of trust are ingrained in those systems?
MATT: You can spend an infinite amount of money and have an infinite amount of technology, and you can solve all of your problems. But if you are not also investing in trust in that system with the people who are actually going to use it, you will see zero return on that investment. Because people actually have to use the system to get the job done. You know, there’s a sense that AI is somehow coming for our jobs, or that there’s going to be a big decrease in the workforce. And for sure, the work, and how we do that work, is going to change. But over time, AI makes you better at the thing that you do. It doesn’t replace your ability to think; it makes you a better thinker. It doesn’t replace your ability to write; it makes you a better writer. It doesn’t replace your ability to solve problems; it makes you a better problem solver. So think of that writ large across your whole organization. That’s the opportunity ahead of us. And without trust in the systems, even if they can solve those problems and be great writers, you simply won’t get any return. A big part of that trust is driven through transparency into what the system does well. Rather than just telling people to trust it, a much better approach is to say, “Hey, we evaluated this in a deep way, and here’s what it does well, here’s where you should use it, here’s the data we saw, and here are some areas where it didn’t perform as well. Don’t use it there.” The other important part of building trust is to explain what the system is doing as it’s doing it. A lot of more modern AI systems can actually verify their approach as they go. So they can say, “Hey, I just generated this result for you. But when I checked it against the policy, it didn’t match. So let me go ahead and try again.” Exposing that to the user can be very useful, because you think, OK, the AI made a mistake, but it caught it and it’s correcting it. And so, augmenting what we think of as large language models today with other styles of intelligent system, including things like automated reasoning, which gives you the ability to inspect the workings, is a really fruitful area, and one that I think will build trust over time.
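The self-checking behavior Matt describes, generate a result, check it against a policy, retry, and show the user what happened, fits a simple loop. This is a minimal sketch: generate and check_against_policy are hypothetical stand-ins, not a real verification API.

```python
def generate(task: str) -> str:
    """Hypothetical model call that drafts a result."""
    raise NotImplementedError

def check_against_policy(result: str, policy: str) -> tuple[bool, str]:
    """Hypothetical verifier: returns (passed, explanation)."""
    raise NotImplementedError

def generate_with_verification(task: str, policy: str, max_attempts: int = 3) -> str:
    for attempt in range(1, max_attempts + 1):
        result = generate(task)
        passed, why = check_against_policy(result, policy)
        # Exposing each check to the user is the trust-building step:
        # they can watch the system catch and correct its own mistakes.
        print(f"Attempt {attempt}: {'passed' if passed else 'failed'} policy check ({why})")
        if passed:
            return result
    raise RuntimeError("No policy-compliant result produced; escalate to a human.")
```

The design choice worth noting is that the failure message is surfaced rather than hidden: a visible "it didn't match, let me try again" does more for user trust than a silently retried answer.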
SARAH: And when you’re having conversations with clients, or here in Davos, what is your advice to them to make sure that these systems are implemented responsibly?
MATT: With responsibility, the risk is that you think of it in quite a narrow way, specifically around ethical use. That’s an important element of responsibility, but it’s just one element. So, as you’re rolling out a system, you need to understand: is this a reasonable use for this technology? Would society accept this use? Does it improve the world in a meaningful way? But responsibility is actually much, much broader than that. It starts, kind of, with the data that you’re using with the model. Do you have good governance around that data? Are you maintaining its privacy and security? Are you being a good steward of it? Do you have written policies, and a good understanding of them? And do you have technical limits on who can use that data, with what service, for what purpose? That’s a really important part of responsibility. Another important part is the models themselves. Are they fit for purpose? Are you using the data and the models in such a way that you understand, as we were just saying, what they’re good at and what they’re not good at? Then there’s the way that you use the output. Are you validating it? Is it going through manual review? Is it completely automatic? Does that make sense, given the risk of the problem that you’re trying to solve? That’s an important part of responsible use. So responsibility really is, like, everybody’s job.
SARAH: When we think about agentic AI, are we seeing businesses use this? And do you think if they’re not, is 2025 the year to really make sure you’re incorporating them?
MATT: Yeah, for sure, people are using these capabilities today, particularly in software engineering. I honestly think that if you’re building software today, in any context, and you’re not using agentic systems, you’re at a massive competitive disadvantage. These systems are so capable, so ubiquitous, so easy to set up, install, and run, and so impactful. You can get an immediate 25, 35, 45% increase in the work that you’re doing. The outputs are usually more secure than humans can produce alone, and other humans find the code more readable. So there are lots of advantages. If you’re building software today and you’re not using these systems, you are already at a disadvantage to your nearest competitor. That’s one area. We’re also seeing a lot of use with our clients in things like contact centers: being able to automatically resolve incoming questions with agents. Not only are they cheaper, not only are they faster, but if you look at the results, the actual CSAT [customer satisfaction] scores from your customers are higher. Because customers are spending less time waiting, and because the results are more likely to answer their question, they end up more satisfied with the agentic systems. So, just like with software development, you get to have your cake and eat it too with agents in contact centers.
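One common shape of the contact-center pattern Matt mentions is an agent that resolves what it can and hands the rest to a person. A minimal sketch, assuming a hypothetical resolve_with_agent call that returns a confidence score; the threshold and human fallback are illustrative, not any client's implementation.

```python
CONFIDENCE_THRESHOLD = 0.8  # illustrative cutoff for auto-resolution

def resolve_with_agent(question: str) -> tuple[str, float]:
    """Hypothetical agent call: returns (answer, confidence)."""
    raise NotImplementedError

def escalate_to_human(question: str, draft: str) -> str:
    """Hypothetical routing to a human rep, with the agent's draft attached."""
    raise NotImplementedError

def handle_ticket(question: str) -> str:
    answer, confidence = resolve_with_agent(question)
    if confidence >= CONFIDENCE_THRESHOLD:
        return answer  # resolved instantly: no queue, no waiting
    # Even on escalation, the agent's draft saves the human rep time.
    return escalate_to_human(question, draft=answer)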
SARAH: And as we’re looking ahead to 2025 here, what else is on your mind for things to come with AI?
MATT: Yeah, it’s a good question. You know, this is a technology that’s moving very, very quickly. Technology capability over time tends to follow an S curve. It pootles along, you get incremental improvements, until some inciting event causes an exponential increase in capability. And then, over time, the top of the S curve tails off and you’re left with diminishing returns: what you’ve got is pretty much what you’re going to have to work with. The challenge, of course, is that you never know where you are on the S curve, because you can only see it looking backwards. Most people, I think, would say that we’re in the high-gradient part of AI today. We’ve got new approaches and methods and models appearing from industry and academia seemingly every single week. But for my money, I think we’re in the bottom left-hand corner of this curve. I don’t think we’ve hit the inflection point yet. There’s a chance that we will come across that inflection point this year. If it’s not this year, I guarantee it’ll be next year. Agents are one of those really important technologies that are going to drive compounding increases in capability. So that, you might say, is a good indicator that you’re going to hit that hockey stick. But there are others. The one I’m most excited about is the advent of what are called reasoning, or thinking, models. These are models that don’t just answer questions in a kind of stream of consciousness. Instead, you give them time to think, and as a result they can solve more complex problems, just like humans. As we’re sitting talking today, we have a general sense of the points that we want to make. But the words, as they come out of our mouths, are kind of stream of consciousness. I’m not planning my sentences ten ahead; the words are coming out as I’m thinking. And that’s how these AI systems have traditionally worked. You can even see the words streaming in as they’re typed back to you. With thinking models, you basically instruct the model to stop talking and start thinking, and you give it as much time as it needs to think through the problem. And you can show that the difficulty of the problems these models can solve increases exponentially with the time given for thinking. So this completely changes how you would design an intelligent system. To give you a sense of where we’re at: the area where they are most successful is solving complex mathematical problems. To the extent that we didn’t have sufficiently complicated mathematical problems to test the models with…
SARAH: Oh, wow!
MATT: …they were already solving, this year, the problems that mathematical Olympiad competitors would take, kind of, 24 hours to solve. So we needed to go off and create harder problems, harder mathematical benchmarks, to test the AI against. These are the sorts of problems that would take a professional Olympiad-level mathematician days to solve. So it’s pretty exciting.
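The distinction Matt draws, streaming an answer straight out versus spending time thinking first, often comes down to a single knob. A sketch of that idea, with an entirely hypothetical answer function and thinking_budget parameter; no real model API is implied.

```python
def answer(prompt: str, thinking_budget: int = 0) -> str:
    """Hypothetical model call. With thinking_budget == 0, the model streams
    an answer straight out, token by token, like speech. With a budget, it
    first spends up to that many tokens 'thinking' before it answers."""
    mode = "thinking" if thinking_budget else "stream-of-consciousness"
    return f"[{mode} answer to: {prompt!r}]"  # stand-in for real output

# A quick factual question needs no thinking time.
print(answer("What is the capital of Switzerland?"))

# An Olympiad-level problem gets a large budget; the observation Matt cites
# is that solvable problem difficulty grows sharply with thinking time.
print(answer("Prove the inequality holds for all n.", thinking_budget=50_000))
```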
SARAH: Yeah, a lot of excitement on the horizon for sure. Well, Matt, thank you so much for joining us here. We really appreciate your insights.
MATT: Thank you so much. Appreciate it.
LIZZIE: That’s all for today. Join us next time as we head back to Davos and dig into what’s on the minds of thousands of CEOs around the world—in PwC’s Annual Global CEO Survey.
FEMI: Take on Tomorrow is brought to you by PwC’s strategy+business. PwC refers to the PwC network and/or one or more of its member firms, each of which is a separate legal entity.