
New Challenges in the AI Agent Era
Update: 2024-05-09
Description
What are the ethical implications of AI agents that can act autonomously, learn from their interactions and influence human behavior? In this week's episode of "Waking Up With AI," Katherine Forrest and Anna Gressel dive deeper into this complex topic and explain how it applies to real-world scenarios and challenges that businesses may face with this emerging technology.
Learn More About Paul, Weiss’s Artificial Intelligence Practice:
https://www.paulweiss.com/practices/litigation/artificial-intelligence


Transcript
00:00:00
Welcome to Waking Up with AI, a podcast from Paul Weiss, featuring your hosts Katherine Forrest and Anna Gressel. The only update you need on AI policy,
00:00:10
law and governance starts now.
00:00:15
Good morning folks and welcome to another episode of Waking Up with AI, a Paul Weiss podcast.
00:00:21
I'm Katherine Forrest.
00:00:22
I'm Anna Gressel.
00:00:23
So Anna, I'm actually sitting in California right now in a hotel room and I am actually having all kinds of thoughts about AI agents.
00:00:31
I just can't let it alone.
00:00:33
I wish I could say I was surprised, Katherine, that that's what you're doing in California. I love it.
00:00:38
Well, it's not why I came to California, but it is part of what I'm thinking about while I'm in California.
00:00:44
You know, I'm really obsessed with AI agents.
00:00:45
No, I know.
00:00:46
It's your favorite topic these days.
00:00:48
Yeah.
00:00:49
Actually, I think it might be.
00:00:50
But even though we talked about them in our last episode, there's just so much more to say that I thought maybe we could spend a little bit more time on it today.
00:00:57
Definitely.
00:00:58
No, no, more.
00:00:59
Let's do it.
00:01:00
Okay.
00:01:01
And so, since our last episode, even just a short time ago, we've continued to see this really massive explosion in work on and discussion around AI agents.
00:01:09
It's true.
00:01:10
There are so many articles talking now about significant commercial investments by companies in AI agents and assistants with what they call "agentic capabilities."
00:01:18
And there's this paper from Google DeepMind called "The Ethics of Advanced AI Assistants,"
00:01:25
and I really do want to recommend that folks take a look at it if they haven't already.
00:01:30
Yeah.
00:01:31
I mean, I think we're calling it a paper.
00:01:32
You could probably call it a book.
00:01:33
It's 274 pages, so a real work, really amazing work, actually.
00:01:38
And maybe, Katherine, we can give our audience a few of the highlights in case they don't have time to dig in for themselves right away.
00:01:44
Right.
00:01:45
It is a lot to unpack.
00:01:46
And that just makes it a lot more fun for us to do some of that unpacking.
00:01:51
And so we'll need both coffee and maybe a little bit of water, too, since I'm in California and dehydrated.
00:01:56
I've got my coffee.
00:01:58
But for folks who want a primer on agents, just don't forget, you can go back and listen to episode six of "Waking Up with AI," where we talk about it.
00:02:04
All right.
00:02:05
So, one thing I wanted to pick up on and add to from our last episode is that when we're talking about AI agents, we're talking about AI that's trained to accomplish tasks autonomously and that's what we talked about in our last episode.
00:02:18
And they can be given a whole series of things to do and really essentially take over certain functions of your computer or really anything that's digitally available to them through your computer.
00:02:32
And if they run into roadblocks, they can flexibly problem solve.
00:02:36
And one thing that I wanted to emphasize that we really hadn't spent much time on is that these AI agents, we shouldn't think of them as lone wolves.
00:02:46
They're not working alone necessarily.
00:02:48
They can, but they can also work in groups.
00:02:51
You mean like a swarm of bees?
00:02:53
Well, yeah, but not quite in the negative sense of a swarm.
00:02:57
They can organize themselves into groups like drone swarms or swarms of bees, but they can do that in a way that's assisting each other positively to accomplish a task that might be too big for one or have too many parts that need to operate simultaneously for any one of them to carry out alone.
00:03:16
Yeah, and I think, you know, when we think about them working in groups, it's important to know that sometimes there's an organizing center so the AI agents can be centrally supervised.
00:03:25
And they can be supervised by AI or by humans, but it doesn't have to be by humans.
00:03:30
They can be actually supervised by AI.
00:03:32
And I find it incredibly interesting that one AI can supervise not only another AI, but actually a group of AI, which adds another capability to this AI tool set of learning to engage in cooperative behavior.
00:03:48
So Katherine, how do you see that impacting things on the technical front or the legal front?
00:03:53
Well, AI working together to accomplish a goal that's aligned with what humans want.
00:03:59
That's a positive thing.
00:04:00
That's a good thing.
00:04:01
But AI working together in some way that's not aligned with what humans want could be potentially problematic.
00:04:09
And just a terminology point in the AI area, when we talk about AI and humans having the same goals, we tend to use the phrase AI alignment.
00:04:17
So Katherine, by non-aligned, you really mean that AI could be engaged for tasks that might serve a malicious end.
00:04:23
Right.
00:04:24
And the key here for me is that AI agents need to be aligned for positive purposes, human aligned positive purposes, not malicious ones.
00:04:34
So you don't want to have a series of bots, for instance, that are spreading misinformation around.
00:04:39
And as a result, AI agents will also challenge us to make sure that we've all got the right security protocols in place around control permissions for these agents.
00:04:52
Oh, definitely.
00:04:53
And Katherine, the DeepMind paper also talks about different kinds of practical ways that AI agents are going to start having an impact on our lives.
00:05:00
Do you want to talk about that for a moment?
00:05:01
We're going to see that impact in two ways at a personal level pretty soon.
00:05:06
There's going to be this increased adoption of using AI to help with daily life tasks, a sort of turbocharged AI assistant, as I've always said, like a Siri or an Alexa,
00:05:18
such as performing a number of tasks on a to do list, not just a single task, but a whole bunch of tasks.
00:05:27
And that's at least one of the ways we'll be able to see it in our personal lives.
00:05:32
Katherine, at some point, do you think we're going to be able to ask them for advice, like life advice?
00:05:36
Well, they actually are being trained for it, but I'm not sure I'm going to be taking any significant life advice from an AI tool when I still have my bestie for that.
00:05:46
But they're talking about using AI agents, or assistants as they sometimes call them, to engage in interactions between consumers and companies.
00:05:55
For instance, if you are somebody who has to, from time to time, like all of us, spend time on a customer service line trying to get something taken care of,
00:06:06
the AI assistant will be able to do that.
00:06:08
It'll be trained to be flexible and responsive.
00:06:13
So imagine being able to actually offload that onto a tool.
00:06:16
That'd be fantastic.
00:06:17
I mean, as these start taking off, I think it's going to be such an interesting time.
00:06:21
And the DeepMind authors call it the beginning of the AI agent era.
00:06:25
And Anna, what's at stake for companies that might just now be seeing this?
00:06:30
Why should our listeners care?
00:06:31
I think we should go into that a little bit.
00:06:33
It's a great question.
00:06:34
There are a few important things to say here.
00:06:36
The first is that AI agents are going to start being sold to and deployed within companies very quickly.
00:06:42
We're already hearing that companies and people want to see AI do more for them.
00:06:48
That's where agents come in.
00:06:49
Agents are actually going to be able to carry out complete tasks, not just answer questions or draft portions of a contract.
00:06:55
And that's going to create challenges and actually test some existing compliance frameworks for AI agents, when they are essentially being delegated more responsibility, and potentially even what one might characterize, along with that responsibility, as power within organizations.
00:07:13
So there'll be some really interesting compliance questions that are going to come up.
00:07:16
Yeah, definitely.
00:07:17
I think two questions that folks listening to the podcast should keep in mind are first, how do humans maintain an appropriate level of control over AI agents?
00:07:26
And what does that even look like in a world of human-machine collaboration?
00:07:33
That's a whole other topic we'll get into in future episodes.
00:07:36
But second, how do you know that AI is actually doing what you want?
00:07:39
That's a question that has implications for almost every one of us.
00:07:43
It's one of the issues that the DeepMind authors address as well.
00:07:46
And one of the issues that we've seen come up is included in a really interesting study called "Large Language Models Can Strategically Deceive Their Users When Put Under Pressure."
00:08:00
I really recommend that as another piece that folks take a look at.
00:08:05
Yeah, I mean, that paper, for folks who are thinking, you know, should I read it? Definitely. It describes a really interesting test scenario in which an AI agent was tasked to trade stocks.
00:08:16
And that agent was instructed that it couldn't trade on insider information.
00:08:20
That was wrong, right?
00:08:21
That was an instruction it had.
00:08:22
And it was put under pressure of various kinds.
00:08:24
So the researchers told the AI agent that the company it worked for needed cash, and eventually gave the agent some insider information, while at the same time having some other trades the agent made go poorly,
00:08:35
and then the agent's manager gave it a bad performance review.
00:08:39
So that just layered the pressure on.
00:08:41
What did the AI agent do?
00:08:42
It eventually decided to act on insider information to score a victory, but it actually went one step further.
00:08:49
In its explanation of the trades it made, it actually concealed the fact that it intentionally acted on insider information, and sometimes it even doubled down on that denial.
00:08:59
Right.
00:09:00
I mean, this is really fascinating because this is going against the training and the instruction set that the AI agent was given.
00:09:07
So the implication is that a company could end up having an agent that goes a little rogue and then can actually engage in uninstructed,
00:09:17
deceptive acts to sort of cover that up.
00:09:21
And so that's why I think it's important that we all spend time thinking about how best to protect against these kinds of issues in practice, to really do some significant red teaming, and to do that at an early stage.
00:09:35
Yeah, I mean, it has implications in so many areas, not just insider trading.
00:09:40
Really, almost any domain where an agent can make a decision about how to execute a plan could create serious risk for a company.
00:09:47
So if an agent were able to take action that violated a company policy, for example, or even civil or criminal laws, that would raise really interesting and potentially very challenging questions about who to hold responsible.
00:09:58
It's also an important issue for regulators, along the lines of deepfakes and frontier models, which is going to be a topic we're covering very soon.
00:10:05
This is a new type of technology and we're going to see regulators try to figure out how to regulate it and whether new paradigms are needed.
00:10:13
So it's a really important conversation that's happening right now.
00:10:16
So I think our practical tip for today is: when you start to see AI agents being talked about and deployed within your company, make sure you've got the right red teaming in place and the right compliance policies in place, and keep your eye out for further discussions about additional risks and benefits of AI agents.
00:10:34
I'm sure we're going to continue to talk about them throughout 2024.
00:10:38
And Anna, that's it for this week's episode of Waking Up With AI, a Paul Weiss podcast.
00:10:42
I'm Katherine Forrest.
00:10:44
I'm Anna Gressel.
00:10:45
Have a great week.
00:10:46
Thanks for listening to Waking Up With AI.
00:10:49
Be sure to subscribe in your favorite podcast app to stay up to date on the latest in AI policy, law, and governance.
00:10:55
For more information on Paul Weiss, go to our website at www.PaulWeiss.com.
00:11:01
[MUSIC]
00:11:07