What can we learn from AI exposure measures?

Update: 2025-07-28

Description

In a Justified Posteriors first, hosts Seth Benzell and Andrey Fradkin sit down with economist Daniel Rock, assistant professor at Wharton and AI2050 Schmidt Science Fellow, to unpack his groundbreaking research on generative AI, productivity, exposure scores, and the future of work. Through a wide-ranging and insightful conversation, the trio examines how exposure to AI reshapes job tasks and why the difference between exposure and automation matters deeply.

Links to the referenced papers, as well as a lightly edited transcript of our conversation with timestamps, are below:

Timestamps:

[00:08] – Meet Daniel Rock
[02:04] – Why AI? The MIT Catalyst Moment
[04:27] – Breaking Down “GPTs are GPTs”
[09:37] – How Exposed Are Our Jobs?
[14:49] – What This Research Changes
[16:41] – What Exposure Scores Can and Can’t Tell Us
[20:10] – How LLMs Are Already Being Used
[27:31] – Scissors, Wage Gaps & Task Polarization
[38:22] – Specialization, Modularity & the New Tech Workplace
[43:43] – The Productivity J-Curve
[53:11] – Policy, Risk & Regulation
[1:09:54] – Final Thoughts + Call to Action

Show Notes/Media Mentioned:

* “GPTs are GPTs” – Rock et al.’s paper

* https://arxiv.org/abs/2303.10130

* “The Future of Employment: How susceptible are jobs to computerization?” - Frey and Osborne (2013)

* https://www.oxfordmartin.ox.ac.uk/publications/the-future-of-employment

* “AI exposure predicts unemployment risk: A new approach to technology-driven job loss”— Morgan Frank's paper

* https://academic.oup.com/pnasnexus/article/4/4/pgaf107/8104152

* "Simple Macroeconomics of AI" – By Daron Acemoglu.

* https://economics.mit.edu/sites/default/files/2024-04/The%20Simple%20Macroeconomics%20of%20AI.pdf

* “The Dynamo and the Computer” – Paul A. David

* https://www.almendron.com/tribuna/wp-content/uploads/2018/03/the-dynamo-and-the-computer-an-historical-perspective-on-the-modern-productivity-paradox.pdf

* “Productivity J-Curve” – Erik Brynjolfsson and Chad Syverson

* https://www.nber.org/system/files/working_papers/w25148/w25148.pdf

* “Generative AI for Economic Research: Use Cases and Implications for Economists”– Anton Korinek’s paper

* https://www.newyorkfed.org/medialibrary/media/research/conference/2023/FinTech/400pm_Korinek_Paper_LLMs_final.pdf

* Kremer’s O-ring Theory

* https://fadep.org/wp-content/uploads/2024/03/D-63_THE_O-RING_THEORY.pdf

* 12 Monkeys (film) – Directed by Terry Gilliam

* Generative AI for Economic Research - Anton Korinek.

* https://www.aeaweb.org/content/file?id=21904

Transcript:

Seth: Welcome to the Justified Posteriors Podcast, the podcast that updates its beliefs about the economics of AI and technology. I'm Seth Benzell, exposed to and exposing myself to the AI since 2015, coming to you from Chapman University in sunny southern California.

Andrey: I'm Andrey Fradkin, riding the J curve of productivity into infinity, coming to you from Cambridge, Massachusetts. Today, we're delighted to have a friend from the show, Daniel Rock, as our inaugural interview guest.

Daniel: Hey, guys.

Andrey: Daniel is an assistant professor of operations, information, and decisions at the Wharton School, University of Pennsylvania, and is also an AI 2050 Schmidt Science Fellow.

So he is considered one of the bright young minds in the AI world. And it's a real pleasure to get to talk to him about his work and spicy takes, if you will.

Daniel: Well, it's a pleasure to get to be here. I'm a big fan of what you guys are doing. If I had my intro, I'd say I've been enthusiastic about getting machines to do linear algebra for about a decade.

Andrey: Alright, let's get started with some questions. I think before—

Seth: Firstly, how do you pronounce the acronym? O-I-D (Note, OID is the operations, information, and decisions group at Wharton).

Daniel: This is a big debate between the students and the faculty. We always say O-I-D, and the students say OID.

Seth: So our very own. OID boy. All right, you can ask the serious question.

Andrey: Before we get into any of the specific papers, I think one of the things that distinguishes Daniel from many other academics in our circle is that he took AI very seriously as a subject of inquiry for social sciences very early, before almost anyone else. So, what led you to that? Like, why were you so ahead of everyone else?

Daniel: I'm not sure. Well, it's all relative, I suppose, but there's the very far back answer, which we can talk about later as we discuss labor and AI. And then there's the sort of core catalyst day. I kind of remember it. So, back at the M-I-T-I-D-E, where we've all spent time and gotten to know each other, in 2013—

Seth: What is the M-I-T-I-D-E?

Daniel: The MIT Initiative on the Digital Economy, Erik Brynjolfsson’s research group. I was one of Erik's PhD students. My first year, we had a seminar speaker from the Computer Science and Artificial Intelligence Lab, CSAIL. John Leonard was talking about self-driving cars, and he came out there, and he said, “Look, Google's cheating. They're putting sensors in the road. We're building the real deal: cars that can drive themselves in all sorts of different circumstances. And let me be real with all of you. This is not going to be happening anytime soon. It will be decades.”

And there were other people who were knowledgeable about the subject saying, “No, it's coming in like 5 to 10 years.”

And at that point I thought to myself, “Well, if all these really brilliant people can disagree about what's going to happen, surely there's something cool here to try to understand.”

As you're going through econometrics classes, I wouldn't say econometrics is the same thing as AI. We could debate that, but there's enough of an overlap that I could kind of get my head around the optimization routines and things going on in the backend of the AI models and thought, “Well, this is a cool place to learn a lot and, at the same time, maybe say something that other people haven't dug into yet.”

Andrey: Yeah. Very cool. So, with that, I think maybe you can tell us a little bit about your paper GPTs, which is a paper that has had an enormous amount of attention over the years and I think has been quite influential.

Daniel: Yeah, we've been lucky in that sense.

Seth: In two years.

Andrey: That's not—I mean, some version of it was out earlier… No… Or is it? Has it only really been two years?

Daniel: It has been the longest, Andrey. If you and I weren't already sort of bald, it might've been a time period for us to go bald. Yeah, we put it out in March of 2023. I had a little bit of early access to GPT-4. My co-authors can attest to the fact that I rather annoyingly tried to get GPT-4 to delete itself for the first week or two that I had it, rather than doing the research we needed to. But yeah, it's only been about two and a half. Okay, so the paper, as I describe it, at least recently, has kind of got a Dickensian quality to it. There is a pessimistic component, there's an optimistic component, and there's a realistic component to it.

So I'll start with the pessimistic, or—why don't I just start with what we do here first? So we go through O*NET's list of tasks. There are 20,000 tasks in O*NET, and for each one of those tasks, we ask a set of humans working with OpenAI, who kind of understand what large language models in general are capable of doing.

We ask: what would help you cut that time in half? So, could you cut the time to do this task in half with a large language model, with no drop in quality? And there are three answers. One answer is, of course not; that's like flipping a burger or something. Maybe we get large language models imbued into robotics technologies at some point in the future, but it's not quite there yet.

Another answer is, of course, you can. This would be like writing an email or processing billing details or an invoice.

And then there's the middle one, which we call E2. So, E0 is no, E1 is yes, and E2 is yes, you could, but we're going to need to build some additional software and systems around it.

So there's a gain to be had there, but it's not like LLMs are the only component.
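To make the E0/E1/E2 rubric concrete, here is a minimal sketch of how task-level labels like these could be rolled up into an occupation-level exposure score. The half-weight on E2 tasks mirrors one of the aggregation schemes discussed in the “GPTs are GPTs” paper; the function name, weighting, and the toy task list are illustrative assumptions, not the paper's actual code or O*NET data.

```python
def exposure_score(task_labels, e2_weight=0.5):
    """Share of an occupation's tasks exposed to LLMs.

    E0 = not exposed, E1 = directly exposed, E2 = exposed only with
    additional software/systems built around the model (counted at
    `e2_weight`). Returns a value in [0, 1].
    """
    if not task_labels:
        return 0.0
    weights = {"E0": 0.0, "E1": 1.0, "E2": e2_weight}
    return sum(weights[label] for label in task_labels) / len(task_labels)

# Toy occupation with four tasks: two directly exposed (E1), one exposed
# given extra tooling (E2), and one not exposed (E0).
labels = ["E1", "E1", "E2", "E0"]
print(exposure_score(labels))  # → 0.625
```

Setting `e2_weight` to 0 or 1 recovers the lower- and upper-bound exposure measures: counting only tasks where the LLM alone suffices, versus counting every task that could be exposed once complementary software exists.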



Andrey Fradkin and Seth Benzell