Does AI Cheapen Talk? (Bo Cowgill Pt. 1)


Update: 2025-11-18

Description

In this episode, we brought on our friend Bo Cowgill to dissect his forthcoming Management Science paper, “Does AI Cheapen Talk?” The core question is one economists have been circling since Spence drew a line on the blackboard: what happens when a technology makes costly signals cheap? If GenAI allows anyone to produce polished pitches, résumés, and cover letters, what happens to screening, hiring, and the entire communication equilibrium?

Bo’s answer: it depends. Under some conditions, GenAI induces an epistemic apocalypse, flattening signals and confusing recruiters. Under others, it reveals skill even more sharply, giving high types superpowers. The episode walks through the theory, the experiment, and the implications.

Transcript:

Seth: Welcome to the Justified Posteriors Podcast, the podcast that updates its priors about the economics of AI and technology. I’m Seth Benzell, certifying my humanity with takes so implausible that no softmax could ever select them, at Chapman University in sunny Southern California.

Andrey: And I am Andrey Fradkin, collecting my friends in all sorts of digital media formats, coming to you from San Francisco, California. Today we’re very excited to have Bo Cowgill with us. Bo is a friend of the show and a listener of the show, so it’s a real treat to have him. He is an assistant professor at Columbia Business School and has done really important research on hiring, on prediction markets, and now on AI and the intersection of those topics. And he’s also won some very cool prizes. I’ll mention that he was on the list of the best 40 business school professors. So he is one of those professors that’s really captivating for his students. So yeah. Welcome, Bo.

Bo Cowgill: Thank you so much. It’s awesome to be here. Thanks so much for having me on the podcast.

Seth: What do you value about the podcast? That’s something I’ve been trying to figure out because I just do the podcast for me. I’m just having a lot of fun here with Andrey. Anything I can do to get this guy’s attention to talk about interesting stuff for 10 minutes? Why do you like the podcast? What can we do to make this an even better podcast for assistant professors at Columbia?

Bo Cowgill: Well, I don’t wanna speak for all assistant professors at Columbia, but one thing it does well is aggregate papers about AI that are coming out from around the ecosystem and random places. I think it’s hard for anybody to catch all of these, so you guys do a great job. I did learn about new papers from the podcast sometimes.

Another cool thing I think is there is some continuity across podcast episodes about themes and arbitrage between different topics and across even different disciplines and domains. So I think this is another thing you don’t get necessarily just kind of thumbing around papers yourself.

Seth: So flattering. So now I can ask you a follow-up question, which is: obviously you’re enjoying our communication to you. A podcast is kind of a one-dimensional communication. Now we’ve got the interview going, we’ve got this back and forth. How would you think about the experience of the podcast changing if a really, really, really good AI that had read all of my papers and all of Andrey’s papers went and did the same podcast, same topics? How would that experience change for you? Would it have as much informative content? Would it have as much experiential value? How do you think about that?

Bo Cowgill: Well, first of all, I do enjoy y’all’s banter back and forth. I don’t know how well an AI would do that. Maybe it would do a perfectly good job with that. I do enjoy the fact that—this is personal to me—but we know a lot of the same people. And in addition to other guests and other paper references, I like to follow some of the inside jokes and whatnot. I don’t know if that’s all that big of a deal for the average person. But I have listened to at least the latest version of NotebookLM and its ability to do a quote-unquote “deep dive podcast” on anything. And at least recently I’ve been pleased with those. I don’t know if you’ve ever tried putting in like a bad paper in theirs, and then it will of course just say, “Oh, this is the greatest paper. It’s so interesting.”

Seth: Right.

Bo Cowgill: You can.

Seth: So that’s a little bit different, maybe slightly different than our approach.

Bo Cowgill: Well, yeah, for sure. Although you can also tell NotebookLM to try to find problems and be a little bit more critical. And that I think works well too. But yeah, I don’t think we should try to replace you guys with robots just yet.

Seth: We’re very highly compensated though. The opportunity cost of Andrey’s time, he could be climbing a mountain right now. Andrey, you take it up. Why are we doing this ourselves? Why isn’t an LLM doing this communication for us?

Andrey: Well, mostly it’s because we have fun doing it, and so if the LLM was doing it, then we wouldn’t be having the fun.

Seth: There you go. Well put. Experiential value of the act itself. Now, Bo, I did not bring up this question randomly. The reason I raised this question of how does AI modify communication... yeah, I used a softmax process, so it was not random. The reason I’m asking this question about how AI changes communication is because you have some recently accepted, forthcoming work at Management Science trying to bring some theory and empirics to the question of how LLMs change human communication, but now in the context of resumes and job search and job pitches. Do you want to briefly introduce the paper “Does AI Cheapen Talk?” and tell us about your co-authors?

Bo Cowgill: Yeah, most definitely. So the paper is called “Does AI Cheapen Talk?”. It is with Natalia Berg-Wright, also at Columbia Business School, and with Pablo Hernandez Lagos, who is a professor at Yeshiva University. And what we’re looking at in this paper is the way people screen job candidates or screen entrepreneurs or, more abstractly, how they kind of screen generally. You could apply our model, I think, to lots of different things.

But the core idea behind it goes back to these models from Spence in the 1970s: costly signals are what make it possible to separate types.

Seth: Right. If I wanna become a full member of the tribe, I have to go kill a lion. Why is it important for me to kill a lion? It’s not important. The important part is I do a hard thing.

Bo Cowgill: Exactly. Yeah. So maybe part of the key to this Spence idea that appears in our paper too is that it’s not just that the signal has to be costly, it has to be kind of differentially costly for different types of people. So maybe in your tribe, killing a lion is easy for tough guys like you, but for wimpier people or something, it’s prohibitively high. And so it’s like a test of your underlying cost parameter for killing lions or for being tough in general. So they go and do this. And I guess what you’re alluding to, which appears in a lot of cases, is the actual value of killing the lion is kind of irrelevant. It was just a test.

And maybe one of the more potentially depressing implications of that is the idea that what we send our students to do in four-year degrees or even degrees like ours is really just as valuable as killing a lion, which is to say, you’re mainly revealing something about your own costs and your own type and your own skills, and the actual work doesn’t generate all that much value.
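The single-crossing logic Bo is describing can be sketched with made-up numbers (purely illustrative, not taken from the paper): a signal separates types only when its cost sits above the wage premium for low types but below it for high types.

```python
# Illustrative Spence-style sketch with hypothetical numbers (not from the paper).
def will_signal(wage_premium, signal_cost):
    """A candidate sends the costly signal iff the wage premium exceeds their cost."""
    return wage_premium > signal_cost

premium = 10

# Before GenAI: polishing a pitch is cheap for high types, expensive for low types.
high_cost, low_cost = 3, 15
separating = will_signal(premium, high_cost) and not will_signal(premium, low_cost)
print(separating)  # True: only high types signal, so the signal reveals type

# After GenAI: the tool slashes costs for everyone, so both types signal.
high_cost_ai, low_cost_ai = 1, 2
pooling = will_signal(premium, high_cost_ai) and will_signal(premium, low_cost_ai)
print(pooling)  # True: the signal pools and no longer separates types
```

The differential cost is what does the work: separation survives only while the wage premium sits between the two types’ costs, which is why uniformly cheaper signals can destroy the equilibrium.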

Seth: Is education training or screening?

Bo Cowgill: Right, right, right. Yes. I do think a good amount of it these days is probably screening, and maybe that’s especially true at the MBA level.

Andrey: I would just say that, given the rate of hiring for MBAs, I’m not sure that the screening is really happening either. Maybe the screening is happening to get in.

Bo Cowgill: So the screening function now is just whether you can get in? That’s the ultimate thing?

Seth: Right. And I think as you already suggest, the way this works can flip if there’s a change in opportunity costs, right? So maybe in the past, “Oh, I’m the high type. I go to college.” In the present, “I’m the high type. I’m gonna skip college, I’m gonna be an entrepreneur,” and now going to college is a low signal.

Bo Cowgill: Yes. Exactly. So that’s kind of what’s going on in our model too. How are we applying this to job screening and AI? Well, you apply for a job, you have a resume, possibly a cover letter or, if you don’t have an old-fashioned cover letter, you probably have a pitch to a recruiter or to your friend who works at the company. And there are kind of elements of costly signaling in those pitches. So some people could have really smart-sounding pitches that use the right jargon and are kind of up to speed with regards to the latest developments in the industry or in the underlying technology or whatever. And those could actually be really useful signals because the only sort of person who would be up to speed is the one who finds it easy to follow all this information.

Seth: Can I pause you for a second? Back before LLMs, when I was in high school, they helped me make a CV or a resume. It’s not like there was ever any



Andrey Fradkin and Seth Benzell