Pigeon Hour


Recorded conversations; a minimal viable pod. www.aaronbergman.net

Preparing for the Intelligence Explosion (paper readout and commentary)

Preparing for the Intelligence Explosion is a recent paper by Fin Moorhouse and Will MacAskill.

* 00:00 - 1:58:04 is me reading the paper.
* 1:58:05 - 2:26:06 is a string of random thoughts I have related to it.

I am well aware that I am not the world's most eloquent speaker lol. This is also a bit of an experiment in getting myself to read something by reading it out loud. Maybe I'll do another episode like this (feel free to request papers/other things to read out, ideally a bit shorter than this one lol). Below are my unfiltered, unedited, quarter-baked thoughts.

My unfiltered, unedited, quarter-baked thoughts

Okay, this is Aaron. I'm in post-prod, as we say in the industry, and I will just spitball some random thoughts. I'm not with my computer right now, so I don't even have the text in front of me.

I feel like my main takeaway is that the vibes debate runs from "AI is as important as the internet, maybe" on the low end to "AI is a big deal." But if you actually do the math, approximately all of the variation is between insane and insane to the power of insane. And I don't fully know what to do with that.

To put a bit more of a point on it: I'm not just talking about point estimates. It seems that even if you make quite conservative assumptions, it's quite overdetermined that there will be some kind of explosive technological progress unless something really changes. And that is just a big deal. It's not one that I have fully incorporated into my emotional worldview; I have it in part, but not to the degree that my intellect has.

Another thing: one of the headline results, something that Will MacAskill wants to emphasize and did emphasize in the paper, is the "century in a decade" meme. But if you actually read the paper, that is kind of a lower bound unless something crazy happens. (This is me editorializing now.)

Something crazy could happen first, for example a nuclear war with China that destroys data centers and significantly sets back AI progress, or an unknown unknown. But the century in a decade really is a lower bound. You need to be super pessimistic within all the in-model uncertainty to land there. Obviously there's out-of-model uncertainty, but the actual point estimates, however you combine the variables (arithmetic means over distributions, geometric means, whatever), come out much, much faster than that.

So that is a 10x speedup, and that is, as I said, about as pessimistic as you can get. I don't actually have a good enough memory to remember exactly what the point estimate numbers are; I should go back and look.

Chatting with Claude, it seems there are actually a lot of different specific numbers involved. One question you might ask is: over the fastest-growing decade, in terms of technological progress or economic growth, of the next ten decades, what will the peak average growth rate be? But there are a lot of different ways you can frame that: what's the average going to be over the next decade? What about this coming decade? What about before 2030?
Are we talking about economic progress, or some less well-defined sense of technological and social progress?

But basically it seems the conservative scenario is that the intelligence explosion happens and, over some importantly long series of years, you get 5x growth year over year. So not a doubling every year: after two years you get a 25x expansion of AI labor, and 125x after three years. I need to look back at the details.

They do talk about one important thing to emphasize (and as you can tell, I'm not the most eloquent person in the world): they talk about pace at length, and about limiting factors. But the third thing, the one you might solve for if you know those two variables, is the length of time such an explosion might take place across. Thinking out loud, that is something that, whether intentionally or otherwise, or me being dumb and missing it, I don't think they give a ton of attention to. My intuition is that this is approximately fine.

Does it matter, conditional on knowing the distribution of rates over blocks of years? We're not talking about seconds, and we're not talking about weeks or multiple decades, though I guess we could be talking about months. So we're talking about something in the realm of a fraction of a year up to single- or double-digit numbers of years: roughly two to three orders of magnitude of range. So the question is: conditional on having a distribution of peak average growth rates for some block of time, and (backtracking a bit) also conditional on having a distribution for the limiting factors, does it matter whether we're talking about two years, or ten years, or what?

So, at what point do you stop scaling? We know the talking point that infinite growth in a finite world is impossible is true; it's just off by 1,000 orders of magnitude, or maybe 100. So there actually are genuine limiting factors, and they discuss at what point you might hit true limits on power consumption or whatever.

But yeah, just to recap this little mini-ramble: one thing the paper doesn't go over much is the length of time specifically, except insofar as that is implied by the distributions you have for peak growth rates and limiting factors.

Another thing that wasn't in the paper, but that came up (I'm just spitballing) in Will MacAskill's recent interview on the 80,000 Hours podcast with Rob Wiblin, about the world's most pressing problems and how you can use your career to solve them: I think Rob said he wishes the AI x-risk community hadn't been so tame or timid in terms of hedging, emphasizing uncertainty, saying there are a million ways it could be wrong, which is of course true. But the takeaway he was trying to get at was that, even ex ante, they should have been a little bit more straightforward.

And I actually kind of think there's a reasonable critique of this paper along those lines, which is that the century-in-a-decade meme is not a good approximation of the actual expectation. The expectation is something like a 100x to 1,000x speedup, not 10x. As a reasonable conservative baseline, you have to be really within-model pessimistic to get down to the 10x point.

Another big thing to comment on is just the grand challenges.
And so, on the grand challenges: I've been saying for a while that my P(doom), as they say, is something in the 50% range. Maybe now it's 60% or something after reading this paper, up from 35% right after the Biden executive order. And what I mean by that, I think, is some loose sense of: no, we actually don't solve all these challenges.

One thing MacAskill and Moorhouse emphasize, in both the podcast I listened to and the paper, is that it's not just about AI control, not just about the alignment problem. You really have to get a lot of things right. I think this relates to other work that MacAskill is involved in that I'm not super well acquainted with. There's the question of how much you have to get right in order for the future to go well, and I think there are a lot of strands there. I remember on the podcast with Rob they were talking in terms of the percentage of the value of the best outcome. I'm just thinking out loud here, but I'm not actually sure that's the right metric to use.

It's a little bit like this: imagine we have the current set of possibilities, and then exogenously we get one extra strand in the Everettian multiverse, a single thread in which the future goes a billion times better than it otherwise could. I feel like this should change approximately nothing, because you know it's not going to happen. But it does revise your estimate of the expected percentage of the best possible future down a billion-fold. I'm not actually sure whether this ends up cashing out in terms of what you should do; I'm just not smart enough to intuit that well. But I suspect that it might. That's really just an intuition, so I'm not sure.

Something that will never be said about me is that I am an extremely well-organized and straightforward thinker. It might be worth noting that these audio messages are just random things that come to mind as I'm walking around a park, basically. That's also why the audio quality might be worse.

Getting back to what I was originally thinking about with the grand challenges and my P(doom): they enumerate a bunch of things that, in my opinion, really do have to go right in order for the future to be good in some sense. And so there's a conjunction issue (I forget the exact term): even if you're relatively optimistic on any one issue, and I kind of don't know if you should be, the risks compound.

Let me just list some of them off: AI takeover, highly destructive technologies, power-concentrating mechanisms, value lock-in mechanisms, AI agents and digital minds, space governance, new competitive pressures, epistemic disruption, abundance (i.e., capturing the upside), and unknown unknowns. It's not as clean a model as each of these being fully independent; it's much more complex than that. But it's also not as simple as saying that if you have a 70% chance on each, you can just take that to the power of eight, or however many there are, and get the overall number.
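For concreteness, here is a tiny back-of-the-envelope sketch (mine, not from the paper) of the two bits of arithmetic above: the compounding 5x-per-year expansion of AI labor, and the naive "multiply the per-challenge probabilities" calculation that, as noted, the grand-challenges list is too entangled to actually support. The 5x rate and the 70%-per-challenge figure are just the illustrative numbers mentioned above, not estimates from the paper.

```python
# Back-of-the-envelope illustration only; both input numbers are the
# illustrative figures mentioned in the commentary above.

growth_per_year = 5  # assumed "conservative scenario" multiplier per year
for year in range(1, 4):
    print(f"After {year} year(s): {growth_per_year ** year}x AI labor")
# -> 5x, 25x, 125x

# Naive conjunctive estimate: if each of n grand challenges independently
# had a 70% chance of going well, the chance that all go well would be 0.7**n.
# (The challenges are not actually independent, so treat this as an intuition
# pump rather than a real model.)
p_each = 0.7
for n in (8, 10):
    print(f"P(all {n} go well) = {p_each ** n:.3f}")
# -> about 0.058 for n = 8 and 0.028 for n = 10
```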

03-25
02:25:21

#13: Max Alexander and I debate whether total utilitarianism implies the very repugnant conclusion

The gang from Episode 10 is back, with yet another Consistently Candid x Pigeon Hour crossover.

As Sarah from Consistently Candid describes: In this episode, Aaron Bergman and Max Alexander are back to battle it out for the philosophy crown, while I (attempt to) moderate. They discuss the Very Repugnant Conclusion, which, in the words of Claude, "posits that a world with a vast population living lives barely worth living could be considered ethically inferior to a world with an even larger population, where most people have extremely high quality lives, but a significant minority endure extreme suffering." Listen to the end to hear my uninformed opinion on who's right.

- Listen to Consistently Candid on Apple Podcasts, Spotify, or via RSS
- My blog post on suffering-focused utilitarianism
- Follow Max on Twitter and check out his blog
- Follow Sarah on Twitter

Get full access to Aaron's Blog at www.aaronbergman.net/subscribe

09-10
01:54:24

#12: Arthur Wright and I discuss whether the GiveWell suite of charities is really the best way of helping humans alive today, the value of reading old books, rock climbing, and more

Please follow Arthur on Twitter and check out his blog!

"Thank you for just summarizing my point in like 1% of the words" - Aaron, to Arthur, circa 34:45

Summary

(Written by Claude Opus aka Clong)

* Aaron and Arthur introduce themselves and discuss their motivations for starting the podcast. Arthur jokingly suggests they should "solve gender discourse".
* They discuss the benefits and drawbacks of having a public online persona and sharing opinions on Twitter. Arthur explains how his views on engaging online have evolved over time.
* Aaron reflects on whether it's good judgment to sometimes tweet things that end up being controversial. They discuss navigating professional considerations when expressing views online.
* Arthur questions Aaron's views on cause prioritization in effective altruism (EA). Aaron believes AI is one of the most important causes, while Arthur is more uncertain and pluralistic in his moral philosophy.
* They debate whether standard EA global poverty interventions are likely to be the most effective ways to help people from a near-termist perspective. Aaron is skeptical, while Arthur defends GiveWell's recommendations.
* Aaron makes the case that even from a near-termist view focused only on currently living humans, preparing for the impacts of AI could be highly impactful, for instance by advocating for a global UBI. Arthur pushes back, arguing that AI is more likely to increase worker productivity than displace labor.
* Arthur expresses skepticism of long-termism in EA, though not due to philosophical disagreement with the basic premises. Aaron suggests this is a well-trodden debate not worth rehashing.
* They discuss whether old philosophical texts have value or if progress means newer works are strictly better. Arthur mounts a spirited defense of engaging with the history of ideas and reading primary sources to truly grasp nuanced concepts. Aaron contends that intellectual history is valuable but reading primary texts is an inefficient way to learn for all but specialists.
* Arthur and Aaron discover a shared passion for rock climbing, swapping stories of how they got into the sport as teenagers. While Aaron focused on indoor gym climbing and competitions, Arthur was drawn to adventurous outdoor trad climbing. They reflect on the mental challenge of rationally managing fear while climbing.
* Discussing the role of innate talent vs training, Aaron shares how climbing made him viscerally realize the limits of hard work in overcoming genetic constraints. He and Arthur commiserate about the toxic incentives for competitive climbers to be extremely lean, while acknowledging the objective physics behind it.
* They bond over falling out of climbing as priorities shifted in college and lament the difficulty of getting back into it after long breaks. Arthur encourages Aaron to let go of comparisons to his past performance and enjoy the rapid progress of starting over.

Transcript

Very imperfect - apologies for the errors.

AARON: Hello, pigeon hour listeners. This is Aaron, as it always is, with Arthur Wright of Washington, the broader Washington, DC metro area. Oh, also, we're recording in person, which is very exciting, for the second time. I really hope I didn't screw up anything with the audio. Also, we're both being really awkward at the start for some reason, because I haven't gotten into conversation mode yet. So, Arthur, what do you want? Is there anything you want?

ARTHUR: Yeah. So Aaron and I have been circling around the idea of recording a podcast for a long time.
So there have been periods of time in the past where I've sat down and been like, oh, what would I talk to Aaron about on a podcast? Those now elude me because that was so long ago, and we spontaneously decided to record today. But, yeah, for the. Maybe a small number of people listening to this who I do not personally already know. I am Arthur and currently am doing a master's degree in economics, though I still know nothing about economics, despite being two months from completion, at least how I feel. And I also do, like, housing policy research, but I think have, I don't know, random, eclectic interests in various EA related topics. And, yeah, I don't. I feel like my soft goal for this podcast was to, like, somehow get Aaron cancelled.

AARON: I'm in the process.

ARTHUR: We should solve gender discourse.

AARON: Oh, yeah. Is it worth, like, discussing? No, honestly, it's just very online. It's, like, not like there's, like, better, more interesting things.

ARTHUR: I agree. There are more. I was sort of joking. There are more interesting things. Although I do think, like, the general topic that you talked to Max a little bit about a while ago, if I remember correctly, of, like, kind of. I don't know to what degree. Like, one's online persona or, like, being sort of active in public, sharing your opinions is, like, you know, positive or negative for your general.

AARON: Yeah. What do you think?

ARTHUR: Yeah, I don't really.

AARON: Well, your. Your name is on Twitter, and you're like.

ARTHUR: Yeah. You're.

AARON: You're not, like, an alt.

ARTHUR: Yeah, yeah, yeah. Well, I. So, like, I first got on Twitter as an alt account in, like, 2020. I feel like it was during my, like, second to last semester of college. Like, the vaccine didn't exist yet. Things were still very, like, hunkered down in terms of COVID. And I feel like I was just, like, out of that isolation. I was like, oh, I'll see what people are talking about on the Internet. And I think a lot of the, like, sort of more kind of topical political culture war, whatever kind of stuff, like, always came back to Twitter, so I was like, okay, I should see what's going on on this Twitter platform. That seems to be where all of the chattering classes are hanging out. And then it just, like, made my life so much worse.

AARON: Wait, why?

ARTHUR: Well, I think part of it was that I just, like, I made this anonymous account because I was like, oh, I don't want to, like, I don't want to, like, have any reservations about, like, you know, who I follow or what I say. I just want to, like, see what's going on and not worry about any kind of, like, personal, like, ramifications. And I think that ended up being a terrible decision because then I just, like, let myself get dragged into, like, the most ultimately, like, banal and unimportant, like, sort of, like, culture war shit as just, like, an observer, like, a frustrated observer. And it was just a huge waste of time. I didn't follow anyone interesting or, like, have any interesting conversations. And then I, like, deleted my Twitter. And then it was in my second semester of my current grad program. We had Caleb Watney from the Institute for Progress come to speak to our fellowship because he was an alumni of the same fellowship. And I was a huge fan of the whole progress studies orientation. And I liked what their think tank was doing as, I don't know, a very different approach to being a policy think tank, I think, than a lot of places.
And one of the things that he said for, like, people who are thinking about careers in, like, policy, and I think sort of applies to, like, more EA sort of stuff as well, was, like, that developing a platform on Twitter, like, opened a lot of doors for him in terms of, like, getting to know people in the policy world. Like, they had already seen his stuff on Twitter, and I got a little bit, like, more open to the idea that there could be something constructive that could come from, like, engaging with one's opinions online. So I was like, okay, fuck it. I'll start a Twitter, and this time, like, I won't be a coward. I won't get dragged into all the worst topics. I'll just, like, put my real name on there and, like, say things that I think. And I don't actually do a lot of that, to be honest.

AARON: I've, like, thought about. Gotta ramp it.

ARTHUR: Off doing more of that. But, like, you know, I think when it's not eating too much time into my life in terms of, like, actual deadlines and obligations that I have to meet, it's like, now I've tried to cultivate a, like, more interesting community online where people are actually talking about things that I think matter.

AARON: Nice. Same. Yeah, I concur. Or, like, maybe this is, like, we shouldn't just talk about me, but I'm actually, like, legit curious. Like, do you think I'm an idiot or, like, cuz, like, hmm. I. So this is getting back to the, like, the current, like, salient controversy, which is, like, really just dumb. Not, I mean, controversy for me because, like, not, not like an actual, like, event in the world, but, like, I get so, like, I think it's, like, definitely a trade off where, like, yeah, there's, like, definitely things that, like, I would say if I, like, had an alt. Also, for some reason, I, like, really just don't like the, um, like, the idea of just, like, having different, I don't know, having, like, different, like, selves. Not in, like, a. And not in, like, any, like, sort of actual, like, philosophical way, but, like, uh, yeah, like, like, the idea of, like, having an online persona or whatever. I mean, obviously it's gonna be different, but, like, only in the same way that, like, um, you know, like, you're, like, in some sense, like, different people to, like, a really close friend and, like, a not so close friend, but, like, a difference of, like, degree, not kind. And so, like, for some reason, like, I just, like, really don't like the idea of, like, I don't know, having, like, a professional self or whatever. Like, I just. Yeah. And you could, like, hmm. I don't know. Do you think I'm an idiot for, like, sometimes tweeting, like, things that, like, evidently, like, are controversial, even if that, like, wasn't at all the intent, or, like, I didn't even, you know, plan, like, plan on them being.

ARTHUR: Yeah, I think it's, like, sort of similar to the, like, decoupling conversation we had the other night, which is, like, I

04-11
02:13:26

Drunk Pigeon Hour!

Intro

Around New Years, Max Alexander, Laura Duffy, Matt and I tried to raise money for animal welfare (more specifically, the EA Animal Welfare Fund) on Twitter. We put out a list of incentives (see the pink image below), one of which was to record a drunk podcast episode if the greater Very Online Effective Altruism community managed to collectively donate $10,000.

To absolutely nobody's surprise, they did ($10k), and then did it again ($20k) and then almost did it a third time ($28,945 as of March 9, 2024). To everyone who gave or helped us spread the word, and on behalf of the untold number of animals these dollars will help, thank you.

And although our active promotion on Twitter has come to an end, it is not too late to give! I give a bit more context in a short monologue intro I recorded (sober) after the conversation, so without further ado, Drunk Pigeon Hour:

Transcript

(Note: very imperfect - sorry!)

Monologue

Hi, this is Aaron. This episode of Pigeon Hour is very special for a couple of reasons. The first is that it was recorded in person, so three of us were physically within a couple feet of each other. Second, it was recorded while we were drunk or maybe just slightly inebriated. Honestly, I didn't get super drunk, so I hope people forgive me for that.

But the occasion for drinking was that this, a drunk Pigeon Hour episode, was an incentive for a fundraiser that a couple of friends and I hosted on Twitter, a little bit before New Year's and basically around Christmas time. We basically said, if we raise $10,000 total, we will do a drunk Pigeon Hour podcast. And we did; in fact, we are almost at $29,000, just shy of it. So technically the fundraiser has ended, but it looks like you can still donate. So, I will figure out a way to link that.

And also just a huge thank you to everyone who donated. I know that's really cliche, but this time it really matters because we were raising money for the Effective Altruism Animal Welfare Fund, which is a strong contender for the best use of money in the universe.

Without further ado, I present me, Matt, and Laura. Unfortunately, the other co-host Max was stuck in New Jersey and so, tragically, was unable to participate. Yeah, so here it is!

Conversation

AARON: Hello, people who are maybe listening to this. I just, like, drank alcohol for, like, the first time in a while. I don't know. Maybe I do like alcohol. Maybe I'll find that out now.

MATT: Um, all right, yeah, so this is, this is Drunk Pigeon Hour! Remember what I said earlier when I was like, as soon as we are recording, as soon as we press record, it's going to get weird and awkward.

LAURA: I am actually interested in the types of ads people get on Twitter. Like, just asking around, because I find that I get either, like, DeSantis ads. I get American Petroleum Institute ads, Hillsdale College.

MATT: Weirdly, I've been getting ads for an AI assistant targeted at lobbyists. So it's, it's like, step up your lobbying game, like use this like tuned, I assume it's like tuned ChatGPT or something. Um, I don't know, but it's, yeah, it's like an AI assistant for lobbyists, and it's like, oh, like your competitors are all using this, like you need to buy this product. So, so yeah, Twitter thinks I'm a lobbyist. I haven't gotten any DeSantis ads, actually.

AARON: I think I might just like have personalization turned off. Like not because I actually like ad personalization. I think I'm just like trying to like, uh, this is, this is like a half-baked protest of them getting rid of circles.
I will try to minimize how much revenue they can make from me.

MATT: So, so when I, I like went through a Tumblr phase, like very late, in like 2018. I was like, um, like I don't like, uh, like what's happening on a lot of other social media. Like maybe I'll try like Tumblr as a, as an alternative. And I would get a lot of ads for like plus-sized women's flannels. So, so like the Twitter ad targeting does not faze me because I'm like, oh, okay, like, I can, hold on.

AARON: Sorry, keep going. I can see every ad I've ever.

MATT: Come across, actually, in your giant CSV of Twitter data.

AARON: Just because I'm a nerd. I like, download. Well, there's actually a couple of things. I just download my Twitter data once in a while. I actually do have a little web app that I might try to improve at some point, which is like, you drop it in and then it turns them. It gives you a CSV, like a spreadsheet, of your tweets, but that doesn't do anything with any of the other data that they put in there.

MATT: I feel like it's going to be hard to get meaningful information out of this giant CSV in a short amount of time.

AARON: It's a giant JSON, actually.

MATT: Are you just going to drop it all into Clong and tell it to parse it for you or tell it to give you insights into your ads?

AARON: Wait, hold on. This is such a.

MATT: Wait. Do people call it "C-Long" or "Clong"?

AARON: Why would it be long?

MATT: Well, because it's like Claude Long.

LAURA: I've never heard this phrase.

MATT: This is like Anthropic's chatbot with a long context, so like you can put. Aaron will be like, oh, can I paste the entire group chat history?

AARON: Oh yeah, I got Clong. Apparently that wasn't acceptable so that it.

MATT: Can summarize it for me and tell me what's happened since I was last here. And everyone is like, Aaron, don't give our data to Anthropic, that is already suss.

LAURA: Enough with the impressions. How do we feel about the Internet privacy stuff? Are you instinctively weirded out by them farming out your personal information, or is it just like, it gives me good ads or whatever, I don't care?

MATT: I lean a little towards feeling weird having my data sold. I don't have a really strong, and this is probably like a personal failing of mine of not having a really strong, well formed opinion here. But I feel a little sketched out when I'm like, all my data is being sold to everyone. And I don't share. There is this vibe on Twitter that the EU cookies prompts are like destroying the Internet, that this is regulation gone wrong. I don't share that instinct. But maybe it's just because I have average tolerance for clicking no cookies or yes cookies on stuff. And I have this vibe that will.

AARON: Sketched out by data. I think I'm broadly fine with companies having my information and selling it to ad targeting, specifically. I do trust Google a lot to not be weird about it, even if it's technically legal. And by be weird about it, what do I mean? Like, I don't even know what I mean exactly. If one of their random employees, I don't know, if I got into a fight or something with one of their random employees, it would be hard for this person to track down and just see my individual data. And that's just a random example off the top of my head. But yeah, I could see my view changing if they started, I don't know, or it started leaching into the physical world more. But it seems just like for online ads, I'm pretty cool with everything.

LAURA: Have you ever gone into the ad personalization and tried to see what demographics they peg you as?

AARON: Oh yeah. We can pull up mine right now.

LAURA: It's so much fun doing that.
It's like they get me somewhat: like the age, gender, they can predict relationship status, which is really weird.

AARON: That's weird.

MATT: Did you test this when you were in and not in relationships to see if they got it right?

LAURA: No, I think it's like they accumulate data over time. I don't know. But then it's like we say that you work in a mid sized finance. Fair enough.

MATT: That's sort of close.

LAURA: Yeah.

AARON: Sorry. Keep on podcasting.

LAURA: Okay.

MATT: Do they include political affiliation in the data you can see?

AARON: Okay.

MATT: I would have been very curious, because I think we're all a little bit idiosyncratic. I'm probably the most normie of any of us in terms of. I can be pretty easily sorted into, like, yeah, you're clearly a Democrat, but all of us have that classic slightly. I don't know what you want to call it. Like, neoliberal project vibe or, like, supply side. Yeah. Like, some of that going on in a way that I'm very curious.

LAURA: The algorithm is like, advertising DeSantis.

AARON: Yeah.

MATT: I guess it must think that there's some probability that you're going to vote in a Republican primary.

LAURA: I live in DC. Why on earth would I even vote, period?

MATT: Well, in the primary, your vote is going to count. I actually would think that in the primary, DC is probably pretty competitive, but I guess it votes pretty. I think it's worth.

AARON: I feel like I've seen, like, a.

MATT: I think it's probably hopeless to live. Find your demographic information from Twitter. But, like.

AARON: Age 13 to 54. Yeah, they got it right. Good job. I'm only 50, 99.9% confident. Wait, that's a pretty general.

MATT: What's this list above?

AARON: Oh, yeah. This is such a nerd snipe for me. It's just like seeing y'all. I don't watch any. I don't regularly watch any sort of TV series. And it's like, best guesses of, like, I assume that's what it is. It thinks you watch Dune, and I haven't heard of a lot of these.

MATT: Wait, you watch Cocaine Bear?

AARON: Big Bang Theory? No, I definitely have watched the Big Bang Theory. Like, I don't know, ten years ago. I don't know. Was it just, like, random Korean script or whatever?

MATT: When I got Covid real bad. Not real bad, but I was very sick and in bed in 2022. Yeah, the Big Bang Theory was like, what I would say.

AARON: These are my interests. It's actually pretty interesting, I think. Wait, hold on. Let me.

MATT: Oh, wait, it's like, true or false for each of these?

AARON: No, I think you can manually just disable and say, like, oh, I'm not, actually. And, like, I did that for Olivia Rodrigo because I posted about her once, and then it took over my feed, and so then I had to say, like, no, I'm not interested in Olivia Rodrigo.

MATT: Wait, can you control-F "true" here? Because almost all of these. Wait, sorry. Is that Argentine politics?

AARON: No, it's just this.

MATT: Oh, wait, so it thinks you have no interest?

AARON: No, this is disabled, so I haven't. And for some reason, this isn't the

03-09
01:35:53

Best of Pigeon Hour

Table of contents

Note: links take you to the corresponding section below; links to the original episode can be found there.

* Laura Duffy solves housing, ethics, and more [00:01:16]
* Arjun Panickssery solves books, hobbies, and blogging, but fails to solve the Sleeping Beauty problem because he's wrong on that one [00:10:47]
* Nathan Barnard on how financial regulation can inform AI regulation [00:17:16]
* Winston Oswald-Drummond on the tractability of reducing s-risk, ethics, and more [00:27:48]
* Nathan Barnard (again!) on why general intelligence is basically fake [00:34:10]
* Daniel Filan on why I'm wrong about ethics (+ Oppenheimer and what names mean in like a hardcore phil of language sense) [00:56:54]
* Holly Elmore on AI pause, wild animal welfare, and some cool biology things I couldn't fully follow but maybe you can [01:04:00]
* Max Alexander and I solve ethics, philosophy of mind, and cancel culture once and for all [01:24:43]
* Sarah Woodhouse on discovering AI x-risk, Twitter, and more [01:30:56]
* Pigeon Hour x Consistently Candid pod-crossover: I debate moral realism with Max Alexander and Sarah Hastings-Woodhouse [01:41:08]

Intro [00:00:00]

To wrap up the year of Pigeon Hour, the podcast, I put together some clips from each episode to create a best-of compilation. This was inspired by 80,000 Hours, a podcast that did the same with their episodes, and I thought it was pretty cool and tractable enough.

It's important to note that the clips I chose range in length significantly. This does not represent the quality or amount of interesting content in the episode. Sometimes there was a natural place to break the episode into a five-minute chunk, and other times it wouldn't have made sense to take a five-minute chunk out of what really needed to be a 20-minute segment. I promise I'm not just saying that.

So without further ado, please enjoy.

#1: Laura Duffy solves housing, ethics, and more [00:01:16]

In this first segment, Laura Duffy and I discuss the significance and interpretation of Aristotle's philosophical works in relation to modern ethics and virtue theory.

AARON: Econ is like more interesting. I don't know. I don't even remember of all the things. I don't know, it seems like kind of cool. Philosophy. Probably would have majored in philosophy if signaling wasn't an issue. Actually, maybe I'm not sure if that's true. Okay. I didn't want to do the old stuff though, so I'm actually not sure. But if I could: Aristotle, it's all wrong. Didn't you say you got a lot out of the Nicomachean Ethics, or however you pronounce that?

LAURA: The Nicomachean Ethics: a guide to how you should live your life. About ethics as applied to your life, because you can't be perfect. Utilitarians, there's no way to be that.

AARON: But he wasn't even responding to utilitarianism. I'm sure it was a good work given the time, but like, there's like no other discipline in which we care. So people care so much about like, what people thought 2000 years ago because like the presumption, I think the justified presumption is that things have iterated and improved since then. And I think that's true. It's like not just a presumption.

LAURA: Humans are still rather the same, and what our needs are for living amongst each other in political society are kind of the same. I think America's founding is very influenced by what people thought 2000 years ago.

AARON: Yeah, descriptively that's probably true. But I don't know, it seems like the whole body of philosophers have, they've already done the work of, like, compressing the good stuff.
Like the entire academy since like, 1400 or whatever has like, compressed the good stuff and like, gotten rid of the bad stuff. Not in like a high fidelity way, but like a better than chance way. And so the stuff that remains, if you just take the state of, I don't know, if you read the Oxford Handbook of whatever it is, like ethics or something, the takeaways you're going to get from that are just better than the takeaways you're going to get from a summary of the state of the knowledge in any prior year, at least unless something weird happened. And I don't know. I don't know if that makes sense.

LAURA: I think we're talking about two different things, though. Okay. In terms of knowledge about logic or something or, I don't know, argumentation about trying to derive the correct moral theory or something, versus how should we think about our own lives. I don't see any reason as to why the framework of virtue theory is incorrect just because it's old. There's many virtue theorists now who are like, oh yeah, they were really on to something and we need to adapt it for the times in which we live and the kind of societies we live in now. But it's still like there was a huge kernel of truth in at least the way of thinking that Aristotle put forth in terms of balancing the different virtues that you care about and trying to find. I think this is true. Right? Like take one virtue of his: humor. You don't want to be on one extreme where you're just basically a meme your entire life. Everybody thinks you're funny, but that's just not very serious. But you don't want to be a bore, and so you want to find somewhere in the middle where it's like you have a good sense of humor, but you can still function and be respected by other people.

AARON: Yeah. Once again, I agree. Well, I don't agree with everything. I agree with a lot of what you just said. I think there were like two main points of either confusion or disagreement. And like, the first one is that I definitely think, no, Aristotle shouldn't be discounted, or like his ideas or virtue ethics or anything like that shouldn't be discounted, just because the canonical texts were written a long time ago. I guess it's just like a presumption that I have, a pretty strong presumption, that conditional on them being good, they would also be written about today. And so you don't actually need to go back to the founding texts, and in fact, you probably shouldn't, because the good stuff will be explained better and not in weird terms. The terms are used differently and they're like translations from Aramaic or whatever. Probably not Aramaic, probably something else. And yeah, I'm not sure if you.

LAURA: Agree with this, because we have certain assumptions about what words like purpose mean now, whereas the old conceptions of them, like telos or happiness, are probably a bit richer. Eudaimonia is a much better concept, and to read the original text and see how those different concepts work together is actually quite enriching compared to how people use these words now. And it would take, like, I don't know, I think there just is a lot of value in looking at how these were originally conceived, because popularizers of the works now, or people who are seriously doing philosophy using these concepts, you just don't have the background knowledge that's necessary to understand them fully if you don't read the canonical text.

AARON: Yeah, I think that would be true if you were a native speaker. Do you know Greek?
If you know Greek, this is like dumb because then you're just right.

LAURA: I did take a quarter of it.

AARON: Oh God. Oh my God. I don't know if that counts, but that's like more than anybody should ever take. No, I'm just kidding. That's very cool. No, because I was going to say, if you're a native speaker of Greek and you have the connotations of the word eudaimonia and you were, like, living back then, I would say: yeah, that's true actually. That's a lot of nuanced connotation and context that definitely gets lost with translation. But once you take the jump of reading English translations of the texts, there's nothing super special. You're not getting any privileged knowledge from saying the word eudaimonia as opposed to just saying some other term as a reference to that concept or something. You're absorbing the connotation and the context via English, I guess, via the mind of literally the translators who have, like.

LAURA: Yeah, well see, I tried to learn virtue theory by any other route than reading Aristotle.

AARON: Oh God.

LAURA: I took a course specifically on Plato and Aristotle.

AARON: Sorry, I'm not laughing at you. I'm just like the opposite type of philosophy person.

LAURA: But keep going. Fair. But she had us read his Physics before we read the Nicomachean Ethics.

AARON: Think he was wrong about all that.

LAURA: Stuff, but it made you understand what he meant by his teleology theory so much better, in a way that I could not get if I was reading some modern thing.

AARON: I don't know, I feel like you probably could. No, sorry, that's not true. I don't think you could get what Aristotle the man truly believed as well via a modern text. But is that what you? Depends. If you're trying to be a scholar of Aristotle, maybe that's important. If you're trying to find the best or truest ethics and learn the lessons of how to live, that's like a different type of task. I don't think Aristotle the man should be all that privileged in that.

LAURA: If all of the modern people who are talking about virtue theory are basically Aristotle, then I don't see the difference.

AARON: Oh, yeah, I guess. Fair enough. And then I would say, like, oh, well, they should probably start. Is that in fact the state of things in virtue theory? I don't even know.

LAURA: I don't know either.

#2: Arjun Panickssery solves books, hobbies, and blogging, but fails to solve the Sleeping Beauty problem because he's wrong on that one [00:10:47]

All right, next, Arjun Panickssery and I explore the effectiveness of reading books in retaining and incorporating knowledge, discussing the value of long form content and the impact of great literary works on understanding and shaping personal worldviews.

ARJUN: Oh, you were in the book chat, though. The book rant group chat, right?

AARON: Yeah, I think I might have just not read any of it. So do you want to fill me in on what I should have read?

ARJUN: Yeah

01-24
01:47:33

#10: Pigeon Hour x Consistently Candid pod-crossover: I debate moral realism* with Max Alexander and Sarah Hastings-Woodhouse

Intro

At the gracious invitation of AI Safety Twitter-fluencer Sarah Hastings-Woodhouse, I appeared on the very first episode of her new podcast "Consistently Candid" to debate moral realism (or something kinda like that, I guess; see below) with fellow philosophy nerd and EA Twitter aficionado Max Alexander, alongside Sarah as moderator and judge of sorts.

What I believe

In spite of the name of the episode and the best of my knowledge/understanding a few days ago, it turns out my stance may not be ~genuine~ moral realism. Here's my basic meta-ethical take:

* Descriptive statements that concern objective relative goodness or badness (e.g., "it is objectively better for Sam to donate $20 than to buy an expensive meal that costs $20 more than a similar, less fancy meal") can be and sometimes are true; but
* Genuinely normative claims like "Sam should (!) donate $20 and should not buy that fancy meal" are never objectively true.

Of course the label per se doesn't really matter. But for a bunch of reasons it still seems wise to figure out which label really does work best.

Some definitions

Stanford Encyclopedia of Philosophy: Moral realists are those who think that, in these respects, things should be taken at face value—moral claims do purport to report facts and are true if they get the facts right. Moreover, they hold, at least some moral claims actually are true. That much is the common and more or less defining ground of moral realism (although some accounts of moral realism see it as involving additional commitments, say to the independence of the moral facts from human thought and practice, or to those facts being objective in some specified way).

Wikipedia: The robust model of moral realism commits moral realists to three theses:

* The semantic thesis: The primary semantic role of moral predicates (such as "right" and "wrong") is to refer to moral properties (such as rightness and wrongness), so that moral statements (such as "honesty is good" and "slavery is unjust") purport to represent moral facts, and express propositions that are true or false (or approximately true, largely false, and so on).
* The alethic thesis: Some moral propositions are in fact true.
* The metaphysical thesis: Moral propositions are true when actions and other objects of moral assessment have the relevant moral properties (so that the relevant moral facts obtain), where these facts and properties are robust: their metaphysical status, whatever it is, is not relevantly different from that of (certain types of) ordinary non-moral facts and properties.

And to settle the matter, here's what some LLMs have to say.

My prompt:

Suppose a person believes the following:

1: Descriptive statements that concern objective relative goodness or badness (e.g., "it is objectively better for Sam to donate $20 than to buy an expensive meal that costs $20 more than a similar, less fancy meal") can be and sometimes are true; but

2: Genuinely normative claims like "Sam should donate $20 and should not buy that fancy meal" are never objectively true.

Is this person best categorized as a moral realist?

ChatGPT

Given these two beliefs, the person's stance doesn't align completely with traditional moral realism, which typically asserts that there are objective truths about both moral facts (like the goodness or badness of actions) and moral duties (what one should or should not do).
However, it also doesn't fully align with moral anti-realism, since the person believes in some form of objective truth regarding moral value. One possible categorization for this stance could be a form of "value realism" combined with "normative anti-realism" or "normative subjectivism." This would mean that the person believes in objective truths about what is better or worse (value realism) but thinks that what one ought to do (normative claims) is not a matter of objective truth but perhaps of subjective judgment or societal conventions.

This position is quite sophisticated and might be best understood as a hybrid or intermediate position in the landscape of moral philosophy, rather than fitting neatly into the category of moral realism or its standard alternatives.

Great, so much for trying to fit things into nice little buckets.

Claude, fortunately, is more definitive:

No, the view described is not best categorized as moral realism. Here's why:

Moral realism is the view that there are objective moral facts or truths. A moral realist believes that some moral claims, including normative claims about what one should or shouldn't do, can be objectively and mind-independently true.

The view in the example accepts that there can be objective truths about relative degrees of goodness or badness (claim 1). However, it denies that there are objective normative truths about what one should or shouldn't do (claim 2). By rejecting that genuine normative 'should/shouldn't' claims can be objectively true, the view denies a key commitment of moral realism - namely that moral claims can state objective normative facts. At best it allows for objective evaluative facts about goodness/badness, not prescriptive facts about obligations.

So this view would be more accurately categorized as a form of moral anti-realism like non-cognitivism or error theory. It does not qualify as moral realism because of its rejection of the existence of objective normative truths that could make strong normative claims justified or true. The commitment to objective evaluative facts alone does not sufficiently qualify the view as realist in the metaethical sense.

In summary, the view accepts objective "good/bad" facts but denies objective "should/shouldn't" facts. This combination puts it at odds with core commitments of moral realism, so it is better seen as an anti-realist view.

I'm gonna disagree with Claude here, at least in that the bolded claim above (objective normative facts are a key part of moral realism) just seems…wrong? Like, all the definitions could replace the term "moral fact" or "moral claim" with "normative fact" and "normative claim," but they do not!

So I really don't have a great answer as to which label works best. One of two ~human-level chatbots giving me a definitive-sounding "anti-realism" answer is too much evidence for me, whose only formal philosophy training is an undergrad minor, to rule that one out. There are also good arguments, I think, for the "realist" label, as well as for "neither" (i.e., 'secret third thing'). In fact all of these seem pretty similar in terms of argument convincingness/correctness. So, in sum, 🤷‍♂️.

Some tweets on the matter, both of which are images that link to the original:

Ok now you can vote:

Anyway, long story short, I believe something kinda sorta like moral realism, maybe, and that is what we discuss. Without further ado, here's the (very imperfect) transcript of the episode. Please note that some of the long blocks of text have been cleaned up a bit via minor rewording.
I am too lazy to do that for everything.

Transcript

Sarah: Hello, and welcome to my first experiment in low effort podcasting. In this episode of the podcast, which I'm now calling Consistently Candid because some people thought that was funny, I talked to Aaron Bergman and Max Alexander about moral realism. They kind of debate it. And I, having read the Wikipedia page about five minutes previously, occasionally chime in with some opinions that I hadn't thought out very well. So enjoy!

Sarah: Anyway, I guess this is my podcast now, but I don't have a name for it yet.

Max: That's a good podcast name.

Sarah: Introduce it.

Aaron: Can I broadcast this on Pigeon Hour as well?

Sarah: Yeah, sure.

Max: Okay, cool.

Aaron: Sweet.

Sarah: But I also want to make my own thing because people.

Aaron: No, totally. But yeah, you can say no, you can copyright it and then sue me.

Sarah: No. Well, that's fine. This is totally, like, anyone can broadcast it anywhere they want.

Max: You can text it or whatever, get it on the Trump website.

Sarah: Yeah. So you guys have a disagreement, apparently, about moral realism. I have briefly skimmed the Wikipedia page, and I don't have an opinion, but I thought we'd have it out.

Aaron: No, I feel like the format should be that we try to convince you…

Sarah: So, yeah, you each try and convince me you're right, and I will come to a conclusion and let you know who I'm persuaded by. And if at any point I have, like, a thought that's worth articulating, I'll weigh in with that. But I think that's kind of unlikely because I don't really know anything; I'm playing a moderating role here.

Max: Well, confusion is worth pointing out or something like that, right?

Sarah: Yeah, I can do that at regular intervals. I can tell you how confused I am. That's definitely doable.

Aaron: Maybe you should start with, like, do you have an initial take at all, or are you really 50/50?

Sarah: I mean, from very briefly reading the Wikipedia, it, like, doesn't sound true to me.

Max: Oh, hell yeah!

Aaron: No. Okay, podcast over.

Max: Way over the Wikipedia just to see what it says. Did you read actual Wikipedia?

Sarah: Wikipedia? Yeah, it says "moral realism (also ethical realism) is the position that ethical sentences express propositions that refer to objective features of the world. That is, features independent of subjective opinion."

Aaron: Yeah, facts.

Max: Good summary from Wikipedia.

Sarah: Fake.

Max: My job is going to be easy.

Sarah: Then. But I'm totally open to being persuaded.

Aaron: Okay. The first thing is that I recognize that it sounds fake, it sounds very sus, but then it actually surprisingly checks out. So I just want to get that on the table.

Sarah: Okay, what about if each of you does, like, a little opening spiel about why you think you're right, and then you can yell at each other about it afterwards.

Aaron: Yeah. Max, do you want to go first or second?

Max: I'll go second.

Sarah: Okay.

Aaron: Well, the first thing is that. The thing I always say is that I simply deferred to Sharon Hewitt Rawlette, who was on the 80,000 Hours Pod

12-28
01:08:17

#9: Sarah Woodhouse on discovering AI x-risk, Twitter, and more

Note: I can't seem to edit or remove the "transcript" tab. I recommend you ignore that and just look at the much higher quality, slightly cleaned up one below. Most importantly, follow Sarah on Twitter!

Summary (written by ChatGPT, as you can probably tell)

In this episode of Pigeon Hour, host Aaron delves deep into the world of AI safety with his guest, Sarah Woodhouse. Sarah shares her unexpected journey from fearing job automation to becoming a recognized voice on AI safety Twitter. Her story starts with a simple Google search that led her down a rabbit hole of existential dread and unexpected fame on social media. As she narrates her path from lurker to influencer, Sarah reflects on the quirky dynamics of the AI safety community, her own existential crisis, and the serendipitous tweet that resonated with thousands.

Aaron and Sarah's conversation takes unexpected turns, discussing everything from the peculiarities of EA rationalists to the surprisingly serious topic of shrimp welfare. They also explore the nuances of AI doom probabilities, the social dynamics of tech Twitter, and Sarah's unexpected viral fame as a tween. This episode is a rollercoaster of insights and anecdotes, perfect for anyone interested in the intersection of technology, society, and the unpredictable journey of internet fame.

Topics discussed

Discussion on AI Safety and Personal Journeys:

* Aaron and Sarah discuss her path to AI safety, triggered by concerns about job automation and the realization that AI could potentially replace her work.
* Sarah's deep dive into AI safety started with a simple Google search, leading her to Geoffrey Hinton's alarming statements, and eventually to a broader exploration without finding reassuring consensus.
* Sarah's Twitter engagement began with lurking, later evolving into active participation and gaining an audience, especially after a relatable tweet thread about an existential crisis.
* Aaron remarks on the rarity of people like Sarah, who follow the AI safety rabbit hole to its depths, considering its obvious implications for various industries.

AI Safety and Public Perception:

* Sarah discusses her surprise at discovering the AI safety conversation happening mostly in niche circles, often with a tongue-in-cheek attitude that could seem dismissive of the serious implications of AI risks.
* The discussion touches on the paradox of AI safety: it's a critically important topic, yet it often remains confined within certain intellectual circles, leading to a lack of broader public engagement and awareness.

Cultural Differences and Personal Interests:

* The conversation shifts to cultural differences between the UK and the US, particularly in terms of sincerity and communication styles.
* Personal interests, such as theater and musicals (like "Glee"), are also discussed, revealing Sarah's background and hobbies.

Effective Altruism (EA) and Rationalist Communities:

* Sarah points out certain quirks of the EA and rationalist communities, such as their penchant for detailed analysis, hedging statements, and the use of probabilities in discussions.
* The debate around the use of "P(Doom)" (probability of doom) in AI safety discussions is critiqued, highlighting how it can be both a serious analytical tool and a potentially alienating jargon for outsiders.

Shrimp Welfare and Ethical Considerations:

* A detailed discussion on shrimp welfare as an ethical consideration in effective altruism unfolds, examining the moral implications and effectiveness of focusing on animal welfare at a large scale.
* Aaron
defends his position on prioritizing shrimp welfare in charitable giving, based on the principles of importance, tractability, and neglectedness.

Personal Decision-Making in Charitable Giving:

* Strategies for personal charitable giving are explored, including setting a donation cutoff point to balance moral obligations with personal needs and aspirations.

Transcript

AARON: Whatever you want. Okay. Yeah, I feel like you said this on Twitter. The obvious thing is, how did you learn about AI safety? But maybe you've already covered that. That's boring. First of all, do you want to talk about that? Because we don't have to.

SARAH: I don't mind talking about that.

AARON: But it's sort of your call, so whatever. I don't know. Maybe briefly, and then we can branch out?

SARAH: I have a preference for people asking me things and me answering them rather than me setting the agenda. So don't ever feel bad about just asking me stuff because I prefer that.

AARON: Okay, cool. But also, it feels like the kind of thing where, of course, we have AI. Everyone already knows that this is just like the voice version of these four tweets or whatever. But regardless. Yes. So, Sarah, as Pigeon Hour guest, what was your path through life to AI safety Twitter?

SARAH: Well, I realized that a chatbot could very easily do my job and that my employers either hadn't noticed this or they had noticed, but they were just being polite about it and they didn't want to fire me because they're too nice. And I was like, I should find out what AI development is going to be like over the next few years so that I know if I should go and get good at some other stuff.

SARAH: I just had a little innocent Google. And then within a few clicks, I'd completely doom pilled myself. I was like, we're all going to die. I think I found Geoffrey Hinton because he was on the news at the time, because he just quit his job at Google. And he was there saying things that sounded very uncertain, very alarming. And I was like, well, he's probably the pessimist, but I'm sure that there are loads of optimists to counteract that because that's how it usually goes. You find a doomer and then you find a bunch of more moderate people, and then there's some consensus in the middle that everything's basically fine.

SARAH: I was like, if I just keep looking, I'll find the consensus because it's there. I'm sure it's there. So I just kept looking and looking for it. I looked for it for weeks. I just didn't find it. And then I was like, nobody knows what's going on. This seems really concerning. So then I started lurking on Twitter, and then I got familiar with all the different accounts, whatever. And then at some point, I was like, I'm going to start contributing to this conversation, but I didn't think that anybody would talk back to me. And then at some point, they started talking back to me and I was like, this is kind of weird.

SARAH: And then at some point, I was having an existential crisis and I had a couple of glasses of wine or something, and I just decided to type this big, long thread. And then I went to bed. I woke up the next morning slightly grouchy and hungover. I checked my phone and there were all these people messaging me and all these people replying to my thread being like, this is so relatable. This really resonated with me. And I was like, what is going on?

AARON: You were there on Twitter before that thread, right? I'm pretty sure I was following you.

SARAH: I think, yeah, I was there before, but no one ever really gave me any attention prior to that.
I think I had a couple of tweets that blew up before that, but not to the same extent. And then after that, I think I was like, okay, so now I have an audience. When I say an audience, like, obviously a small one, but more of an audience than I’ve ever had before in my life. And I was like, how far can I take this?SARAH: I was a bit like, people obviously started following me because I’m freaking out about AI, but if I post an outfit, what’s going to happen? How far can I push this posting, these fit checks? I started posting random stuff about things that were completely unrelated. I was like, oh, people are kind of here for this, too. Okay, this is weird. So now I’m just milking it for all it’s worth, and I really don’t know why anybody’s listening to me. I’m basically very confused about the whole thing.AARON: I mean, I think it’s kind of weird from your perspective, or it’s weird in general because there aren’t that many people who just do that extremely logical thing at the beginning. I don’t know, maybe it’s not obvious to people in every industry or whatever that AI is potentially a big deal, but there’s lots of truckers or whatever. Maybe they’re not the best demographic or the most conducive demographic, like, getting on Twitter or whatever, but there’s other jobs where it would make sense to look into that. It’s kind of weird to me that only you followed the rabbit hole all the way down.SARAH: I know! This is what I…Because it’s not that hard to complete the circle. It probably took me like a day, it took me like an afternoon to get from, I’m worried about job automation to I should stop saving for retirement. It didn’t take me that long. Do you know what I mean? No one ever looks. I literally don’t get it. I was talking to some people. I was talking to one of my coworkers about this the other day, and I think it came up in conversation. She was like, yeah, I’m a bit worried about AI because I heard on the radio that taxi drivers might be out of a job. That’s bad. And I was like, yeah, that is bad. But do you know what else? She was like, what are the AI companies up to that we don’t know about? And I was like, I mean, you can go on their website. You can just go on their website and read about how they think that their technology is an extinction risk. It’s not like they’re hiding it. It’s literally just on there and no one ever looks. It’s just crazy.AARON: Yeah. Honestly, I don’t even know, if I was in your situation, if I would have done that. It’s like, in some sense, I am surprised. It’s very few people, maybe like one, but at another level, it’s more rationality than most humans have or something. Yeah. You regret going down that rabbit hole?SARAH: Yeah, kind of. Although I’m enjoying the Twitter thing and it’s kind of fun, and it turns out there’s endless comedic material that you can get out of impending doom. The whole thing i

11-15
01:14:47

#8: Max Alexander and I solve ethics, philosophy of mind, and cancel culture once and for all

* Follow Max on Twitter* And read his blog* Listen here or on Spotify or Apple Podcasts * RIP Google Podcasts 🪦🪦🪦SummaryIn this philosophical and reflective episode, hosts Aaron and Max engage in a profound debate over the nature of consciousness, moral realism, and subjective experience. Max, a skeptic of moral realism, challenges Aaron on the objective moral distinction between worlds with varying levels of suffering. They ponder the hard problem of consciousness, discussing the possibility of philosophical zombies and whether computations could account for consciousness. As they delve into the implications of AI on moral frameworks, their conversation extends to the origins of normativity and the nonexistence of free will.The tone shifts as they discuss practical advice for running an Effective Altruism group, emphasizing the importance of co-organizers and the balance between being hospitable and maintaining normalcy. They exchange views on the potential risks and benefits of being open in community building and the value of transparency and honest feedback.Transitioning to lighter topics, Max and Aaron share their experiences with social media, the impact of Twitter on communication, and the humorous side of office gossip. They also touch on the role of anonymity in online discussions, pondering its significance against the backdrop of the Effective Altruism community.As the episode draws to a close, they explore the consequences of public online behavior for employment and personal life, sharing anecdotes and contemplating the broader implications of engaging in sensitive discourses. Despite their digressions into various topics, the duo manages to weave a coherent narrative of their musings, leaving listeners with much to reflect upon.TranscriptAARON: Without any ado whatsoever. Max Alexander and I discuss a bunch of philosophy things and more.MAX: I don't think moral realism is true or something.AARON: Okay, yeah, we can debate this.MAX: That's actually an issue then, because if it's just the case that utilitarianism, or this axiology, is true or something, whether or not I'm bothered by it or would make certain tradeoffs personally doesn't actually matter. But if you had the godlike AI or like, I need to give it my axiological system or something, and there's not an objective one, then this becomes more of a problem that you keep running into these issues or something.AARON: Okay, yeah, let's debate. Because you think I'm really wrong about this, and I think you're wrong, but I think your position is more plausible than you think. My position is probably. I'm at like 70%. Some version of moral realism is true. And I think you're at, like, what? Tell me. Like, I don't know, 90 or something.MAX: I was going to say probably 99% or something. I've yet to hear a thing that's plausible or something here.AARON: Okay, well, here, let's figure it out once and for all. So you can press a button that doesn't do anything else. The only thing that happens is that it creates somebody in the world who's experiencing bad pain. There's no other effect in the world. And then you have to order these two worlds. There's no normativity involved. You only have to order them according to how good they are. This is my intuition pump. This isn't like a formal argument. This is my intuition pump that says, okay, the one without that suffering person and no other changes is better. Subjectively. No, not subjectively. There's a fact of the matter as to which one is better; it's, like, not subjective.
I mean, I feel like, morally better and better here just are synonyms. All things considered. Better, morally better, whatever. Do you have a response, or do you just want to say, like, no, you're a formal argument.MAX: What makes this fact of the matter the case or something like that?AARON: Okay, I need to get into my headspace where I've done this or had this debate before. I do know. I'll defer to Sharon Hewitt Rawlette, a not-too-long-ago 80K podcast guest who basically made the case for hedonic moral realism and hedonic value being the one thing that intrinsically matters and a moral realist view based on that. And I basically disagree with her. Okay. It's like settling in now. Yeah. So it is just the fact of the matter that pleasure is good. And if you say that's not true, then you're wrong and pain is bad. And if you say that that's not true, you're just wrong. That's kind of the argument. That's it. And then I can build on top of it. Where do you get ordering of the world from? But that's the core of the argument here.MAX: Yeah. I think you need an explanation for why this is the fact of the matter or something.AARON: Okay. I mean, do I need an explanation for why one equals one or something like that? Do you need an explanation?MAX: Yes, I think yes. Really? Because we take this to be the case or something, but the symbols one plus one equals two or something is like by itself not true or something. It's like just a bunch of lines, really, or something. Like there's all these axioms and things we build on with the mathematical system and you could do other ones. There are like a bunch of other systems.AARON: I guess if you're a true epistemological nihilist and you think there are no statements that are true, then I'm probably not going to convince you. Is that the case for you?MAX: I don't think it's the case that there aren't things that are true or something.AARON: Do you think there's anything that is true? Can you give me an example? Is there a bed behind you?MAX: I'll say yes, but that's probably couched. You could probably, I think, represent the universe or something as a bunch of a matrix of atom positions or subatomic particle positions and these things, and maybe the rules, the equations that govern how they interact or something.AARON: Yeah, I agree.MAX: Make claims that are truth-evaluable based on that matrix or something. And then you could be like, we can then draw fuzzy concepts around certain things in this matrix and then say more true and true things or whatever.AARON: So I think our disagreement here is that maybe, I don't know if it's a disagreement, but the hard problem of consciousness introduces the fact that that description of the world is just not complete. You have subjective experience also. Are you a phenomenal realist?MAX: What's the definition of that again?AARON: So you think qualia is like a real legit thing?MAX: Is qualia just experience or something?AARON: I'm sorry, I feel like I'm just depending, assuming that, you know, every single term that has ever been used in philosophy. I feel like the thing that I want to say is qualia is like. Yeah, I'll say it's like subjective experience, basically. But then people will say, like, oh, qualia exists, but not in the sense that people normally think. And. But I want to use the strong version of qualia that people argue about. It's real. It's, like, genuine. There's no illusions going on. There's no such thing as functional qualia.
If there's functional pain, it is like a different thing than what people mean when they say pain. What most people mean when they say pain, most of the time, is that there's, like, a real, genuine, legit subjective experience going on. Do you think this thing is real?MAX: I would say yes, but it does seem like what I would say subjective experience or something is like a type of computation or something.AARON: I actually lean towards functionalism. Are you familiar with functionalism in the philosophy of mind or whatever?MAX: Yeah. Wouldn't that just be the. I think that's what I said. Right. It's just computations or whatever.AARON: So I'm actually not super sure about this. So I apologize to the philosophical community if I'm getting this wrong, but my sense is that when people say functionalism, sometimes they mean that, as in not exactly empirical, but sort of empirical, fact of the world is that if you have some computations, you get qualia, and other people mean that they are just identical. They are just the same thing. There's no, like, oh, you get one and you get the other. They're just the same thing. And I think to say that computations are identical to, just mean the same thing as, qualia is just not true, because it's at least conceivable that. Tell me if you disagree with this. I claim that it's at least conceivable that you have computers that do some sort of computation that you hypothesize might be conscious, but they are not, in fact, conscious. And I think this is conceivable.MAX: Yeah, I agree. Though I would say when it is the case, something, it doesn't seem that means there's something besides just computations going on, or, like, the specific type of computations, like, really, what you said there is. It's conceivable you could have computations that.AARON: Look like we want. Okay, yeah, sorry. You're right. Actually, what I mean is that. Sorry, not what I mean, but what I should have said is that it is conceivable that functionalism is false. Meaning that you can get. Meaning that you can have two sets of systems doing computations, and one of them has qualia and the other one does not. One of them is conscious, the other one is not. Do you think this is conceivable?MAX: Well, do you mean like identical computations or something? I think that'd be the necessary thing or something, because my computer is doing computations right now, but I don't think it's conscious. And I'm doing computations right now, and I do think I'm conscious.AARON: So if you think that you could run your brain's program on an arbitrarily powerful large computer, do you think it's conceivable that that hypothetical computer would not have conscious experience?MAX: I do, but I think this is like the real question. Or the reason I would say it's conceivable that it's not having conscious experience is because I would think your simulation just isn't doing the right sort of thing. You think it is, but it's not. For whatever reason, carbon and a

11-06
01:10:41

#7: Holly Elmore on AI pause, wild animal welfare, and some cool biology things I couldn't fully follow but maybe you can

* Listen on Spotify or Apple Podcasts* Be sure to check out and follow Holly’s Substack and org Pause AI. Blurb and summary from ClongBlurbHolly and Aaron had a wide-ranging discussion touching on effective altruism, AI alignment, genetic conflict, wild animal welfare, and the importance of public advocacy in the AI safety space. Holly spoke about her background in evolutionary biology and how she became involved in effective altruism. She discussed her reservations around wild animal welfare and her perspective on the challenges of AI alignment. They talked about the value of public opinion polls, the psychology of AI researchers, and whether certain AI labs like OpenAI might be net positive actors. Holly argued for the strategic importance of public advocacy and pushing the Overton window within EA on AI safety issues.Detailed summary* Holly's background - PhD in evolutionary biology, got into EA through New Atheism and looking for community with positive values, did EA organizing at Harvard* Worked at Rethink Priorities on wild animal welfare but had reservations about imposing values on animals and whether we're at the right margin yet* Got inspired by FLI letter to focus more on AI safety advocacy and importance of public opinion* Discussed genetic conflict and challenges of alignment even with "closest" agents* Talked about the value of public opinion polls and influencing politicians* Discussed the psychology and motives of AI researchers* Disagreed a bit on whether certain labs like OpenAI might be net positive actors* Holly argued for importance of public advocacy in AI safety, thinks we have power to shift Overton window* Talked about the dynamics between different AI researchers and competition for status* Discussed how rationalists often dismiss advocacy and politics* Holly thinks advocacy is neglected and can push the Overton window even within EA* Also discussed Holly's evolutionary biology takes, memetic drive, gradient descent vs. natural selectionFull transcript (very imperfect)AARONYou're an AI pause advocate. Can you remind me of your shtick before that? Did you have an EA career or something?HOLLYYeah, before that I was an academic. I got into EA when I was doing my PhD in evolutionary biology, and I had been into New Atheism before that. I had done a lot of organizing for that in college. And while the enlightenment stuff and what I think is the truth about there not being a God was very important to me, I didn't like the lack of positive values. Half the people there were sort of people like me who are looking for community after leaving their religion that they grew up in. And sometimes as many as half of the people there were just looking for a way for it to be okay for them to upset people and take away stuff that was important to them. And I didn't love that. I didn't love organizing a space for that. And when I got to my first year at Harvard, Harvard Effective Altruism was advertising for its fellowship, which became the Arete Fellowship eventually. And I was like, wow, this is like, everything I want. And it has this positive organizing value around doing good. And so I was totally made for it. And pretty much immediately I did that fellowship, even though it was for undergrads. I did that fellowship, and I was immediately doing a lot of grad school organizing, and I did that for, like, six more years.
And yeah, by the time I got to the end of grad school, I realized I was very sick in my fifth year, and I realized the stuff I kept doing was EA organizing, and I did not want to keep doing work. And that was pretty clear. I thought, oh, because I'm really into my academic area, I'll do that, but I'll also have a component of doing good. I took the Giving What We Can pledge in the middle of grad school, and I thought, I actually just enjoy doing this more, so why would I do anything else? Then after grad school, I started applying for EA jobs, and pretty soon I got a job at Rethink Priorities, and they suggested that I work on wild animal welfare. And I have to say, from the beginning, it was a little bit like I don't know, I'd always had very mixed feelings about wild animal welfare as a cause area. How much do they assume the audience knows about EA?AARONA lot, I guess. I think as of right now, it's a pretty hardcore dozen people. Also. Wait, what year is any of this approximately?HOLLYSo I graduated in 2020.AARONOkay.HOLLYYeah. And then I was like, really?AARONOkay, this is not extremely distant history. Sometimes people are like, oh, yeah, like the OG days, like four or something. I'm like, oh, my God.HOLLYOh, yeah, no, I wish I had been in these circles then, but no, it wasn't until like, 2014 that I really got inducted. Yeah, which now feels old because everybody's so young. But yeah, in 2020, I finished my PhD, and I got this awesome remote job at Rethink Priorities during the pandemic, which was great, but I was working on wild animal welfare, which I'd always had some. So wild animal welfare, just for anyone who's not familiar, is like looking at the state of the natural world and seeing if there's a way that usually the hedonic so, like, feeling pleasure, not pain sort of welfare of animals can be maximized. So that's in contrast to a lot of other ways of looking at the natural world, like conservation, which are more about preserving a state of the world the way preserving, maybe ecosystem balance, something like that. Preserving species diversity. The priority with wild animal welfare is the affective welfare, like how it feels to be the animals. So it is very understudied, but I had a lot of reservations about it because I'm nervous about maximizing our values too hard onto animals or imposing them on other species.AARONOkay, that's interesting, just because we're so far away from the margin of I'm like a very pro wild animal welfare pilled person.HOLLYI'm definitely pro in theory.AARONHow many other people it's like you and formerly you and six other people or whatever seems like we're quite far away from the margin at which we're over optimizing in terms of giving heroin to all the sheep or I don't know, the bugs and stuff.HOLLYBut it's true the field is moving more in my direction and I think it's just because they're hiring more biologists and we tend to think this way or have more of this perspective. But I'm a big fan of Brian Tomasik's work. But stuff like finding out which species have the most capacity for welfare I think is already sort of the wrong scale. I think a lot will just depend on how much. What are the conditions for that species?AARONYeah, no, there's like seven from the.HOLLYCoarseness and the abstraction, but also there's a lot of you don't want anybody to actually do stuff like that and it would be more possible to do the more simple sounding stuff. My work there just consisted of being a huge downer. I respect that.
I did do some work that I'm proud of. I have a whole sequence on EA forum about how we could reduce the use of rodenticide, which I think was the single most promising intervention that we came up with in the time that I was there. I mean, I didn't come up with it, but that we narrowed down. And even that just doesn't affect that many animals directly. It's really more about the impact is from what you think you'll get with moral circle expansion or setting precedents for the treatment of non human animals or wild animals, or semi wild animals, maybe like being able to be expanded into wild animals. And so it all felt not quite up to EA standards of impact. And I felt kind of uncomfortable trying to make this thing happen in EA when I wasn't sure that my tentative conclusion on wild animal welfare, after working on it and thinking about it a lot for three years, was that we're sort of waiting for transformative technology that's not here yet in order to be able to do the kinds of interventions that we want. And there are going to be other issues with the transformative technology that we have to deal with first.AARONYeah, no, I've been thinking not that seriously or in any formal way, just like once in a while I just have a thought like oh, I wonder how the field of, like, I guess wild animal sorry, not wild animal. Just like animal welfare in general and including wild animal welfare might make use of AI above and beyond. I feel like there's like a simple take which is probably mostly true, which is like, oh, I mean the phrase that everybody loves to say is make AI go well or whatever that but that's basically true. Probably you make aligned AI. I know that's like a very oversimplification and then you can have a bunch of wealth or whatever to do whatever you want. I feel like that's kind of like the standard line, but do you have any takes on, I don't know, maybe in the next couple of years or anything more specifically beyond just general purpose AI alignment, for lack of a better term, how animal welfare might put to use transformative AI.HOLLYMy last work at Rethink Priorities was like looking a sort of zoomed out look at the field and where it should go. And so we're apparently going to do a public version, but I don't know if that's going to happen. It's been a while now since I was expecting to get a call about it. But yeah, I'm trying to think of what can I scrape from that?AARONAs much as you can, don't reveal any classified information. But what was the general thing that this was about?HOLLYThere are things that I think so I sort of broke it down into a couple of categories. There's like things that we could do in a world where we don't get AGI for a long time, but we get just transformative AI. Short of that, it's just able to do a lot of parallel tasks. And I think we could do a lot we could get a lot of what we want for wild animals by doing a ton of surveillance and having the ability to make incredibly precise changes to the ecosystem. Having surveillance so we know when something is like, and the capacity to do

10-17
01:37:43

#6 Daniel Filan on why I'm wrong about ethics (+ Oppenheimer and what names mean in like a hardcore phil of language sense)

Listen on: * Spotify* Apple Podcasts* Google PodcastsNote: the core discussion on ethics begins at 7:58 and moves into philosophy of language at ~1:12:19. Daniel’s stuff:* AI X-risk podcast * The Filan Cabinet podcast* Personal website and blogBlurb and bulleted summary from ClongThis wide-ranging conversation between Daniel and Aaron touches on movies, business drama, philosophy of language, ethics and legal theory. The two debate major ethical concepts like utilitarianism and moral realism. Thought experiments around rational beings choosing to undergo suffering feature prominently. Meandering tangents explore the semantics of names and references.* Aaron asserts that total utilitarianism does not imply that any amount of suffering can be morally justified by creating more happiness. His argument is that the affirmative case for this offsetting ability has not been clearly made.* He proposes a thought experiment - if offered the chance to experience the suffering of all factory farmed animals in exchange for unlimited happiness, even a perfectly rational being would refuse. This indicates there are some levels of suffering not offsettable.* Aaron links this to experiences like hunger where you realize suffering can be worse than you appreciate normally. This causes his intuition that some suffering can't be outweighed.* Daniel disagrees, believing with the right probabilities and magnitudes of suffering versus happiness, rational beings would take that gamble.* For example, Daniel thinks the atomic bombing of Japan could be offset by reducing more suffering. Aaron is less sure given the pain inflicted.* Daniel also proposes offsets for animal farming, but Aaron doesn't think factory farming harm is offsettable by any amount of enjoyment of meat.* They discuss definitions of rationality and whether evolution pressures against suicide impact the rationality of not killing oneself.* Aaron ties his argument to siding with what a perfectly rational being would choose to experience, not necessarily what they would prefer.* They debate whether hypothetical aliens pursuing "schmorality" could point to a concept truly analogous to human morality. Aaron believes not.Transcript(Very imperfect)AARONHow's it going? It's going all right.DANIELYeah, I just so yesterday I saw Barbie and today I saw Oppenheimer, so it's good to oh, cool. That cultural.AARONNice, nice.DANIELDo you have takes? Yeah, I thought it was all right. It was a decent view of Oppenheimer as a person. It was like a how? I don't know. I feel like the public can tend to be taken in by these physicist figures you get this with quotes, right? Like, the guy was just very good at having fun with journalists, and now we get these amazing nuggets of wisdom from Einstein. I don't know. I think that guy was just having good I don't know. The thing that I'm coming away from is I thought I only watched Barbie because it was coming out on the same day as Oppenheimer, right? Like, otherwise it wouldn't have occurred to me to watch it. I was like, yeah, whatever. Barbie is, like, along for the ride, and Oppenheimer is going to be amazing, but in like, maybe Oppenheimer was a bit better than Barbie, but I'm not even sure of that, actually.AARONYeah, I've been seeing people say that on Twitter. I haven't seen either, but I've been seeing several people say that I'm following, say, like, Barbie was exceptional. And also that kind of makes sense because I'm following all these EA people who probably care more about the subject matter for the latter one.
So it's like, I kind of believe that Barbie is, like, aesthetically better or something. That's my take. Right.DANIELGuess. Well, if you haven't seen them, I guess I don't want to spoil them for you. They're trying to do different things aesthetically. Right. Like, I'm not quite sure I'd want to say one is aesthetically better. Probably in some ways, I think Barbie probably has more aesthetic blunders than Oppenheimer does. Okay. But yeah, I don't know if you haven't seen it, I feel like I don't want to spoil it for you.AARONOkay. No, that's fine. This isn't supposed to be like probably isn't the most important the most interesting thing we could be talking about is that the bar?DANIELOh, jeez.AARONOh, no, that's a terrible bar. That was like an overstatement. That would be a very high bar. It would also be, like, kind of paralyzing. I don't know. Actually know what that would be, honestly. Probably some social juicy gossip thing. Not that we necessarily have any.DANIELYeah, I think your interestingness. Yeah, I think I don't have the know, the closest to gossip thing I saw was like, did you see this bit of Caroline Ellison's diaries and letters to SBF that was leaked to the.AARONNo, I don't. Was this like today or recently? How recently?DANIELThis was like a few days ago.AARONI've been seeing her face on Twitter, but I don't actually think I know anything about this. And no, I would not have.DANIELBackground of who she is and stuff.AARONYeah, hold on. Let the audience know that I am on a beach family vacation against my will. Just kidding. Not against my will. And I have to text my sister back. Okay, there we go. I mean, I broadly know the FTX story. I know that she was wait, I'm like literally blanking on the Alameda.DANIELThat's the name of research.AARONOkay. Yeah. So she was CEO, right? Yeah. Or like some sort of like I think I know the basics.DANIELThe like, she was one of the OG Stanford EA people and was around.AARONYeah, that's like a generation. Not an actual generation, like an EA generation. Which is what, like six years or.DANIELLike the I don't know, I've noticed like, in the there's like I feel like there's this gap between pre-COVID people and post-COVID people. No one left their house. Partly people moved away, but also you were inside for a while and never saw anyone in person. So it felt like, oh, there's like this crop of new people or something. Whereas in previous years, there'd be some number of new people per year and they'd get gradually integrated in. Anyway, all that is to say that, I don't know, I think SBF's side of the legal battle leaked some documents to The New York Times, which were honestly just like her saying, like, oh, I feel very stressed and I don't like my job, and I'm sort of glad that the thing is blown up now. I don't know. It honestly wasn't that salacious. But I think that's, like, the way I get in the loop on gossip like some of the New York Times.AARONAnd I eventually I love how it's funny that this particular piece of gossip is, like, running through the most famous and prestigious news organization in the world. Or, like, one of them or something. Yeah. Instead of just being like, oh, yeah, these two people are dating, or whatever. Anyway, okay, I will maybe check that out.DANIELYeah, I mean, honestly, it's not even that interesting.AARONThe whole thing is pretty I am pretty. This is maybe bad, but I can't wait to watch the Michael Lewis documentary, pseudo documentary or whatever.DANIELYeah, it'll be good to read the book.
Yeah, it's very surreal. I don't know. I was watching Oppenheimer. Right. And I have to admit, part of what I'm thinking is be if humanity survives, there's going to be this style movie about open AI, presumably, right? And I'm like, oh, man, it'll be amazing to see my friend group depicted on film. But that is going to happen. It's just going to be about FTX and about how they're all criminals. So that's not great.AARONYeah, actually, everybody dunks on crypto now, and it's like low status now or whatever. I still think it's really cool. I never had more than maybe $2,000 or whatever, which is not a trivial I mean, it's not a large amount of my money either, but it's not like, nothing. But I don't know, if it wasn't for all the cultural baggage, I feel like I would be a crypto bro or I would be predisposed to being a crypto bro or something.DANIELYeah. I should say I was like joking about the greedy crypto people who want their money to not be stolen. I currently have a Monero sticker on the back of my a big I don't know, I'm a fan of the crypto space. It seems cool. Yeah. I guess especially the bit that is less about running weird scams. The bit that's running weird scams I'm less of a fan of.AARONYeah. Yes. I'm also anti scam. Right, thank you. Okay, so I think that thing that we were talking about last time we talked, which is like the thing I think we actually both know stuff about instead of just like, repeating New York Times articles is my nuanced ethics takes and why you think about talk about that and then we can just also branch off from there.DANIELYeah, we can talk about that.AARONMaybe see where that did. I luckily I have a split screen up, so I can pull up things. Maybe this is kind of like egotistical or something to center my particular view, but you've definitely given me some of the better pushback or whatever that I haven't gotten that much feedback of any kind, I guess, but it's still interesting to hear your take. So basically my ethical position or the thing that I think is true is that which I think is not the default view. I think most people think this is wrong is that total utilitarianism does not imply that for some amount of suffering that could be created there exists some other extremely large arbitrarily, large amount of happiness that could also be created which would morally justify the former. Basically.DANIELSo you think that even under total utilitarianism there can be big amounts of suffering such that there's no way to morally tip the calculus. However much pleasure you can create, it's just not going to outweigh the fact that you inflicted that much suffering on some people.AARONYeah, and I'd highlight the word inflicted if something's already there and you can't do anything about it, that's kind of neither here nor there as it pertains to your actions or something. So it's really about you increasing, you creating suffering that

08-07
02:05:23

#5: Nathan Barnard (again!) on why general intelligence is basically fake

Very imperfect transcript: bit.ly/3QhFgEJ
Summary from Clong: The discussion centers around the concept of a unitary general intelligence or cognitive ability. Whether this exists as a real and distinct thing. Nathan argues against it, citing evidence from cognitive science about highly specialized and localized brain functions that can be damaged independently. Losing linguistic ability does not harm spatial reasoning ability. He also cites evidence from AI, like systems excelling at specific tasks without general competency, and tasks easy for AI but hard for humans. This suggests human cognition isn’t defined by some unitary general ability. Aaron is more open to the idea, appealing to an intuitive sense of a qualitative difference between human and animal cognition - using symbolic reasoning in new domains. But he acknowledges the concept is fuzzy. They discuss whether language necessitates this general ability in humans, or is just associated. Nathan leans toward specialized language modules in the brain. They debate whether strong future AI systems could learn complex motor skills just from textual descriptions, without analogous motor control data. Nathan is highly skeptical. Aaron makes an analogy to the universe arising from simple physical laws. Nathan finds this irrelevant to the debate. Overall, Nathan seems to push Aaron towards a more skeptical view of a unitary general cognitive ability as a scientifically coherent concept. But Aaron retains some sympathy for related intuitions about human vs animal cognition. Get full access to Aaron's Blog at www.aaronbergman.net/subscribe

07-28
01:09:34

#4 Winston Oswald-Drummond on the tractability of reducing s-risk, ethics, and more

Summary (by Claude.ai)This informal podcast covers a wide-ranging conversation between two speakers aligned in the effective altruism (EA) community. They have a similar background coming to EA from interests in philosophy, rationality, and reducing suffering. The main topic explored is reducing s-risks, or risks of extreme suffering in the future.Winston works for the Center for Reducing Suffering (CRS), focused on spreading concern for suffering, prioritizing interventions, and specifically reducing s-risks. He outlines CRS's focus on research and writing to build a moral philosophy foundation for reducing suffering. Aaron is skeptical s-risk reduction is tractable currently, seeing the research as abstract without a clear theory of change.They discuss how CRS and a similar group CLR are trying to influence AI alignment and digital sentience to reduce potential future s-risks. But Aaron worries about identifying and affecting the "digital neural correlates of suffering." Winston responds these efforts aim to have a positive impact even if unlikely to succeed, and there are potential lock-in scenarios that could be influenced.Aaron explains his hesitancy to donate based on tractability concerns. He outlines his EA independent research, which includes an archive project around nuclear war. More broadly, the two find they largely ethically agree, including on a suffering-focused ethics and "lexical negative utilitarianism within total utilitarianism.Some disagreements arise around the nature of consciousness, with Aaron arguing rejecting qualia implies nihilism while Winston disagrees. They also diverge on moral realism, with Aaron defending it and Winston leaning anti-realist.As they wrap up the wide-ranging conversation, they joke about convincing each other and make predictions on podcast listens. They thank each other for the thought-provoking discussion, aligned in ethics but with some disagreements on consciousness and metaethics. The conversation provides an insider perspective on efforts to reduce s-risks through research and outreach.EA Archive: https://forum.effectivealtruism.org/posts/DndmvDGStD3gTfhXk Get full access to Aaron's Blog at www.aaronbergman.net/subscribe

07-17
01:11:48

#3: Nathan Barnard on how financial regulation can inform AI regulation

Summary/specific topics:- Stress Tests and AI Regulation: Nathan elaborates on the concept of stress tests conducted by central banks. These tests assess the resilience of banks to severe economic downturns and the potential for a domino effect if one bank fails. They believe that lessons from this process can be applied to AI regulation. Aaron agrees, but also highlights the need for a proactive approach to AI regulation, as opposed to the reactive measures often seen in banking regulation.- The Role of Central Banks in AI Regulation: Nathan suggests that institutions structured like central banks, staffed with technical experts and independent from government, could be beneficial for AI regulation. They believe such institutions could respond quickly and effectively to crises. However, they acknowledge that this approach may not be effective if AI development leads to rapid, uncontrollable self-improvement.- Compute Governance: The conversation then shifts to compute governance, which Nathan sees as a promising area for AI regulation due to the obviousness of someone using large amounts of compute. They believe that this could provide governments with a control lever over cutting-edge AI labs, similar to how central banks control banking loans and affairs.- AI Regulation and the Role of Public Actors: Nathan acknowledges that the leaders of major AI labs seem sensible and aligned with AI safety principles. However, they argue that regulation and public actors can play a crucial role in creating common knowledge between labs and preventing a race to the bottom. They also discuss the potential benefits and drawbacks of different regulatory approaches.- Financial Regulation as a Model for AI Regulation: Nathan believes that post-crisis financial regulation, such as the Dodd-Frank Act, has generally been effective. They suggest that AI regulation could follow a similar path, especially if AI becomes a significant part of the economy. However, Aaron expresses skepticism about the ability of political processes to produce effective AI regulation.- Regulation Before and After Crises: The speakers agree that pre-crisis regulation has generally been less effective than post-crisis regulation. They discuss the potential for AI regulation to follow a similar pattern, with effective regulation emerging in response to a crisis.- Regulatory Arbitrage: The conversation concludes with a discussion on regulatory arbitrage, where banks shift activities to where it's cheapest to do business. Despite evidence of this behavior, Nathan notes that there was no race to the bottom in terms of regulation during the financial crisis. Get full access to Aaron's Blog at www.aaronbergman.net/subscribe

07-13
50:03

#2 Arjun Panickssery solves books, hobbies, and blogging, but fails to solve the Sleeping Beauty problem because he's wrong on that one

- Follow Arjun on Twitter: https://twitter.com/panickssery- Read and subscribe to his blog: https://arjunpanickssery.substack.com- A mediocre transcription can be found at https://www.assemblyai.com/playground/transcript/6x5h1mcemt-bff1-40fc-a676-9b59c66985f0🦆 👍 Get full access to Aaron's Blog at www.aaronbergman.net/subscribe

06-30
01:02:09

#1 Laura Duffy solves housing, ethics, and more

A transcript can be found at assemblyai.com/playground/transcript/6y7e7wz28c-30aa-4e83-ba4f-1bddf2e23dad Get full access to Aaron's Blog at www.aaronbergman.net/subscribe

06-17
01:15:20

Aaron's Blog, podcast edition

For All Good's inaugural episode, we talked to Rob Wiblin and Keiran Harris of 80,000 Hours about how and why they produce their show. This episode first appeared on their new feed "80,000 Hours: After Hours" here: https://80000hours.org/podcast/episodes/80k-after-hours-philosophy-of-the-80000-hours-podcast/ Hope you find it interesting! Get full access to Aaron's Blog at www.aaronbergman.net/subscribe

03-04
01:54:29

Aaron Bergman on the Narratives Podcast: EA, career, and more

Hear EA Georgetown member Aaron Bergman's recent interview as a guest on the Narratives Podcast! During the show, host Will Jarvis talks to Aaron about a key way he thinks people go wrong when choosing a career, how society treats children, how bureaucracy works, whether the FDA should have to approve medications, his interest in psychopharmacology, and a whole lot more. Check out Narratives at narrativespodcast.com or search for it wherever you're listening to this episode! Get full access to Aaron's Blog at www.aaronbergman.net/subscribe

01-05
49:34

Introducing All Good

Welcome to All Good, a show by Georgetown Effective Altruism. Get full access to Aaron's Blog at www.aaronbergman.net/subscribe

07-20
00:35

Kids are people too

Update Feb 12, 2023: Automated audio experiment with the Automator app no one uses
Intro
Back in 2016, I got my first job as a summer camp counselor. It was an outdoor adventure day camp, to which the six figure-making lawyers and consultants of Washington, D.C. sent their straight-A getting, lacrosse-playing children for a taste of the great outdoors. The campers ranged from four years of age to 15, with those nine and up able to choose which among the activities—kayaking, rock climbing, horseback riding, and more—to pursue. I, as a typical sheltered suburbanite (though a competitive rock climber rather than a lacrosse player), had attended the camp myself throughout my childhood. We Junior Counselors, or “JCs,” could request which age-activity combinations to supervise. Though I had to do my fair share of toddler-wrangling, I was granted a few weeks in the Big Leagues: the rock climbing program for kids aged 9-15. Each cohort of about 12 campers had 3 staff members: a JC, a head counselor or “HC” at least 18 years old, and an “instructor” to provide technical expertise. As a 16 year old mere months older than some of my campers, it was a little bizarre. I was an Adult, getting paid (barely) to ensure that the youth in my care remained happy and healthy in the rugged Maryland wilderness. By and large, it was a good camp. What I’m about to say doesn’t negate the overall-positive experience of most campers.
Lying
Staff lied to campers constantly. We were expected to, and generally obliged. Not about important things, either. In fact, the lies generally concerned utterly trivial matters.
Child protection is good
Now, I’m not entirely naive. I understand that placing children of all genders in the care of three young adults literally in the woods, including on one overnight campout per week, is a situation tailor-made for abuse. And the camp, to its credit, took this very seriously. We had training after training and rules and guidelines galore designed to ensure that no staff member would ever be tempted to misuse his or her power and status.This was a good thing. Child abuse is bad, and I’m glad we erred on the side of “careful.” I understood that my relationship with the campers was not to be intimate. Staff shouldn’t, and didn’t, reveal the skeletons in their closet to middle schoolers at rock climbing camp. No playing “truth or dare” around the campfire, either. Understood.
Casual dishonesty
But did we really have to lie so much? Take my age. Though I don’t recall being explicitly told “you may not reveal your age to campers,” doing just this was an obvious norm among staff. Like any normal human being, the campers wanted to know some basic biographical information about the people they were spending their summers with. This was particularly true for me, a 16 year old who looked about 14 on a good day, whose size-small “counselor” tank top was held up with safety pins to prevent the neck hole from drooping down to my nipples. My fellow counselors, when inevitably asked this question, would say something like “you have to guess” or “six hundred” or “maybe if we win the song competition I’ll tell you at the end of the week” (they wouldn’t). Nobody said “17” or “31” or “sorry, I can’t tell you that.”Though human memory is fallible and it’s been five years, I’m pretty sure I got tired of telling stupid, pointless lies. When campers asked, I made the completely banal decision to answer their question. 
Our conversations generally went like this:
Camper: How old are you?
Me: Sixteen.
Camper: You look like you’re [insert number between 11 and 15].
Me: I know.
Camper: Ok.
And that was that. My authority remained intact. No one stopped listening to me. Maybe I’m indulging myself, but I think the campers might have been pleasantly surprised at, you know, not being lied to for once. I didn’t only do this for the sake of my campers’ dignity. I really did look younger than some of the campers, and it was in my own best interest that they be aware I was not. My fellow counselors didn’t get mad at me. They didn’t bring it up. They just kept on lying to the kids, and I did not. I can’t say for sure I was the only one who told the truth, but I sure never heard anyone else do the same.It wasn’t just my age. They’d ask which school I went to, and (if my memory serves correctly) I told them. I’m sure they asked some things I really couldn’t or shouldn’t have answered, and then I wouldn’t tell them.
Ageism
Back in my gung-ho veganism days, I remember thinking the term speciesism was really dumb. Damn right, we should treat different species differently. Unlike with, say, racial identity, there really are important differences—moral and otherwise—between dandelions, lobsters, chickens, and humans. I still think the term is ripe for misinterpretation, but I now see the underlying concept as sound. Speciesism is arbitrary differential treatment, rather than differential treatment justified by real differences in underlying traits. Ageism is the same way. We really should treat young kids differently than adults, but only because they have different underlying traits—not because of their age per se. If we don’t think that kids have the cognitive capacity or confidence or something to make big life decisions or competently assess risk, fine.I’m not saying we should let infants sign contracts, and I’m not even saying we should never lie to children. Whatever Kant would have you think, dishonesty is sometimes the lesser of two evils and therefore the right thing to do, but the chronological age of the person you’d be lying to isn’t a good enough justification.
The socially acceptable prejudice
I guess great minds think alike, because after writing this subheading I came across the AARP’s article “Workplace Age Discrimination Still Flourishes in America.” Under its own subheading, “Ageism: An accepted bias,” it argues that
ageism in the workplace occurs every day across America, and it is tolerated or — even worse — unrecognized for what it truly is: discrimination, plain and simple.“Age discrimination is so pervasive that people don’t even recognize it’s illegal,” asserts Kristin Alden, an attorney specializing in employee rights at the Alden Law Group in Washington, D.C.
Ageism is most commonly discussed in the context of employment discrimination, but its social acceptability rings just as true in the context of other, non-workplace incarnations.
Reasons why
Anyway, I think there are two fundamental reasons why ageism is so common and acceptable:
* Age really does correlate with differences in underlying characteristics like mental aptitude.
* All adults were once children, and most adults expect to eventually get old.
The first reason seems pretty intuitive; it really can be hard to tell what is justified differential treatment (say, not letting toddlers sign legally-binding agreements) and what is straight-up, arbitrary bias.The second point is more subtle. 
It’s the “I can’t be racist—I have a black friend!” of ageism. “I can’t be biased against children,” we think, “I was a child myself once!” I think there’s a bit of retributive justice going on, too: if I had to endure a childhood of casual, well-meaning disrespect, why shouldn’t I get to enjoy the status-privileges of adulthood?
These narratives are rarely made explicit. I’m sure I’ve been disrespectful to children before without saying such things out loud, or even really thinking about them directly. Nor are they malicious. When my fellow counselors and I were lying to the campers, we thought of it as a big joke. We were teasing them, playing a game the campers never agreed to join.This is why ageism is so pernicious. It’s easy to moralize racism and sexism; there are the oppressors and the oppressed. What happens when we’re all on both sides of the power structure, albeit at different times in our lives? This bias, well-intentioned and hidden in plain sight, doesn’t fit well into a compelling moral narrative.
Candy-coated contempt
If you only read one link in this post, make it the Atlantic’s “There’s a Better Way to Parent: Less Yelling, Less Praise” (warning: uses up one of your monthly free articles), an interview with the author of Hunt, Gather, Parent: What Ancient Cultures Can Teach Us About the Lost Art of Raising Happy, Helpful Little Humans. Part of it reads:
Joe Pinsker: Many American parenting strategies, you estimate, are only about 100 years old, and some of them arose more recently than that. What about American parenting sticks out to you as distinctive and particularly strange?
Michaeleen Doucleff: One of the craziest things we do is praise children constantly. When I was first working on the book, I recorded myself to see how frequently I praised my little girl, Rosy, and I noticed that I would exaggeratedly react to even her smallest accomplishments, like drawing a flower or writing a letter, with a comment like “Good job!” or “Wow! What a beautiful flower!”This is insane if you look around the world and throughout human history. Everywhere I went, I don’t know if I ever heard a parent praise a child. Yet these kids are incredibly self-sufficient, confident, and respectful—everything we want praise to do, these kids already have it, without the praise.It’s hard to cut back on praise, because it’s so baked in, but later on, I decided to try. It’s not that there’s no feedback, but it’s much gentler feedback—parents will smile or nod if a child is doing something they want. I started doing that, and Rosy’s behavior really improved. A lot of the attention-seeking behavior went away.
Doucleff emphasizes how excessive praise induces attention-seeking behavior, but I’d like to draw attention to its direct, first-order effect: children feel disrespected.
Baseball
I started playing baseball in first grade, and didn’t stop for nearly a decade. During elementary school, I distinctly remember loathing the hollow, meaningless compliments that coach after coach fed to us players (which my fath

06-08
15:44
