The Retort AI Podcast

Author: Thomas Krendl Gilbert and Nathan Lambert


Description

Distilling the major events and challenges in the world of artificial intelligence and machine learning, from Thomas Krendl Gilbert and Nathan Lambert.
29 Episodes
Tom and Nate catch up on many recent AI policy happenings: California's "anti open source" SB 1047, the Senate AI roadmap, Google's search snafu, OpenAI's normal nonsense, and reader feedback! A bit of a mailbag. Enjoy.
00:00 Murky waters in AI policy
00:33 The Senate AI Roadmap
05:14 The Executive Branch Takes the Lead
08:33 California's Senate AI Bill
22:22 OpenAI's Two Audiences
28:53 The Problem with the OpenAI Model Spec
39:50 A New World of AI Regulation
A bunch of links...
Data and Society whitepaper: https://static1.squarespace.com/static/66465fcd83d1881b974fe099/t/664b866c9524f174acd7931c/1716225644575/24.05.18+-+AI+Shadow+Report+V4.pdf and https://senateshadowreport.com/
California bill: https://www.hyperdimensional.co/p/california-senate-passes-sb-1047 and https://legiscan.com/CA/text/SB1047/id/2999979
Data walls: https://www.interconnects.ai/p/the-data-wall
Interconnects merch: https://interconnects.myshopify.com/
Tom and Nate discuss two major OpenAI happenings from the last week: the popular one, the chat assistant, and what it reveals about OpenAI's worldview. We pair this with a discussion of OpenAI's new Model Spec, which details their RLHF goals: https://cdn.openai.com/spec/model-spec-2024-05-08.html
This is a monumental week for AI. The product transition is complete; we can't just be researchers anymore.
00:00 Guess the Donkey Kong Character
00:50 OpenAI's New AI Girlfriend
07:08 OpenAI's Business Model and Responsible AI
08:45 GPT-2 Chatbot Thing and OpenAI's Weirdness
12:48 OpenAI and the Mystery Box
19:10 The Blurring Boundaries of Intimacy and Technology
22:05 Rousseau's Discourse on Inequality and the Impact of Technology
26:16 OpenAI's Model Spec and Its Objectives
30:10 The Unintelligibility of "Benefiting Humanity"
37:01 The Chain of Command and the Paradox of AI Love
45:46 The Form and Content of OpenAI's Model Spec
48:51 The Future of AI and Societal Disruptions
Tom and Nate discuss the shifting power landscape in AI. They try to discern what is special about Silicon Valley's grasp on the ecosystem and what other types of power (e.g. those in New York and Washington DC) will do to mobilize their influence. Here's the one Tweet we referenced on the FAccT community: https://twitter.com/KLdivergence/status/1653843497932267520
00:00 Introduction and Cryptozoologists
02:00 DC and the National AI Research Resource (NAIRR)
05:34 The Three Legs of the AI World: Silicon Valley, New York, and DC
11:00 The AI Safety vs. Ethics Debate
13:42 The Rise of the Third Entity: The Government's Role in AI
19:42 New York's Influence and the Power of Narrative
29:36 Silicon Valley's Insularity and the Need for Regulation
36:50 The Amazon Antitrust Paradox and the Shifting Landscape
48:20 The Energy Conundrum and the Need for Policy Solutions
56:34 Conclusion: Finding Common Ground and Building a Better Future for AI
Tom and Nate cover the state of the industry after Llama 3. Is Zuck the best storyteller in AI? Is he the best CEO? Are CEOs doing anything other than buying compute? We cover what it means to be successful at the highest level this week.
Links:
Dwarkesh interview with Zuck: https://www.dwarkeshpatel.com/p/mark-zuckerberg
Capuchin monkey: https://en.wikipedia.org/wiki/Capuchin_monkey
00:00 Introductions & advice from a wolf
00:45 Llama 3
07:15 Resources and investment required for large language models
14:10 What it means to be a leader in the rapidly evolving AI landscape
22:07 How much of AI progress is driven by stories vs resources
29:41 Critiquing the concept of Artificial General Intelligence (AGI)
38:10 Misappropriation of the term AGI by tech leaders
42:09 The future of open models and AI development
Tom and Nate catch up after a few weeks off the pod. We discuss what it means for open models to keep getting bigger and arriving faster. In some ways, this disillusionment is a great way to zoom out to the big picture. These models are coming. These models are getting cheaper. We need to think about risks and infrastructure more than open vs. closed.
00:00 Introduction
01:16 Recent developments in open model releases
04:21 Tom's experience viewing the total solar eclipse
09:38 The Three-Body Problem book and Netflix
14:06 The Gartner Hype Cycle
22:51 Infrastructure constraints on scaling AI
28:47 Metaphors and narratives around AI risk
34:43 Rethinking AI risk as public health problems
37:37 The "one-way door" nature of releasing open model weights
44:04 The relationship between the AI ecosystem and the models
48:24 Wrapping up the discussion in the "trough of disillusionment"
We've got some links for you again:
- Gartner hype cycle: https://en.wikipedia.org/wiki/Gartner_hype_cycle
- MSFT supercomputer: https://www.theinformation.com/articles/microsoft-and-openai-plot-100-billion-stargate-ai-supercomputer
- Safety is about systems: https://www.aisnakeoil.com/p/ai-safety-is-not-a-model-property
- Earth Day history: https://www.earthday.org/history/
- For our loyal listeners: http://tudorsbiscuitworld.com/
Tom and Nate catch up on the ridiculousness of Nvidia GTC, the lack of trust in AI, and some important taxonomies and politics around governing AI. Safety institutes, reward model benchmarks, Nathan's bad joke delivery, and all the normal good stuff in this episode! Yes, we're also sick of the Taylor Swift jokes, but they get the clicks.
The Taylor moment: https://twitter.com/DrJimFan/status/1769817948930072930
00:00 Intros and discussion on NVIDIA's influence in AI and the Bay Area
09:08 Mustafa Suleyman's new role and discussion on AI safety
11:31 The shift from performance to trust in AI evaluation
17:31 The role of government agencies in AI policy and regulation
24:07 The role of accreditation in establishing legitimacy and trust
32:11 Grok's open source release and its impact on the AI community
39:34 Responsibility and accountability in AI and social media platforms
Tom and Nate sit down to discuss Claude 3 and some updates on what it means to be open. Not surprisingly, we get into debating some different views. We cover Dune 2's impact on AI and have a brief giveaway at the end. Cheers!
More at retortai.com. Contact us at mail at domain.
Some topics:
- The pace of progress in AI and whether it feels meaningful or like "progress fatigue" to different groups
- The role of hype and "vibes" in driving interest and investment in new AI models
- Whether the value being created by large language models is actually just being concentrated in a few big tech companies
- The debate around whether open source AI is feasible given the massive compute requirements
- The limitations of "open letters" and events with Chatham House rules as forms of politics and accountability around AI
- The analogy between the AI arms race and historical arms races like the dreadnought naval arms race
- The role of narratives, pop culture, and "priesthoods" in shaping public understanding of AI
Chapters & transcript partially created with https://github.com/FanaHOVA/smol-podcaster.
00:00 Introduction and the spirit of open source
04:32 Historical parallels of technology arms races
10:26 The practical use of language models and their impact on society
22:21 The role and potential of open source in AI development
28:05 The challenges of achieving coordination and scale in open AI development
34:18 Pop culture's influence on the AI conversation, specifically through "Dune"
This week Tom and Nate cover all the big topics from the big-picture lens: Sora, Gemini 1.5's context length, Gemini's bias backlash, the Gemma open models. It was a busy week in AI. We come to the conclusion that we can no longer trust a lot of these big companies to do much. We are the gladiators playing the crowd of AI. This was a great one; I'm proud of one of Tom's all-time best jokes. Thanks for listening, and reach out with any questions.
A metaphor episode! We are trying to figure out how much the Waymo incident is or is not about AI. We bring back our Berkeley roots and talk about traditions in the Bay around distributed technology. Scooters and robots are not safe in this episode, sadly. Here's the link to the Verge piece Tom read from: https://www.theverge.com/2024/2/11/24069251/waymo-driverless-taxi-fire-vandalized-video-san-francisco-china-town
... and you should too. We catch up this week on all things Apple Vision Pro and how these devices will intersect with AI. It really turned into more of a commentary on the future of society, and how various technologies may or may not tap into our subconscious. The only link we've got for you is DeepDream: https://en.wikipedia.org/wiki/DeepDream
Wow, one of our favorites. This week Tom and Nate have a lot to cover: AI2's new OPEN large language models (OLMo) and all that means, the alchemical model-merging craze powering waifu factories, model weight leaks from Mistral, the calling card for our loyal fans, and more.
We have a lot of links you'll enjoy as you go through it:
The Mistral leak: https://huggingface.co/miqudev/miqu-1-70b/discussions/10
Writing on model merging: https://www.interconnects.ai/p/model-merging
Writing on open LLMs: https://www.interconnects.ai/p/olmo
The original Mechanical Turk: https://en.wikipedia.org/wiki/Mechanical_Turk
This Waifu Does Not Exist: https://thisanimedoesnotexist.ai/
The Warriors film: https://www.youtube.com/watch?v=--gdB-nnQkU
The Waifu Research Department: https://huggingface.co/waifu-research-department
We recovered this episode from the depths of lost podcast recordings! We carry on, and Tom tells the story of his wonderful sociology-turned-AI Ph.D. at Berkeley. This comes with plenty of great commentary on the current state of the field and striving for impact. We cover the riverbank of Vienna, the heart of the sperm whale, and deep life lessons.
This week Tom and Nate catch up on two everlasting themes of ML: compute and evaluation. We chat about AI2, Zuck's GPUs, evaluation as procurement, NIST comments, neglecting reward models, and plenty of other topics. We're on the tracks for 2024 and waiting for some things to happen.
Links for what we covered this week:
- Zuck interview on The Verge
- Saturday Night Live's George Washington during the Revolutionary War
- NIST RFI
- Sam Altman's uncomfortable proposition
We're excited to bring you something special today! Our first crossover episode brings some fresh energy to the podcast. Tom and Nate are joined by Jordan Schneider of ChinaTalk (a popular Substack-based publication covering all things China: https://www.chinatalk.media/). We cover lots of great ground here, from the economics of Hirschman to the competition from France. All good patriots should listen to this episode, as we give a real assessment of where competition lies on the U.S.'s path to commercializing AI. Enjoy our best effort at a journal club!
Tom and Nate are ready to kick off the year, but not too ready! There's a ton to be excited about this year, but we're already worried about some parts of it. In this episode, we'll teach you how to be mindful of the so-called "other side of ML".
Some links:
- NYT lawsuit Techdirt article: https://www.techdirt.com/2023/12/28/the-ny-times-lawsuit-against-openai-would-open-up-the-ny-times-to-all-sorts-of-lawsuits-should-it-win/
- AI-generated talk tool: https://github.com/natolambert/interconnects-tools?tab=readme-ov-file#generated-research--video
- They just want to learn: https://twitter.com/hamishivi/status/1730633057999483085 and pod episode https://www.dwarkeshpatel.com/p/dario-amodei
The end of the year is upon us! Tom and Nate bring a reflective mood to the podcast, along with some surprises that may be a delight.
Here are some links for the loyal fans:
* RAND + executive order piece: https://www.politico.com/news/2023/12/15/billionaire-backed-think-tank-played-key-role-in-bidens-ai-order-00132128
* Sam Altman's blog post we were reading: https://blog.samaltman.com/what-i-wish-someone-had-told-me
No stone is left unturned on this episode. As the end of the year approaches, Tom and Nate check in on all the vibes of the machine learning world: torrents, faked demos, alchemy, weightlifting, actual science, and blogs are all not safe in this episode.
Some links for your weekend:
- AI Alliance: https://thealliance.ai/
- Evaluation gaming on Interconnects: https://www.interconnects.ai/p/evals-are-marketing
- Fupi: https://www.youtube.com/watch?v=WtVknbxzn7Q
In this episode, Tom gives us a lesson on all things feedback, mostly where our scientific framings of it came from. Together, we link this to RLHF, our previous work in RL, and how we were thinking about agentic ML systems before it was cool.
Join us on another great blast from the past on The Retort!
We've also brought you video this week!
We break down all the recent events in AI and live-react to some of the news about OpenAI's new super-method, codenamed Q*. From CEOs to rogue AIs, no one can be trusted in today's episode.
Some links to relevant content on Interconnects:
* Discussing how OpenAI's blunders open the doors for openness.
* Detailing what Q* probably is.
We cover all things OpenAI as they embrace their role as a consumer technology company with their first developer keynote.
Lots of links:
Dev day keynote: https://www.youtube.com/watch?v=U9mJuUkhUzk
Some papers we cover:
Multinational AGI consortium (by non-technical folks): https://arxiv.org/abs/2310.09217
Frontier model risk paper that DC loves: https://arxiv.org/abs/2307.03718
Our Choices, Risk, and Reward Reports paper: https://cltc.berkeley.edu/reward-reports/
GPT-2 release blog with discussion of the "dangers" of LLMs in 2019: https://openai.com/research/better-language-models
1984 Apple ad: https://www.youtube.com/watch?v=VtvjbmoDx-I