The Retort AI Podcast

11 Episodes
We break down all the recent events in AI and live react to some of the news about OpenAI's new super-method, codenamed Q*. From CEOs to rogue AIs, no one can be trusted in today's episode. Some links to relevant content on Interconnects:
* Discussing how OpenAI's blunders open the doors for openness.
* Detailing what Q* probably is.
We cover all things OpenAI as they embrace their role as a consumer technology company with their first developer keynote. Lots of links:
Dev Day keynote https://www.youtube.com/watch?v=U9mJuUkhUzk
Some papers we cover:
Multinational AGI consortium (by non-technical folks) https://arxiv.org/abs/2310.09217
Frontier model risk paper that DC loves https://arxiv.org/abs/2307.03718
Our Choices, Risks, and Reward Reports paper https://cltc.berkeley.edu/reward-reports/
GPT-2 release blog post with discussion of the "dangers" of LLMs in 2019 https://openai.com/research/better-language-models
1984 Apple ad https://www.youtube.com/watch?v=VtvjbmoDx-I
We discuss all the big regulation steps in AI this week, from the Biden Administration's Executive Order to the UK AI Safety Summit. Links:
Link to the Executive Order
Link to the Mozilla Open Letter
The Slaughterbots video
UK AI Safety Summit graph/meme
This week, we dunk on the Center for Research on Foundation Models' (Stanford) Foundation Model Transparency Index. Yes, the title is inspired by Taylor. Some links:
The Index itself. And Nathan's critique.
Anthropic's Collective Constitutional AI work, and its coverage in The New York Times.
New paper motivating transparency for reward models in RLHF.
Jitendra Malik dunks on the idea of foundation models.
Tom and Nate sit down to discuss Marc Andreessen's Techno-Optimist Manifesto, a third wave of AI mindsets that squarely takes on both the AI Safety and AI Ethics communities. Some links:
* An example of the Shoggoth Monster we referenced.
Thanks for listening!
This week, Tom and Nate discuss some of the core and intriguing dynamics of AI. We discuss the history of the rationality movement and where Harry Potter fan fiction fits in, whether AI will ever not feel hypey, the do's and don'ts of Sam Altman, and other topics. (Editor's note: sorry for some small issues in Nate's audio. That will be fixed in the next episode.) Some links that are referenced:
* HP MOR (Harry Potter and the Methods of Rationality).
* A tweet referencing Sam Altman's funny (?) profile change.
* Nathan's recent post on Interconnects on the job market craziness.
This is a big one, getting going on whether LLMs should be more open or more closed. We cover everything: OpenAI, scaling, openness for openness' sake (relative to OpenAI), actual arguments for open-source values in LLMs, AI as infrastructure, LLMs as platforms, what this means we need, and other topics. Lots of related links this time from Nathan.
Most recent article on Interconnects explaining how open-source startups may be deluding themselves.
"What is an open-source LLM" on Interconnects.
How the open-source economy works on Interconnects.
Tom and Nate discuss a few core topics of the show. First, we touch base on the core of the podcast -- the difference between empirical science, alchemy, and magic. Next, we explain some of our deeper understandings of AI safety as a field, which then leads into a discussion of what RLHF means. Lots of links to share this time:
Tom's coverage on alchemy in VentureBeat, and an active thread on Twitter.
As Above, So Below: a calling of alchemy.
A NeurIPS Test of Time Award speech on alchemy.
A bizarre Facebook debate between Yoshua Bengio and Stuart Russell.
Tom and Nate discuss some of the public institutions that form the bedrock of society -- education and roads -- and how AI is poised to shake them up.Some related reading on Interconnects, specifically about Tesla's system design and the self-driving roll-out in San Francisco.
Tom and Nate discuss some of the most dominant metaphors in machine learning these days -- alchemy and deep learning's roots, the Oppenheimer film and a modern "Manhattan Project for AI", and of course, a sprinkle of AGI. Some related reading on Interconnects: https://www.interconnects.ai/p/ai-research-tensions-oppenheimer
Thanks for listening! Reach out if you have any questions.
A brief introduction to the many problems facing AI and a sneak peek into Episode 1, coming soon!