AI Safety Newsletter

Author: Center for AI Safety
© 2025 All rights reserved
Description
Narrations of the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.
This podcast also contains narrations of some of our publications.
ABOUT US
The Center for AI Safety (CAIS) is a San Francisco-based research and field-building nonprofit. We believe that artificial intelligence has the potential to profoundly benefit the world, provided that we can develop and use it safely. However, in contrast to the dramatic progress in AI, many basic problems in AI safety have yet to be solved. Our mission is to reduce societal-scale risks associated with AI by conducting safety research, building the field of AI safety researchers, and advocating for safety standards.
Learn more at https://safe.ai
68 Episodes
Also: Meta's Chatbot Policies Prompt Backlash Amid AI Reorganization; China Reverses Course on Nvidia H20 Purchases.

In this edition: Big tech launches a $100 million pro-AI super PAC; Meta's chatbot policies prompt congressional scrutiny amid the company's AI reorganization; China reverses course on buying Nvidia H20 chips after comments by Secretary of Commerce Howard Lutnick. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.

Big Tech Launches $100 Million pro-AI Super PAC

Silicon Valley executives and investors are investing more than $100 million in a new political network to push back against AI regulations, signaling that the industry intends to be a major player in next year's U.S. midterms. The network, called Leading the Future, is backed by a16z and Greg Brockman, is modeled on the crypto-focused super PAC Fairshake, and aims to influence AI [...]

---
Outline:
(00:46) Big Tech Launches $100 Million pro-AI Super PAC
(02:27) Meta's Chatbot Policies Prompt Backlash Amid AI Reorganization
(04:45) China Reverses Course on Nvidia H20 Purchases
(07:21) In Other News
---
First published:
August 27th, 2025
Source:
https://newsletter.safe.ai/p/ai-safety-newsletter-62-big-tech
---
Want more? Check out our ML Safety Newsletter for technical safety research.
Narrated by TYPE III AUDIO.
---
Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.

In this edition: OpenAI releases GPT-5. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.

OpenAI Releases GPT-5

Ever since GPT-4's release in March 2023 marked a step-change improvement over GPT-3, people have used ‘GPT-5’ as a stand-in to speculate about the next generation of AI capabilities. On Thursday, OpenAI released GPT-5. While state-of-the-art in most respects, GPT-5 is not a step-change improvement over competing systems, or even recent OpenAI models—but we shouldn’t have expected it to be.

GPT-5 is state of the art in most respects. GPT-5 isn’t a single model like GPTs 1 through 4. It is a system of two models: a base model that answers questions quickly and is better at tasks like creative writing (an improved [...]

---
Outline:
(00:19) OpenAI Releases GPT-5
(06:20) In Other News
---
First published:
August 12th, 2025
Source:
https://newsletter.safe.ai/p/ai-safety-newsletter-61-openai-releases
---
Want more? Check out our ML Safety Newsletter for technical safety research.
Narrated by TYPE III AUDIO.
Also: ChatGPT Agent and IMO Gold.

In this edition: The Trump Administration publishes its AI Action Plan; OpenAI released ChatGPT Agent and announced that an experimental model achieved gold medal-level performance on the 2025 International Mathematical Olympiad. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.

The AI Action Plan

On the 23rd, the White House released its AI Action Plan. The document is the outcome of a January executive order that required the President's Science Advisor, ‘AI and Crypto Czar’, and National Security Advisor (currently Michael Kratsios, David Sacks, and Marco Rubio) to submit a plan to “sustain and enhance America's global AI dominance in order to promote human flourishing, economic competitiveness, and national security.” President Trump also delivered an hour-long speech on the plan, and signed three executive orders beginning to implement some of its policies.

Trump displaying an executive order at the [...]

---
Outline:
(00:34) The AI Action Plan
(07:36) ChatGPT Agent and IMO Gold
(12:48) In Other News
---
First published:
July 31st, 2025
Source:
https://newsletter.safe.ai/p/ai-safety-newsletter-60-the-ai-action
---
Want more? Check out our ML Safety Newsletter for technical safety research.
Narrated by TYPE III AUDIO.
Plus: Meta Superintelligence Labs.

Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required. In this edition: The EU published a General-Purpose AI Code of Practice for AI providers, and Meta is spending billions revamping its superintelligence development efforts. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.

EU Publishes General-Purpose AI Code of Practice

In June 2024, the EU adopted the AI Act, which remains the world's most significant law regulating AI systems. The Act bans some uses of AI like social scoring and predictive policing and limits other “high risk” uses such as generating credit scores or evaluating educational outcomes. It also regulates general-purpose AI (GPAI) systems, imposing transparency requirements, copyright protection policies, and safety and security standards for models that pose systemic risk (defined as those trained [...]

---
Outline:
(00:31) EU Publishes General-Purpose AI Code of Practice
(04:50) Meta Superintelligence Labs
(06:17) In Other News
---
First published:
July 15th, 2025
Source:
https://newsletter.safe.ai/p/ai-safety-newsletter-59-eu-publishes
---
Want more? Check out our ML Safety Newsletter for technical safety research.
Narrated by TYPE III AUDIO.
Plus: Judges Split on Whether Training AI on Copyrighted Material is Fair Use.

In this edition: The Senate removes a provision from Republicans' “Big Beautiful Bill” aimed at restricting states from regulating AI; two federal judges split on whether training AI on copyrighted books is fair use. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.

Senate Removes State AI Regulation Moratorium

The Senate removed a provision from Republicans' “Big Beautiful Bill” aimed at restricting states from regulating AI. The moratorium would have prohibited states from receiving federal broadband expansion funds if they regulated AI—however, it faced procedural and political challenges in the Senate, and was ultimately removed in a vote of 99-1. Here's what happened.

A watered-down moratorium cleared the Byrd Rule. In an attempt to bypass the Byrd Rule, which prohibits policy provisions in budget bills, the Senate Commerce Committee revised the [...]

---
Outline:
(00:35) Senate Removes State AI Regulation Moratorium
(03:04) Judges Split on Whether Training AI on Copyrighted Material is Fair Use
(07:19) In Other News
---
First published:
July 3rd, 2025
Source:
https://newsletter.safe.ai/p/ai-safety-newsletter-58-senate-removes
---
Want more? Check out our ML Safety Newsletter for technical safety research.
Narrated by TYPE III AUDIO.
In this edition: The New York Legislature passes an act regulating frontier AI—but it may not be signed into law for some time. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.

The RAISE Act

New York may soon become the first state to regulate frontier AI systems. On June 12, the state's legislature passed the Responsible AI Safety and Education (RAISE) Act. If New York Governor Kathy Hochul signs it into law, the RAISE Act will be the most significant state AI legislation in the U.S.

New York's RAISE Act imposes four guardrails on frontier labs: developers must publish a safety plan, hold back unreasonably risky models, disclose major incidents, and face penalties for non-compliance.

Publish and maintain a safety plan. Before deployment, developers must post a redacted “safety and security protocol,” transmit the plan to both the attorney general and the [...]

---
Outline:
(00:21) The RAISE Act
(04:43) In Other News
---
First published:
June 17th, 2025
Source:
https://newsletter.safe.ai/p/ai-safety-newsletter-57-the-raise
---
Want more? Check out our ML Safety Newsletter for technical safety research.
Narrated by TYPE III AUDIO.
Plus, Opus 4 Demonstrates the Fragility of Voluntary Governance.

In this edition: Google released a frontier video generation model at its annual developer conference; Anthropic's Claude Opus 4 demonstrates the danger of relying on voluntary governance. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.

Google Releases Veo 3

Last week, Google made several AI announcements at I/O 2025, its annual developer conference. An announcement of particular note is Veo 3, Google's newest video generation model.

Frontier video and audio generation. Veo 3 outperforms other models on human preference benchmarks, and generates both audio and video.

Google showcasing a video generated with Veo 3. (Source)

If you just look at benchmarks, Veo 3 is a substantial improvement over other systems. But relative benchmark improvement only tells part of the story—the absolute capabilities of systems ultimately determine their usefulness. Veo 3 looks like a marked qualitative [...]

---
Outline:
(00:33) Google Releases Veo 3
(03:25) Opus 4 Demonstrates the Fragility of Voluntary Governance
---
First published:
May 28th, 2025
Source:
https://newsletter.safe.ai/p/ai-safety-newsletter-56-google-releases
---
Want more? Check out our ML Safety Newsletter for technical safety research.
Narrated by TYPE III AUDIO.
Plus, Bills on Whistleblower Protections, Chip Location Verification, and State Preemption.

In this edition: The Trump Administration rescinds the Biden-era AI diffusion rule and sells AI chips to the UAE and Saudi Arabia; federal lawmakers propose legislation on AI whistleblowers, location verification for AI chips, and prohibiting states from regulating AI. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.

The Center for AI Safety is also excited to announce the Summer session of our AI Safety, Ethics, and Society course, running from June 23 to September 14. The course, based on our recently published textbook, is open to participants from all disciplines and countries, and is designed to accommodate full-time work or study. Applications for the Summer 2025 course are now open. The final application deadline is May 30th. Visit the course website to learn more and apply.

Trump Administration Rescinds AI Diffusion [...]

---
Outline:
(01:12) Trump Administration Rescinds AI Diffusion Rule, Allows Chip Sales to Gulf States
(04:14) Bills on Whistleblower Protections, Chip Location Verification, and State Preemption
(06:56) In Other News
---
First published:
May 20th, 2025
Source:
https://newsletter.safe.ai/p/ai-safety-newsletter-55-trump-administration
---
Want more? Check out our ML Safety Newsletter for technical safety research.
Narrated by TYPE III AUDIO.
Plus, AI Safety Collaboration in Singapore.

In this edition: OpenAI claims an updated restructure plan would preserve nonprofit control; a global coalition meets in Singapore to propose a research agenda for AI safety. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.

OpenAI Updates Restructure Plan

On May 5th, OpenAI announced a new restructure plan. The announcement walks back a December 2024 proposal that would have had OpenAI's nonprofit—which oversees the company's for-profit operations—sell its controlling shares to the for-profit side of the company. That plan drew sharp criticism from former employees and civil‑society groups and prompted a lawsuit from co‑founder Elon Musk, who argued OpenAI was abandoning its charitable mission.

OpenAI claims the new plan preserves nonprofit control, but is light on specifics. Like the original plan, OpenAI's new plan would have OpenAI Global LLC become a public‑benefit corporation (PBC). However, instead of the nonprofit selling its [...]

---
Outline:
(00:31) OpenAI Updates Restructure Plan
(03:19) AI Safety Collaboration in Singapore
(05:42) In Other News
---
First published:
May 13th, 2025
Source:
https://newsletter.safe.ai/p/ai-safety-newsletter-54-openai-updates
---
Want more? Check out our ML Safety Newsletter for technical safety research.
Narrated by TYPE III AUDIO.
Plus, SafeBench Winners.

Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required. In this edition: Experts and ex-employees urge the Attorneys General of California and Delaware to block OpenAI's for-profit restructure; CAIS announces the winners of its safety benchmarking competition. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.

An Open Letter Attempts to Block OpenAI Restructuring

A group of former OpenAI employees and independent experts published an open letter urging the Attorneys General (AGs) of California (where OpenAI operates) and Delaware (where OpenAI is incorporated) to block OpenAI's planned restructuring into a for-profit entity. The letter argues the move would fundamentally undermine the organization's charitable mission by jeopardizing the governance safeguards designed to protect control over AGI from profit motives. OpenAI was founded with the charitable purpose to [...]

---
Outline:
(00:32) An Open Letter Attempts to Block OpenAI Restructuring
(04:24) SafeBench Winners
(08:59) Other News
---
First published:
April 29th, 2025
Source:
https://newsletter.safe.ai/p/an-open-letter-attempts-to-block
---
Want more? Check out our ML Safety Newsletter for technical safety research.
Narrated by TYPE III AUDIO.
Plus, AI-Enabled Coups.

In this edition: AI now outperforms human experts in specialized virology knowledge in a new benchmark; a new report explores the risk of AI-enabled coups. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.

An Expert Virology Benchmark

A team of researchers (primarily from SecureBio and CAIS) has developed the Virology Capabilities Test (VCT), a benchmark that measures an AI system's ability to troubleshoot complex virology laboratory protocols. Results on this benchmark suggest that AI has surpassed human experts in practical virology knowledge.

VCT measures practical virology knowledge, which has high dual-use potential. While AI virologists could accelerate beneficial research in virology and infectious disease prevention, bad actors could misuse the same capabilities to develop dangerous pathogens. Like the WMDP benchmark, the VCT is designed to evaluate practical dual-use scientific knowledge—in this case, virology. The benchmark consists of 322 multimodal questions [...]

---
Outline:
(00:29) An Expert Virology Benchmark
(04:04) AI-Enabled Coups
(07:58) Other news
---
First published:
April 22nd, 2025
Source:
https://newsletter.safe.ai/p/ai-safety-newsletter-52-an-expert
---
Want more? Check out our ML Safety Newsletter for technical safety research.
Narrated by TYPE III AUDIO.
Plus, AI 2027.

In this newsletter, we cover the launch of AI Frontiers, a new forum for expert commentary on the future of AI. We also discuss AI 2027, a detailed scenario describing how artificial superintelligence might emerge in just a few years. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.

AI Frontiers

Last week, CAIS introduced AI Frontiers, a new publication dedicated to gathering expert views on AI's most pressing questions. AI's impacts are wide-ranging, affecting jobs, health, national security, and beyond. Navigating these challenges requires a forum for varied viewpoints and expertise. In this story, we’d like to highlight the publication's initial articles to give you a taste of the kind of coverage you can expect from AI Frontiers.

Why Racing to Artificial Superintelligence Would Undermine America's National Security. Researchers Corin Katzke (also an author of this newsletter) and Gideon Futerman [...]

---
Outline:
(00:33) AI Frontiers
(05:01) AI 2027
(10:02) Other News
---
First published:
April 15th, 2025
Source:
https://newsletter.safe.ai/p/ai-safety-newsletter-51-ai-frontiers
---
Want more? Check out our ML Safety Newsletter for technical safety research.
Narrated by TYPE III AUDIO.
Plus, Detecting Misbehavior in Reasoning Models.

In this newsletter, we cover AI companies’ responses to the federal government's request for information on the development of an AI Action Plan. We also discuss an OpenAI paper on detecting misbehavior in reasoning models by monitoring their chains of thought. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.

On January 23, President Trump signed an executive order giving his administration 180 days to develop an “AI Action Plan” to “enhance America's global AI dominance in order to promote human flourishing, economic competitiveness, and national security.” Despite the rhetoric of the order, the Trump administration has yet to articulate many policy positions with respect to AI development and safety. In a recent interview, Ben Buchanan—Biden's AI advisor—interpreted the executive order as giving the administration time to develop its AI policies. The AI Action Plan will therefore likely [...]

---
First published:
March 31st, 2025
Source:
https://newsletter.safe.ai/p/ai-safety-newsletter-50-ai-action
---
Want more? Check out our ML Safety Newsletter for technical safety research.
Narrated by TYPE III AUDIO.
Plus, Measuring AI Honesty.

Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required. In this newsletter, we discuss two recent papers: a policy paper on national security strategy, and a technical paper on measuring honesty in AI systems. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.

Superintelligence Strategy

CAIS director Dan Hendrycks, former Google CEO Eric Schmidt, and Scale AI CEO Alexandr Wang have authored a new paper, Superintelligence Strategy. The paper (and an in-depth expert version) argues that the development of superintelligence—AI systems that surpass humans in nearly every domain—is inescapably a matter of national security. In this story, we introduce the paper's three-pronged strategy for national security in the age of advanced AI: deterrence, nonproliferation, and competitiveness.

Deterrence

The simultaneous power and danger of superintelligence presents [...]

---
Outline:
(00:20) Superintelligence Strategy
(01:09) Deterrence
(02:41) Nonproliferation
(04:04) Competitiveness
(05:33) Measuring AI Honesty
(09:24) Links
---
First published:
March 6th, 2025
Source:
https://newsletter.safe.ai/p/ai-safety-newsletter-49-superintelligence
---
Want more? Check out our ML Safety Newsletter for technical safety research.
Narrated by TYPE III AUDIO.
Superintelligence is destabilizing since it threatens other states’ survival—it could be weaponized, or states may lose control of it. Attempts to build superintelligence may face threats from rival states—creating a deterrence regime called Mutual Assured AI Malfunction (MAIM). In this paper, Dan Hendrycks, Eric Schmidt, and Alexandr Wang detail a strategy—focused on deterrence, nonproliferation, and competitiveness—for nations to navigate the risks of superintelligence.
Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.

In this newsletter, we explore two recent papers from CAIS. We’d also like to highlight that CAIS is hiring for editorial and writing roles, including for a new online platform for journalism and analysis regarding AI's impacts on national security, politics, and economics.

Utility Engineering

A common view is that large language models (LLMs) are highly capable but fundamentally passive tools, shaping their responses based on training data without intrinsic goals or values. However, a new paper from the Center for AI Safety challenges this assumption, showing that LLMs exhibit coherent and structured value systems.

Structured preferences emerge with scale. The paper introduces Utility Engineering, a framework for analyzing and controlling AI [...]

---
Outline:
(00:26) Utility Engineering
(04:48) EnigmaEval
---
First published:
February 18th, 2025
Source:
https://newsletter.safe.ai/p/ai-safety-newsletter-48-utility-engineering
---
Want more? Check out our ML Safety Newsletter for technical safety research.
Narrated by TYPE III AUDIO.
Plus, State-Sponsored AI Cyberattacks.

Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.

Reasoning Models

DeepSeek-R1 has been one of the most significant model releases since ChatGPT. After its release, DeepSeek's app quickly rose to the top of Apple's most-downloaded chart and NVIDIA saw a 17% stock decline. In this story, we cover DeepSeek-R1, OpenAI's o3-mini and Deep Research, and the policy implications of reasoning models.

DeepSeek-R1 is a frontier reasoning model. DeepSeek-R1 builds on the company's previous model, DeepSeek-V3, by adding reasoning capabilities through reinforcement learning training. R1 exhibits frontier-level capabilities in mathematics, coding, and scientific reasoning—comparable to OpenAI's o1. DeepSeek-R1 also scored 9.4% on Humanity's Last Exam—at the time of its release, the highest of any publicly available system. DeepSeek reports spending only about $6 million on the computing power needed to train V3—however, that number doesn’t include the full [...]

---
Outline:
(00:13) Reasoning Models
(04:58) State-Sponsored AI Cyberattacks
(06:51) Links
---
First published:
February 6th, 2025
Source:
https://newsletter.safe.ai/p/ai-safety-newsletter-47-reasoning
---
Want more? Check out our ML Safety Newsletter for technical safety research.
Narrated by TYPE III AUDIO.
Plus, Humanity's Last Exam, and the AI Safety, Ethics, and Society Course.

Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.

The Transition

The transition from the Biden to Trump administrations saw a flurry of executive activity on AI policy, with Biden signing several last-minute executive orders and Trump revoking Biden's 2023 executive order on AI risk. In this story, we review the state of play.

Trump signing first-day executive orders. Source.

The AI Diffusion Framework. The final weeks of the Biden Administration saw three major actions related to AI policy. First, the Bureau of Industry and Security released its Framework for Artificial Intelligence Diffusion, which updates the US’ AI-related export controls. The rule establishes three tiers of countries: 1) US allies, 2) most other countries, and 3) arms-embargoed countries. Companies headquartered in tier-1 countries can freely deploy AI chips in other [...]

---
Outline:
(00:16) The Transition
(04:38) CAIS and Scale AI Introduce Humanity's Last Exam
(08:03) AI Safety, Ethics, and Society Course
(09:21) Links
---
First published:
January 23rd, 2025
Source:
https://newsletter.safe.ai/p/ai-safety-newsletter-46-the-transition
---
Want more? Check out our ML Safety Newsletter for technical safety research.
Narrated by TYPE III AUDIO.