AI Safety Newsletter
Author: Centre for AI Safety
Description
Narrations of the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.
This podcast also contains narrations of some of our publications.
ABOUT US
The Center for AI Safety (CAIS) is a San Francisco-based research and field-building nonprofit. We believe that artificial intelligence has the potential to profoundly benefit the world, provided that we can develop and use it safely. However, in contrast to the dramatic progress in AI, many basic problems in AI safety have yet to be solved. Our mission is to reduce societal-scale risks associated with AI by conducting safety research, building the field of AI safety researchers, and advocating for safety standards.
Learn more at https://safe.ai
47 Episodes
Plus, Chinese researchers used Llama to create a military tool for the PLA, a Google AI system discovered a zero-day cybersecurity vulnerability, and Complex Systems. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.

The Trump Circle on AI Safety. The incoming Trump administration is likely to significantly alter the US government's approach to AI safety. For example, Trump is likely to immediately repeal Biden's Executive Order on AI. However, some of Trump's circle appear to take AI safety seriously. The most prominent AI safety advocate close to Trump is Elon Musk, who earlier this year supported SB 1047. However, he is not alone. Below, we’ve gathered some promising perspectives from other members of Trump's circle and incoming administration. [Image: Trump and Musk at UFC 309.] Robert F. Kennedy Jr., Trump's pick for Secretary of Health and Human Services, said in [...]

Outline:
(00:24) The Trump Circle on AI Safety
(02:41) Chinese Researchers Used Llama to Create a Military Tool for the PLA
(04:14) A Google AI System Discovered a Zero-Day Cybersecurity Vulnerability
(05:27) Complex Systems
(08:54) Links
---
First published:
November 19th, 2024
Source:
https://newsletter.safe.ai/p/ai-safety-newsletter-44-the-trump
---
Want more? Check out our ML Safety Newsletter for technical safety research.
Narrated by TYPE III AUDIO.
Plus, AI and Job Displacement, and AI Takes Over the Nobels. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.

White House Issues First National Security Memo on AI. On October 24, 2024, the White House issued the first National Security Memorandum (NSM) on Artificial Intelligence, accompanied by a Framework to Advance AI Governance and Risk Management in National Security. The NSM identifies AI leadership as a national security priority. The memorandum states that competitors have employed economic and technological espionage to steal U.S. AI technology. To maintain a U.S. advantage in AI, the memorandum directs the National Economic Council to assess the U.S.'s competitive position in:
- Semiconductor design and manufacturing
- Availability of computational resources
- Access to workers highly skilled in AI
- Capital availability for AI development
The Intelligence Community must make gathering intelligence on competitors' operations against the [...]

Outline:
(00:18) White House Issues First National Security Memo on AI
(03:22) AI and Job Displacement
(09:13) AI Takes Over the Nobels
---
First published:
October 28th, 2024
Source:
https://newsletter.safe.ai/p/ai-safety-newsletter-43-white-house
---
Want more? Check out our ML Safety Newsletter for technical safety research.
Narrated by TYPE III AUDIO.
Plus, OpenAI's o1, and AI Governance Summary. Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.

Newsom Vetoes SB 1047. On Sunday, Governor Newsom vetoed California's Senate Bill 1047 (SB 1047), the most ambitious legislation to date aimed at regulating frontier AI models. The bill, introduced by Senator Scott Wiener and covered in a previous newsletter, would have required AI developers to test frontier models for hazardous capabilities and take steps to mitigate catastrophic risks. (CAIS Action Fund was a co-sponsor of SB 1047.) Newsom states that SB 1047 is not comprehensive enough. In his letter to the California Senate, the governor argued that “SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves [...]

Outline:
(00:18) Newsom Vetoes SB 1047
(01:55) OpenAI's o1
(06:44) AI Governance
(10:32) Links
---
First published:
October 1st, 2024
Source:
https://newsletter.safe.ai/p/ai-safety-newsletter-42-newsom-vetoes
---
Want more? Check out our ML Safety Newsletter for technical safety research.
Narrated by TYPE III AUDIO.
Plus, Ranking Models by Susceptibility to Jailbreaking, and Machine Ethics. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.

The Next Generation of Compute Scale. AI development is on the cusp of a dramatic expansion in compute scale. Recent developments across multiple fronts—from chip manufacturing to power infrastructure—point to a future where AI models may dwarf today's largest systems. In this story, we examine key developments and their implications for the future of AI compute. xAI and Tesla are building massive AI clusters. Elon Musk's xAI has brought its Memphis supercluster—“Colossus”—online. According to Musk, the cluster has 100k Nvidia H100s, making it the largest supercomputer in the world. Moreover, xAI plans to add 50k H200s in the next few months. For comparison, Meta's Llama 3 was trained on 16k H100s. Meanwhile, Tesla's “Gigafactory Texas” is expanding to house an AI supercluster. Tesla's Gigafactory supercomputer [...]

Outline:
(00:18) The Next Generation of Compute Scale
(04:36) Ranking Models by Susceptibility to Jailbreaking
(06:07) Machine Ethics
---
First published:
September 11th, 2024
Source:
https://newsletter.safe.ai/p/ai-safety-newsletter-41-the-next
---
Want more? Check out our ML Safety Newsletter for technical safety research.
Narrated by TYPE III AUDIO.
Plus, NVIDIA Delays Chip Production, and Do AI Safety Benchmarks Actually Measure Safety? Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.

SB 1047, the Most-Discussed California AI Legislation. California's Senate Bill 1047 has sparked discussion over AI regulation. While state bills often fly under the radar, SB 1047 has garnered attention due to California's unique position in the tech landscape. If passed, SB 1047 would apply to all companies performing business in the state, potentially setting a precedent for AI governance more broadly. This newsletter examines the current state of the bill, which has been amended several times in response to stakeholder feedback. We'll cover recent debates surrounding the bill, support from AI experts, opposition from the tech industry, and public opinion based on polling. The bill mandates safety protocols, testing procedures, and reporting requirements for covered AI models. The bill was [...]

Outline:
(00:18) SB 1047, the Most-Discussed California AI Legislation
(04:38) NVIDIA Delays Chip Production
(06:49) Safetywashing: Do AI Safety Benchmarks Actually Measure Safety Progress?
(10:22) Links
---
First published:
August 21st, 2024
Source:
https://newsletter.safe.ai/p/aisn-40-california-ai-legislation
---
Want more? Check out our ML Safety Newsletter for technical safety research.
Narrated by TYPE III AUDIO.
Plus, Safety Engineering Overview. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.

Implications of a Trump administration for AI policy. Trump named Ohio Senator J.D. Vance—an AI regulation skeptic—as his pick for vice president. This choice sheds light on the AI policy landscape under a future Trump administration. In this story, we cover: (1) Vance's views on AI policy, (2) views of key players in the administration, such as Trump's party, donors, and allies, and (3) why AI safety should remain bipartisan. Vance has pushed for reducing AI regulations and making AI weights open. At a recent Senate hearing, Vance accused Big Tech companies of overstating risks from AI in order to justify regulations to stifle competition. This led tech policy experts to expect that Vance would favor looser AI regulations. However, Vance has also praised Lina Khan, Chair of the Federal Trade [...]

Outline:
(00:18) Implications of a Trump administration for AI policy
(04:57) Safety Engineering
(08:49) Links
---
First published:
July 29th, 2024
Source:
https://newsletter.safe.ai/p/ai-safety-newsletter-39-implications
---
Want more? Check out our ML Safety Newsletter for technical safety research.
Narrated by TYPE III AUDIO.
Plus, “Circuit Breakers” for AI systems, and updates on China's AI industry. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.

Supreme Court Decision Could Limit Federal Ability to Regulate AI. In a recent decision, the Supreme Court overruled the 1984 precedent Chevron v. Natural Resources Defense Council. In this story, we discuss the decision's implications for regulating AI. Chevron allowed agencies to flexibly apply expertise when regulating. The “Chevron doctrine” required courts to defer to a federal agency's interpretation of a statute when that statute was ambiguous and the agency's interpretation was reasonable. Its elimination curtails federal agencies’ ability to regulate—including, as this article from LawAI explains, their ability to regulate AI. The Chevron doctrine expanded federal agencies’ ability to regulate in at least two ways. First, agencies could draw on their technical expertise to interpret ambiguous statutes [...]
---
First published:
July 9th, 2024
Source:
https://newsletter.safe.ai/p/ai-safety-newsletter-38-supreme-court
---
Want more? Check out our ML Safety Newsletter for technical safety research.
Narrated by TYPE III AUDIO.
US Launches Antitrust Investigations. The U.S. Government has launched antitrust investigations into Nvidia, OpenAI, and Microsoft. The U.S. Department of Justice (DOJ) and Federal Trade Commission (FTC) have agreed to investigate potential antitrust violations by the three companies, the New York Times reported. The DOJ will lead the investigation into Nvidia while the FTC will focus on OpenAI and Microsoft. Antitrust investigations are conducted by government agencies to determine whether companies are engaging in anticompetitive practices that may harm consumers and stifle competition. Nvidia investigated for GPU dominance. The New York Times reports that concerns have been raised about Nvidia's dominance in the GPU market, “including how the company's software locks [...]

Outline:
(00:10) US Launches Antitrust Investigations
(02:58) Recent Criticisms of OpenAI and Anthropic
(05:40) Situational Awareness
(09:14) Links
---
First published:
June 18th, 2024
Source:
https://newsletter.safe.ai/p/ai-safety-newsletter-37-us-launches
---
Want more? Check out our ML Safety Newsletter for technical safety research.
Narrated by TYPE III AUDIO.
Voluntary Commitments are Insufficient. AI companies agree to RSPs in Seoul. Following the second AI Global Summit held in Seoul, the UK and Republic of Korea governments announced that 16 major technology organizations, including Amazon, Google, Meta, Microsoft, OpenAI, and xAI, have agreed to a new set of Frontier AI Safety Commitments. Some commitments from the agreement include:
- Assessing risks posed by AI models and systems throughout the AI lifecycle.
- Setting thresholds for severe risks, defining when a model or system would pose intolerable risk if not adequately mitigated.
- Keeping risks within defined thresholds, such as by modifying system behaviors and implementing robust security controls.
- Potentially halting development or deployment if risks cannot be sufficiently mitigated.
These commitments [...]

Outline:
(00:03) Voluntary Commitments are Insufficient
(02:45) Senate AI Policy Roadmap
(05:18) Chapter 1: Overview of Catastrophic Risks
(07:56) Links
---
First published:
May 30th, 2024
Source:
https://newsletter.safe.ai/p/ai-safety-newsletter-35-voluntary
---
Want more? Check out our ML Safety Newsletter for technical safety research.
Narrated by TYPE III AUDIO.
OpenAI and Google Announce New Multimodal Models. In the current paradigm of AI development, there are long delays between the release of successive models. Progress is largely driven by increases in computing power, and training models with more computing power requires building large new data centers. More than a year after the release of GPT-4, OpenAI has yet to release GPT-4.5 or GPT-5, which would presumably be trained on 10x or 100x more compute than GPT-4, respectively. These models might be released over the next year or two, and could represent large spikes in AI capabilities. But OpenAI did announce a new model last week, called GPT-4o. The “o” stands for “omni,” referring to the fact that the model can use text, images, videos [...]

Outline:
(00:03) OpenAI and Google Announce New Multimodal Models
(02:36) The Surge in AI Lobbying
(05:29) How Should Copyright Law Apply to AI Training Data?
(10:10) Links
---
First published:
May 16th, 2024
Source:
https://newsletter.safe.ai/p/ai-safety-newsletter-35-lobbying
---
Want more? Check out our ML Safety Newsletter for technical safety research.
Narrated by TYPE III AUDIO.
Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.

AI Labs Fail to Uphold Safety Commitments to UK AI Safety Institute. In November, leading AI labs committed to sharing their models before deployment to be tested by the UK AI Safety Institute. But reporting from Politico shows that these commitments have fallen through. OpenAI, Anthropic, and Meta have all failed to share their models with the UK AISI before deployment. Only Google DeepMind, headquartered in London, has given pre-deployment access to UK AISI. Anthropic released the most powerful publicly available language model, Claude 3, without any window for pre-release testing by the UK AISI. When asked for comment, Anthropic co-founder Jack Clark said, “Pre-deployment testing is a nice idea but very difficult to implement.” When asked about their concerns with pre-deployment testing [...]

Outline:
(00:03) AI Labs Fail to Uphold Safety Commitments to UK AI Safety Institute
(02:17) New Bipartisan AI Policy Proposals in the US Senate
(06:35) Military AI in Israel and the US
(11:44) New Online Course on AI Safety from CAIS
(12:38) Links
---
First published:
May 1st, 2024
Source:
https://newsletter.safe.ai/p/ai-safety-newsletter-34-new-military
---
Want more? Check out our ML Safety Newsletter for technical safety research.
Narrated by TYPE III AUDIO.
Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required. This week, we cover:
- Consolidation in the corporate AI landscape, as smaller startups join forces with larger funders.
- Several countries have announced new investments in AI, including Singapore, Canada, and Saudi Arabia.
- Congress's budget for 2024 provides some but not all of the requested funding for AI policy. The White House's 2025 proposal makes more ambitious requests for AI funding.
- How will AI affect biological weapons risk? We reexamine this question in light of new experiments from RAND, OpenAI, and others.

AI Startups Seek Support From Large Financial Backers. As AI development demands ever-increasing compute resources, only well-resourced developers can compete at the frontier. In practice, this means that AI startups must either partner with the world's [...]

Outline:
(00:45) AI Startups Seek Support From Large Financial Backers
(03:47) National AI Investments
(05:16) Federal Spending on AI
(08:35) An Updated Assessment of AI and Biorisk
(15:35) $250K in Prizes: SafeBench Competition Announcement
(16:08) Links
---
First published:
April 11th, 2024
Source:
https://newsletter.safe.ai/p/ai-safety-newsletter-33-reassessing
---
Want more? Check out our ML Safety Newsletter for technical safety research.
Narrated by TYPE III AUDIO.
Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.

Measuring and Reducing Hazardous Knowledge. The recent White House Executive Order on Artificial Intelligence highlights risks of LLMs in facilitating the development of bioweapons, chemical weapons, and cyberweapons. To help measure these dangerous capabilities, CAIS has partnered with Scale AI to create WMDP: the Weapons of Mass Destruction Proxy, an open source benchmark with more than 4,000 multiple choice questions that serve as proxies for hazardous knowledge across biology, chemistry, and cyber. This benchmark not only helps the world understand the relative dual-use capabilities of different LLMs, but it also creates a path forward for model builders to remove harmful information from their models through machine unlearning techniques. Measuring hazardous knowledge in bio, chem, and cyber. Current evaluations of dangerous AI capabilities have [...]

Outline:
(00:03) Measuring and Reducing Hazardous Knowledge
(04:35) Language models are getting better at forecasting
(07:51) Proposals for Private Regulatory Markets
(14:25) Links
---
First published:
March 7th, 2024
Source:
https://newsletter.safe.ai/p/ai-safety-newsletter-32-measuring
---
Want more? Check out our ML Safety Newsletter for technical safety research.
Narrated by TYPE III AUDIO.
Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required. This week, we’ll discuss:
- A new proposed AI bill in California which requires frontier AI developers to adopt safety and security protocols, and clarifies that developers bear legal liability if their AI systems cause unreasonable risks or critical harms to public safety.
- Precedents for AI governance from healthcare and biosecurity.
- The EU AI Act and job opportunities at their enforcement agency, the AI Office.

A New Bill on AI Policy in California. Several leading AI companies have public plans for how they’ll invest in safety and security as they develop more dangerous AI systems. A new bill in California's state legislature would codify this practice as a legal requirement, and clarify the legal liability faced by developers [...]

Outline:
(00:33) A New Bill on AI Policy in California
(04:38) Precedents for AI Policy: Healthcare and Biosecurity
(07:56) Enforcing the EU AI Act
(08:55) Links
---
First published:
February 21st, 2024
Source:
https://newsletter.safe.ai/p/aisn-31-a-new-ai-policy-bill-in-california
---
Want more? Check out our ML Safety Newsletter for technical safety research.
Narrated by TYPE III AUDIO.
Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.

Compute Investments Continue To Grow. Pausing AI development has been proposed as a policy for ensuring safety. For example, an open letter last year from the Future of Life Institute called for a six-month pause on training AI systems more powerful than GPT-4. But one interesting fact about frontier AI development is that it comes with natural pauses that can last many months or years. After releasing a frontier model, it takes time for AI developers to construct new compute clusters with larger numbers of more advanced computer chips. The supply of compute is currently unable to keep up with demand, meaning some AI developers cannot buy enough chips for their needs. This explains why OpenAI was reportedly limited by GPUs last year. [...]

Outline:
(00:06) Compute Investments Continue To Grow
(03:48) Developments in Military AI
(07:19) Japan and Singapore Support AI Safety
(08:57) Links
---
First published:
January 24th, 2024
Source:
https://newsletter.safe.ai/p/aisn-30-investments-in-compute-and
---
Want more? Check out our ML Safety Newsletter for technical safety research.
Narrated by TYPE III AUDIO.
Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.

A Provisional Agreement on the EU AI Act. On December 8th, the EU Parliament, Council, and Commission reached a provisional agreement on the EU AI Act. The agreement regulates the deployment of AI in high-risk applications such as hiring and credit pricing, and it bans private companies from building and deploying AI for unacceptable applications such as social credit scoring and individualized predictive policing. Despite lobbying by some AI startups against regulation of foundation models, the agreement contains risk assessment and mitigation requirements for all general purpose AI systems. Specific requirements apply to AI systems trained with more than 10^25 FLOP, such as Google's Gemini and OpenAI's GPT-4. Minimum basic transparency requirements apply to all GPAI. The provisional agreement regulates foundation models — using the [...]

Outline:
(00:06) A Provisional Agreement on the EU AI Act
(04:55) Questions about Research Standards in AI Safety
(06:48) The New York Times sues OpenAI and Microsoft for Copyright Infringement
(10:34) Links
---
First published:
January 4th, 2024
Source:
https://newsletter.safe.ai/p/aisn-29-progress-on-the-eu-ai-act
---
Want more? Check out our ML Safety Newsletter for technical safety research.
Narrated by TYPE III AUDIO.
Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required. This week we’re looking closely at AI legislative efforts in the United States, including:
- Senator Schumer's AI Insight Forum
- The Blumenthal-Hawley framework for AI governance
- Agencies proposed to govern digital platforms
- State and local laws against AI surveillance
- The National AI Research Resource (NAIRR)

Senator Schumer's AI Insight Forum. The CEOs of more than a dozen major AI companies gathered in Washington on Wednesday for a hearing with the Senate. Organized by Democratic Majority Leader Chuck Schumer and a bipartisan group of Senators, this was the first of many hearings in their AI Insight Forum. After the hearing, Senator Schumer said, “I asked everyone in the room, ‘Is government needed to play a role in regulating AI?’ and [...]

Outline:
(00:30) Senator Schumer's AI Insight Forum
(01:20) The Blumenthal-Hawley Framework
(03:09) Agencies Proposed to Govern Digital Platforms
(04:46) Deepfakes and Watermarking Legislation
(06:12) State and Local Laws Against AI Surveillance
(06:52) National AI Research Resource (NAIRR)
(08:18) Links
---
First published:
September 19th, 2023
Source:
https://newsletter.safe.ai/p/the-landscape-of-us-ai-legislation
---
Want more? Check out our ML Safety Newsletter for technical safety research.
Narrated by TYPE III AUDIO.
As 2023 comes to a close, we want to thank you for your continued support for AI safety. This has been a big year for AI and for the Center for AI Safety. In this special-edition newsletter, we highlight some of our most important projects from the year. Thank you for being part of our community and our work.

Center for AI Safety's 2023 Year in Review. The Center for AI Safety (CAIS) is on a mission to reduce societal-scale risks from AI. We believe this requires research and regulation. These both need to happen quickly (due to unknown timelines on AI progress) and in tandem (because either one is insufficient on its own). To achieve this, we pursue three pillars of work: research, field-building, and advocacy. Research: CAIS conducts both technical and conceptual research on AI safety. We pursue multiple overlapping strategies which can be layered together [...]

Outline:
(00:27) Center for AI Safety's 2023 Year in Review
(00:56) Research
(03:37) Field-Building
(07:35) Advocacy
(10:04) Looking Ahead
(10:23) Support Our Work
---
First published:
December 21st, 2023
Source:
https://newsletter.safe.ai/p/aisn-28-center-for-ai-safety-2023
---
Want more? Check out our ML Safety Newsletter for technical safety research.
Narrated by TYPE III AUDIO.
Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.

Defensive Accelerationism. Vitalik Buterin, the creator of Ethereum, recently wrote an essay on the risks and opportunities of AI and other technologies. He responds to Marc Andreessen's manifesto on techno-optimism and the growth of the effective accelerationism (e/acc) movement, and offers a more nuanced perspective. Technology is often great for humanity, the essay argues, but AI could be an exception to that rule. Rather than giving governments control of AI so they can protect us, Buterin argues that we should build defensive technologies that provide security against catastrophic risks in a decentralized society. Cybersecurity, biosecurity, resilient physical infrastructure, and a robust information ecosystem are some of the technologies Buterin believes we should build to protect ourselves from AI risks. Technology has risks, but [...]

Outline:
(00:06) Defensive Accelerationism
(03:55) Retrospective on the OpenAI Board Saga
(07:58) Klobuchar and Thune's “light-touch” Senate bill
(10:23) Links
---
First published:
December 7th, 2023
Source:
https://newsletter.safe.ai/p/aisn-27-defensive-accelerationism
---
Want more? Check out our ML Safety Newsletter for technical safety research.
Narrated by TYPE III AUDIO.
Also, Results From the UK Summit, and New Releases From OpenAI and xAI. Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required. This week's key stories include:
- The UK, US, and Singapore have announced national AI safety institutions.
- The UK AI Safety Summit concluded with a consensus statement, the creation of an expert panel to study AI risks, and a commitment to meet again in six months.
- xAI, OpenAI, and a new Chinese startup released new models this week.

UK, US, and Singapore Establish National AI Safety Institutions. Before regulating a new technology, governments often need time to gather information and consider their policy options. But during that time, the technology may diffuse through society, making it more difficult for governments to intervene. This process, termed the Collingridge Dilemma, is a fundamental challenge in technology policy. But recently [...]

Outline:
(00:36) UK, US, and Singapore Establish National AI Safety Institutions
(03:53) UK Summit Ends with Consensus Statement and Future Commitments
(05:39) New Models From xAI, OpenAI, and a New Chinese Startup
(09:28) Links
---
First published:
November 15th, 2023
Source:
https://newsletter.safe.ai/p/national-institutions-for-ai-safety
---
Want more? Check out our ML Safety Newsletter for technical safety research.
Narrated by TYPE III AUDIO.