The Nonlinear Library: LessWrong

LW - MIRI's September 2024 newsletter by Harlan
Update: 2024-09-17
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: MIRI's September 2024 newsletter, published by Harlan on September 17, 2024 on LessWrong.

MIRI updates

Aaron Scher and Joe Collman have joined the Technical Governance Team at MIRI as researchers. Aaron previously did independent research related to sycophancy in language models and mechanistic interpretability, while Joe previously did independent research related to AI safety via debate and contributed to field-building work at MATS and BlueDot Impact.

In an interview with PBS News Hour's Paul Solman, Eliezer Yudkowsky briefly explains why he expects smarter-than-human AI to cause human extinction.

In an interview with The Atlantic's Ross Andersen, Eliezer discusses the reckless behavior of the leading AI companies, and the urgent need to change course.

News and links

Google DeepMind announced a hybrid AI system capable of solving International Mathematical Olympiad problems at the silver-medalist level. In the wake of this development, a Manifold prediction market significantly increased its odds that AI will achieve gold level by 2025, a milestone to which Paul Christiano gave less than 8% odds and Eliezer gave at least 16% odds in 2021.

The computer scientist Yoshua Bengio discusses and responds to some common arguments people have for not worrying about the AI alignment problem.

SB 1047, a California bill establishing whistleblower protections and mandating risk assessments for some AI developers, has passed the State Assembly and moved on to the desk of Governor Gavin Newsom, who will either veto it or sign it into law. The bill has received opposition from several leading AI companies, but has also received support from a number of employees of those companies, as well as many academic researchers.

At the time of this writing, prediction markets think it's about 50% likely that the bill will become law.

In a new report, researchers at Epoch AI estimate how big AI training runs could get by 2030, based on current trends and potential bottlenecks. They predict that by the end of the decade it will be feasible for AI companies to train a model with 2e29 FLOP, which is about 10,000 times the amount of compute used to train GPT-4.
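The stated ratio can be sanity-checked with a quick calculation. Note that the GPT-4 training-compute figure of roughly 2e25 FLOP used below is a commonly cited external estimate, not a number given in the newsletter:

```python
# Sanity check of the compute ratio from the Epoch AI report.
# Assumption: GPT-4's training run used roughly 2e25 FLOP
# (a commonly cited estimate, not a figure from this newsletter).
gpt4_flop = 2e25
projected_2030_flop = 2e29  # Epoch AI's feasibility estimate for 2030

ratio = projected_2030_flop / gpt4_flop
print(f"{round(ratio):,}x")  # prints 10,000x
```

Under that assumption, the report's "about 10,000 times GPT-4" framing checks out.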

Abram Demski, who previously worked at MIRI as part of our recently discontinued Agent Foundations research program, shares an update about his independent research plans, some thoughts on public vs private research, and his current funding situation.

You can subscribe to the MIRI Newsletter here.

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org