AF - Priorities for the UK Foundation Models Taskforce by Andrea Miotti
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Priorities for the UK Foundation Models Taskforce, published by Andrea Miotti on July 21, 2023 on The AI Alignment Forum.
The UK government recently established the Foundation Models Taskforce, focused on AI safety, modelled on the Vaccine Taskforce, and backed by £100M in funding. Founder, investor and AI expert Ian Hogarth leads the new organization.
The establishment of the Taskforce shows the UK's intention to be a leading player in the greatest governance challenge of our times: keeping humanity in control of a future with increasingly powerful AIs. This is no small feat, and will require very ambitious policies that anticipate the rapid developments in the AI field, rather than just reacting to them.
Here are some recommendations on what the Taskforce should do. The recommendations fall into three categories: Communication and Education about AI Risk, International Coordination, and Regulation and Monitoring.
Communication and Education about AI Risk
The Taskforce is uniquely positioned to educate and communicate about AI development and risks. Here is how it could do so:
Private education
The Taskforce should organize private education sessions for UK Members of Parliament, Lords, and high-ranking civil servants, in the form of presentations, workshops, and closed-door Q&As with Taskforce experts. These would help bridge the information gap between policymakers and the fast-moving AI field.
A new platform: ai.gov.uk
The Taskforce should take a proactive role in disseminating knowledge about AI progress, the state of the AI field, and the Taskforce's own actions:
The Taskforce should publish bi-weekly or monthly bulletins and reports on the state of AI progress and AI risk on an official government website. It can start doing this right away on the UK government's research and statistics portal.
The Taskforce should set up ai.gov.uk, an online platform modelled after the UK's COVID-19 dashboard. The platform's main page should be a regularly updated dashboard showing key information about AI progress and the Taskforce's progress towards its goals. ai.gov.uk should display a progress bar trending towards 100% for each of the Taskforce's key objectives.
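To make the dashboard idea concrete, here is a minimal sketch of how objective tracking could be represented as structured data, loosely in the spirit of the COVID-19 dashboard's open data feed. The objective names, fields, and figures below are illustrative assumptions, not anything the Taskforce has published.

```python
# Illustrative sketch only: a machine-readable record behind each
# ai.gov.uk progress bar. All objective names and numbers are
# hypothetical examples, not actual Taskforce objectives.
from dataclasses import dataclass
from datetime import date

@dataclass
class Objective:
    name: str           # one of the Taskforce's key objectives
    progress_pct: int   # 0-100, rendered as a progress bar on the dashboard
    last_updated: date  # a stale dashboard quickly loses credibility

objectives = [
    Objective("Publish first state-of-AI bulletin", 100, date(2023, 8, 1)),
    Objective("Collect safety plans from frontier labs", 40, date(2023, 8, 1)),
    Objective("Stand up model evaluation capability", 15, date(2023, 8, 1)),
]

# Render a plain-text version of the progress bars.
for o in objectives:
    bar = "#" * (o.progress_pct // 10)
    print(f"{o.name:<45} [{bar:<10}] {o.progress_pct}%")
```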
ai.gov.uk should also include a "Safety Plans of AI Companies" monthly report, with key insights visualized on the dashboard.
The Taskforce should send an official questionnaire to each frontier AI company to compile this report. This questionnaire should contain questions about companies' estimated risk of human extinction caused by the development of their AIs, their timelines until the existence of powerful and autonomous AI systems, and their safety plans regarding development and deployment of frontier AI models.
There is no need to make the questionnaire mandatory. For companies that don't respond or respond only to some questions, the relevant information on the dashboard should be left blank, or filled in with a "best guess" or "most relevant public information" curated by Taskforce experts.
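As a sketch of how voluntary, partially answered questionnaires could still populate the dashboard, the logic below leaves unanswered questions blank and falls back to clearly labelled, expert-curated public information where it exists. The question keys and function name are hypothetical, chosen only to mirror the three question areas above.

```python
# Hypothetical sketch: assembling a company's row in the
# "Safety Plans of AI Companies" report from a voluntary,
# possibly incomplete questionnaire response.

QUESTIONS = [
    "extinction_risk_estimate",   # company's estimated risk of human extinction
    "timeline_to_autonomous_ai",  # timeline until powerful, autonomous systems
    "safety_plan",                # plans for safe development and deployment
]

def dashboard_entry(response: dict, curated: dict) -> dict:
    """Prefer the company's own answer; otherwise use the Taskforce's
    clearly labelled best guess; otherwise leave the field blank."""
    entry = {}
    for q in QUESTIONS:
        if q in response:
            entry[q] = response[q]
        elif q in curated:
            entry[q] = f"(Taskforce best guess) {curated[q]}"
        else:
            entry[q] = ""  # left blank, as proposed above
    return entry

# Example: a company that answered only one of the three questions.
print(dashboard_entry(
    {"safety_plan": "Published a responsible scaling policy"},
    {"timeline_to_autonomous_ai": "No public estimate available"},
))
```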
Public-facing communications
Taskforce members should use press conferences, official posts on the Taskforce's website, and editorials, in addition to ai.gov.uk, to educate the public about AI development and risks. Key topics to cover in these public-facing communications include:
Frontier AI development is aimed at autonomous, superhuman, general agents, not just at better chatbots or the automation of individual tasks. These are and will increasingly be AIs capable of making their own plans and taking action in the real world.
No one fully understands how these systems function, what their capabilities and limits are, or how to control and restrict them. All of these remain unsolved technical challenges.
Consensus on the so...