Yoshua Bengio: Reasoning through arguments against taking AI safety seriously (my take-aways)

Read University of Montreal Professor and Turing Award Winner YOSHUA BENGIO's AMAZING JULY 7, 2024 POST HERE (the graphics, stats and images were added by me). If this floats your boat, be sure to join me Thurs/Fri this week for my livestream sessions on AGI :)
This is brilliant stuff – here are my key highlights (I know… that's a lot… sorry!)
THE STAKES ARE THE BIGGEST – EVER. “The (AGI) issue is so hotly debated because the stakes are major: According to some estimates, quadrillions of dollars of net present value are up for grabs, not to mention political power great enough to significantly disrupt the current world order. I published a paper on multilateral governance of AGI labs and I spent a lot of time thinking about catastrophic AI risks and their mitigation, both on the technical side and the governance and political side. In the last seven months, I have been chairing (and continue to chair) the International Scientific Report on the Safety of Advanced AI (“the report”, below), involving a panel of 30 countries plus the EU and UN and over 70 international experts to synthesize the state of the science in AI safety, illustrating the broad diversity of views about AI risks and trends. Today, after an intense year of wrestling with these critical issues, I would like to revisit arguments made about the potential for catastrophic risks associated with AI systems anticipated in the future, and share my latest thinking”
“The most important thing to realize, through all the noise of discussions and debates, is a very simple and indisputable fact: while we are racing towards AGI or even ASI, nobody currently knows how such an AGI or ASI could be made to behave morally, or at least behave as intended by its developers and not turn against humans”

THE COORDINATION PROBLEM: “In addition, even if the way to control an ASI was known, political institutions to make sure that the power of AGI or ASI would not be abused by humans against humans at a catastrophic scale, to destroy democracy or bring about geopolitical and economic chaos or dystopia would still be missing. We need to make sure that no single human, no single corporation and no single government can abuse the power of AGI at the expense of the common good. We need to make sure that corporations do not use AGI to co-opt their governments, that governments do not use it to oppress their people, and that nations do not use it to dominate internationally. And at the same time, we need to make sure that we avoid catastrophic accidents of loss of control with AGI systems, anywhere on the planet. All this can be called the coordination problem, i.e., the politics of AI. If the coordination problem was solved perfectly, solving the AI alignment and control problem would not be an absolute necessity: we could “just” collectively apply the precautionary principle and avoid doing experiments anywhere with a non-trivial risk of constructing uncontrolled AGI…” MORE

THE AI ARMS-RACE IS COMING: “As of now, however, we are racing towards a world with entities that are smarter than humans and pursue their own goals – without a reliable method for humans to ensure those goals are compatible with human goals. Nonetheless, in my conversations about AI safety I have heard various arguments meant to support a “no worry” conclusion. My general response to most of these arguments is that given the compelling basic case for why the race to AGI could lead to danger, and given the high stakes, we should aim to have very strong evidence before concluding there is nothing to worry about”
CONSCIOUSNESS IS NOT A REQUIREMENT FOR REACHING AGI: “Consciousness is not necessary for either AGI or ASI (at least for most of the definitions of these terms that I am aware of), and it will not necessarily matter for potential existential AGI risk. What will matter most are the capabilities and intentions of ASI systems. If they can kill humans (it’s a capability among others that can be learned or deduced from other skills) and have such a goal (and we already have goal-driven AI systems), this could be highly dangerous unless a way to prevent this or countermeasures are found”. Read the whole thing.
[Image: State of the Art AI Performance – chart via Gerd Leonhard, futuristgerd.com]