As the digital age keeps evolving, there is no shortage of doomsday scenarios arising from the use of AI. And while much of the public already views it as an existential threat to human life, Nobel laureates and big-name celebrities have issued another red alert emphasizing just how dire the threat is. They signed a statement calling specifically for a ban on the development of superintelligent AI systems, or ASI, until experts reach a consensus that the technology can be developed safely.
But what actually is superintelligent AI?
“What underlies all this is this idea that’s driven all the progress in AI, that really intelligence is all about information processing,” said Max Tegmark, head of the Future of Life Institute and a professor at MIT researching AI. “And even though our brain is very good at it, it is fundamentally a biological computer, and it’s clearly possible to build other kinds of computers that are just vastly smarter than us in every way. That’s what artificial superintelligence is.”
Tegmark joined The World’s host Marco Werman to discuss artificial superintelligence and why he believes it poses such a threat.
Marco Werman: So, I mean, it is a quantum leap beyond AI itself, correct?
Max Tegmark: Today’s AI is already better than us at speaking 200 languages, extremely good at math, and, in the last four years, has gotten very good at reading and writing. But it is still a ways below human level in certain other tasks. Depending on who you ask, they think it’s going to take two, five or 10 years. … I’m very humble about how long it’s going to take, but people have kind of stopped saying it’s going to take 100 years or 50 years. Almost all of the leaders in the field think it is going to happen in our lifetime.
We may have qualms about ChatGPT and its cousins; we might be critical of them, but I don’t think many of us are at the point of being afraid of them. What makes superintelligence not just problematic, but threatening in a very literal way?
Because intelligence gives power. If you go down to the zoo and ask yourself, “Who’s in the cages? Is it the humans or the tigers?” the answer obviously is that the smarter species put the less smart one in the cages. We dominate Earth, right? We dominate the tigers and everyone else, not because we’re stronger, but because we’re smarter. We were smarter than the Neanderthals; that’s why things didn’t go so well for them.
So, Alan Turing, the godfather of our field, picked up on this in 1951 and said that if we ever build a new species of superintelligent robots, the default outcome is [that] they are in charge. Which raises the question, why would we do that then, right? What most people instead want are AI tools that, by definition, we can control. Tools [for curing] cancer, tools that eliminate traffic accidents. But nonetheless, that’s not what these companies are talking about. What are they talking about? They’re talking about superintelligence: they actually want to build this effectively new, smarter species.
A lot of people don’t like that. And the decision to bring a new, smarter species to Earth, which is sort of equivalent to letting ourselves get invaded by aliens from space who are superior in their tech, is not something that should be left to just a bunch of unelected tech nerds in San Francisco who are drinking too much Red Bull. This is something everybody needs to have a say in.
As for the big tech companies, why would they want to do this? Why would they want to give the smarts away to artificial superintelligence?
That’s a really good question. I’ve known all of these CEOs privately for many years, and when I talk [to them], they’ll usually say to me things like, “It’s kind of inevitable this is going to happen, and it’s just so dangerous that only I can do it safely. I have to do it.”
You mentioned the diversity of people who have signed this letter. I mean, we also see [former US President] Barack Obama, Richard Branson and former Irish President Mary Robinson, all concerned that ASI could displace jobs, pose ethical risks, etc. How could it actually become uncontrollable, though? Can you walk us through how that happens or what it might look like?
It’s pretty obvious that from the perspective of a lion, humans are uncontrollable. There’s nothing they can do to prevent us from ultimately putting them in cages or hunting them, right? Because we’re just so much smarter than them. The lions can’t predict exactly how we’re going to trick them; they don’t even understand how a rifle works, right? But that doesn’t prevent us from dominating the lions. If you were the western black rhino, wondering how these weird little things with two legs and two arms could drive you to extinction when you’re stronger than them, you wouldn’t understand it, but we did actually drive that species to extinction within the last few decades, right?
So, I think the catch here is to not think of superintelligence as just a new technology like the internet, but to actually think of it as a new robot species, regardless of whether it drives us extinct or not. When I play with my two-year-old son, I want him to have a meaningful future; even if he doesn’t get driven to extinction, I don’t want him to be economically obsolete. And I think this is the key reason we’ve seen such a diverse set of signatories. You know, a lot of faith leaders feel that human dignity is just really important. We have a choice. We can either build machines, robots and other intelligent systems that are tools for us that help us, empower us … or we can build robots and other systems which just directly compete with us.
If superintelligence gets out of hand, could humans just pull the plug on it?
No. It’s just like if an alien space fleet comes and invades us, or even if someone is firing a heat-seeking missile at your airplane, right? It has AI in it. Would you think, “I’m just going to unplug the heat-seeking missile”? No, of course not.
Back to the notion that many big companies, like Meta, are chasing artificial superintelligence. Do they have an argument for why we need this technology?
Sam Altman, for example, the CEO of the world’s most powerful AI company, wrote this article saying that his vision for what should happen is [that] we should merge with machines. But you know, when I play with my two-year-old son, Leo, I don’t like the idea that some tech bro in San Francisco decides that my son is going to merge with machines. What if he doesn’t want that? You know, what if my wife and I don’t want that? This is an incredibly radical thing to just force on us all undemocratically.
I mean, if artificial superintelligence could soon outthink, outplan and outmaneuver any human-made safeguards, is more regulation and oversight even possible at this point?
Well, of course, because we don’t have superintelligence yet, even though we’re getting closer. This is the perfect time to put these safeguards in place. You might think it’s inevitable that any tech that can [generate] a lot of profit and power will be built, but that’s obviously not true if you look at historical facts. If I could do human cloning on you, make a million clones of you and also tweak your DNA so that you were even better at some stuff, can you imagine how much money I could make off of that? Yet, after [much] discussion in the ’70s, scientists and the general public concluded that this was just disgusting. This could cause us to lose control over our species. We just didn’t want it. It got super stigmatized and eventually banned. And how many human clones have you met on the street today? Zero, right? We’ve stopped a lot of very profitable tech before. We can do it again.
This interview has been lightly edited and condensed for clarity.