AGI (Artificial General Intelligence), Myth or Reality?
Description
While Ed Zitron castigates the major tech players responsible for the peak of inflated expectations surrounding AI, many tech pundits are still touting AGI (Artificial General Intelligence) as being within reach. To find out whether AGI is a myth or a reality, I interviewed J.G. Ganascia, a long-time AI researcher and philosopher. In the course of our discussion, I gathered that the singularity and AGI are not the same thing. This interview set the record straight on many points, particularly regarding the notions of intelligence and sentience or consciousness. Its most striking conclusion is undoubtedly that, like Ray Bradbury, we should be less wary of pseudo-intelligent AIs, let alone AGI, than of the wily, intelligent humans behind these technologies.
Artificial General Intelligence (AGI), Myth or Reality?
<figure class="wp-caption aligncenter" id="attachment_81519" style="width: 1456px;"><figcaption class="wp-caption-text" id="caption-attachment-81519">In J.G. Ganascia’s opinion, it is absolutely essential to retain control over the machine. And be wary not of artificial intelligence, but of the people behind it. An advice previously delivered by Ray Bradbury in his time – Image produced with Midjourney, whom I trained not to show robots. But in this instance, it was hard to avoid. At least this one is under human control, which is reassuring…</figcaption></figure>
The Singularity, AGI and Superintelligence
J.G. Ganascia. Transhumanism led to many projections about artificial intelligence, of which the technological singularity was one of the avatars. There are others today, such as Nick Bostrom’s Superintelligence.
<figure class="wp-caption alignnone" style="width: 1220px;"><figcaption class="wp-caption-text">The singularity, AGI and Superintelligence are very different notions. Above, the cover of the book on Superintelligence by Swedish author Nick Bostrom</figcaption></figure>
But these terms are not interchangeable.
The singularity, technological dream or nightmare
JGG. The technological singularity is an idea from the 1950s. It claimed that at some point machines would become as powerful as humans, causing a shift in human history.
This meant that, at some point, machines would take over. Either they would overtake us completely, at which point humanity as we know it would disappear, or humanity would submit to the power of machines, and humans would become their slaves.
<figure class="wp-caption aligncenter" id="attachment_81532" style="width: 1000px;"><figcaption class="wp-caption-text" id="caption-attachment-81532">J.G. Ganascia at an AI conference in Paris in March 202. AGI isn’t on the agenda according to him. But beware of those pulling the strings.</figcaption></figure>
Another possibility was that we would graft ourselves onto machines and download our consciousness onto computers, and that this consciousness could then be reincarnated in robots. According to this theory, we could then continue to exist beyond our biological bodies. This is what I described in a novel written under the pen name Gabriel Naëj, This Morning, Mum Was Uploaded (in French only).
It is the story of a young man whose mother decided that, once deceased, her consciousness should be downloaded and she should be reincarnated as a robot. What is very disconcerting for the young man is that she chose the most beautiful body possible, that of a sex robot!
AGI and superintelligence
JGG. What we call AGI, Artificial General Intelligence, is a different kettle of fish. It’s the idea that, with current artificial intelligence techniques, specific human cognitive functions can be mimicked by machines, and that one day we’ll be able to emulate them all.
It means there is a way of deciphering intelligence, and that once we find it, it opens up infinite possibilities. In essence it’s a gateway to superintelligence. The very principle of the technological singularity assumed that there was a general intelligence and that all cognitive capacities could be emulated by machines.
General intelligence isn’t quite on a par with the technological singularity, yet it suggests the singularity is the ultimate goal. AGI has nothing to do with downloading human consciousness, though. It is just the ability to build machines with very high intellectual power.
This ties in with Nick Bostrom’s notion of superintelligence, which focuses on the day when the intelligence of machines exceeds that of humans.
There are links between these concepts, but they’re not quite the same thing.
As of 2024, is the singularity still a myth?
JGG. The early science fiction writers who mentioned the technological singularity, including Vernor Vinge, predicted that it would happen in 2023. Now, clearly, it’s not here yet. Unless we’ve all already been downloaded onto machines without knowing…
And yet these AIs are amazing!
JGG. Artificial intelligence has made considerable headway. Machines are capable of mastering language to the point where, when asked a question, they generate texts that are well formulated, even though not always relevant.
We can also produce images of people that bear an uncanny resemblance to real humans. Videos too. It’s all very intriguing.
Until now, one thought that language was first and foremost a matter of grammar, then syntax and vocabulary. Now we are realising that these linguistic abilities can be reproduced with just a few probabilities.
It’s really exciting from an intellectual point of view.
But that doesn’t mean that the machine will suddenly take over, or that it will have a will of its own. It doesn’t even mean that it will tell the truth.
These AIs almost write like humans. Most of the time their content is based on common knowledge. But sometimes this “common knowledge” is a little absurd. And as soon as you shift the situation a little, they produce results that are completely wrong. I often play tricks on them with logic puzzles, and I have great fun watching them fail.
It’s understandable, in fact, because that’s not what they were made for. They are just made of modules that select words based on probabilities.
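To make that last point concrete, here is a minimal, hypothetical sketch of what “selecting words based on probabilities” can mean. The bigram table and the words in it are invented purely for illustration; real models use neural networks trained on vast corpora rather than hand-written tables, but the sampling principle is the same.

```python
import random

# Toy next-word probabilities, conditioned on the previous word only.
# These numbers are made up for illustration.
bigram_probs = {
    "the": {"machine": 0.4, "human": 0.35, "singularity": 0.25},
    "machine": {"learns": 0.5, "answers": 0.3, "fails": 0.2},
    "human": {"thinks": 0.6, "decides": 0.4},
}

def next_word(previous: str) -> str:
    """Sample the next word from the probability distribution for `previous`."""
    candidates = bigram_probs.get(previous)
    if not candidates:
        return "<end>"
    words = list(candidates)
    weights = [candidates[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

def generate(start: str, max_len: int = 5) -> str:
    """Chain samples together: every word is chosen from probabilities alone."""
    sequence = [start]
    for _ in range(max_len):
        word = next_word(sequence[-1])
        if word == "<end>":
            break
        sequence.append(word)
    return " ".join(sequence)

print(generate("the"))  # e.g. "the machine answers"
```

Nothing in this loop knows whether the sentence it produces is true; it only knows which word is likely to follow which. Scaled up enormously, that is the kind of mechanism Ganascia is pointing at.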
Yann Le Cun is dead against GenAI, yet he believes in AGI. Are you prepared to change your mind on the subject?
JGG. Absolutely not! I think there’s a misunderstanding regarding the meaning of the term ‘intelligence’. Besides, artificial intelligence is a scientific discipline.
What AI does is simulate different cognitive functions. What are they? Perception, reasoning, memory (in the sense of processing information, not storing it) and communication. We have made considerable progress in these areas.
Take perception, for example. AI is capable of recognising an individual out of hundreds of thousands, whereas we ourselves can’t always remember the people we met the day before. These performances are extraordinary.
But there is a misunderstanding when one states that machines will be more intelligent than humans. Intelligence is a set of cognitive abilities. It may well be that each cognitive capacity is better emulated by machines than by humans. Yet that doesn’t mean machines will be more intelligent than us, since they have no consciousness.
Machines do not “see” things nor have a will of their own. In any case, consciousness is the crux of the problem.
There’s another meaning for the word ‘intelligence’, which is related to ingenuity or inventiveness.
An ingenious or clever pupil is said to be ‘intelligent’ because he or she can solve everyday or mathematical problems. Are machines cleverer than we are, though? It depends. There are some cases, of course, where they outdo us. We’ve known for a very long time, 25 years now, that machines play chess better than we do. The same has more recently become true for the game of Go. From that point of view, of course, they are more intelligent, but that doesn’t mean they’re better than we are overall.