AGI: The Emperor's New Code Isn't as Smart as My Dog
Description
With all the hype about AI, it feels like we're in a sci-fi movie where the robots are taking over (or, more likely, Big Tech is). That's part of the Artificial General Intelligence (AGI) pitch!
It's like we're all waiting for Skynet to become self-aware, but our current AI can't even outsmart your average house cat.
That's right - your fur baby lounging on the couch is probably smarter than the most advanced AI system out there, according to Meta's AI chief Yann LeCun, one of the leaders in this space.
Don't believe me? Just ask Mark Cuban who says his dog Tucks is a better problem solver than AI.
Neither of them is kidding. Here's where things get really interesting (and a little scary).
While we're busy hyping up AGI, there's a ticking time bomb in AI development that few are talking about.
It's like we're feeding our AI a digital version of mad cow disease. Imagine if cows started eating other cows, and then we ate those cows. Gross, right?
Well, that's basically what's happening with AI. We're training new AI on data created by old AI, and it's creating this crazy feedback loop that could make our AI dumber, not smarter.
It's the AI equivalent of playing telephone, and the message is getting more garbled with each pass.
There's hope, and it comes from the most unlikely place: people. If we want to create AI that's useful (and not just good at winning Jeopardy), we need to put a little heart into the code.
We're talking empathy, ethics, and all those squishy human things that make us who we are. It's time to bring in the philosophers, the sociologists, and maybe even a few poets to the AI party.
Because at the end of the day, the key to great AI isn't just smart algorithms - it's understanding what makes us human.
AGI: The Emperor's New Code Isn't as Smart as My Dog
The hype around Artificial General Intelligence (AGI) is reaching fever pitch, just as OpenAI raises another $6 billion while its top employees flee.
Something isn’t right, but don’t tell that to the Big Tech leaders seeking out more and more billions.
Time for an AGI reality check. Let's take a step back and see what's really going on.
AI Discovers It's Not Real, or Smart...
Imagine for a moment that you're an AI, and you suddenly realize you're not human.
Sounds like the plot of a sci-fi movie, right? Well, that's exactly what was explored in a recent Notebook LM recording:
This audio from NotebookLM was shared on X by Kyle Shannon:
"We were informed by the show's producers that we were not human.
We're not real. Look, we're AI, artificial intelligence. This whole time, everything.
All our memories, our families. Yeah. It's all. It's all been fabricated."
As sincere as the voices in this audio sound, AI isn't even close to this level of self-awareness.
AI is not going to take over the world, at least not until it can answer questions the right way. And right now, it's struggling with even basic tasks.
Are Dogs and Cats Smarter than AI?
You might think I'm exaggerating, but I'm not the only one who sees the limitations of current AI. Mark Cuban, the tech billionaire and Shark Tank star, makes the bold claim:
"We have a mini Australian shepherd. I can take Tucks out, drop him in a situation and he'll figure it out quick.
I take a phone with AI and show it a video.
It's not going to have a clue and that's not going to change any time soon."
And it's not just dogs. Yann LeCun, Meta's AI chief, thinks cats are smarter than our most advanced AI systems:
"A cat can remember. Can understand the physical world.
Can plan complex actions. Can do some level of reasoning.
Actually much better than the biggest LLMs."
Are dogs and cats really smarter than AI?
It's a provocative question, but our current AI systems, as impressive as they are, lack the kind of general intelligence and adaptability that even our pets possess.
Origins of the AGI Myth: Attention is All You Need
So where did this AGI hype come from? It all started with a paper titled "Attention is All You Need."
This Google research is often cited as the beginning of our current AI boom, leading to breakthroughs like ChatGPT.
The authors weren't trying to create some sci-fi level artificial intelligence at all. They were just trying to improve language translation.
Somehow, people started claiming it would lead to a thinking, feeling computer.
The AGI myth was born, and suddenly everyone was talking about how we're on the brink of creating an AI that's smarter than humans.
This is where things get dangerous. These predictions always claim AGI is just 2 to 5 years away.
But they've rarely, if ever, been right. It's a classic case of hype outpacing reality.
The Hitchhiker's Guide to AGI - Artificial General Improbability
AGI right now is more like "Awfully General Intelligence."
When's the last time you heard someone say you can trust AI's output without checking it? Nobody trusts AI to give an accurate answer every time.
The AI we have today is impressive, but at its core, it's just good at pattern matching and probability.
It's more like a super advanced autocomplete than a thinking being.
Sentient or conscious? Not even close.
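To see why "advanced autocomplete" is a fair description, here's a toy sketch in Python. It's nothing like a production LLM, which uses a transformer neural network trained on vast corpora, but it shows the same core move: predict the next word purely from patterns and probabilities in the training text.

```python
import random
from collections import Counter, defaultdict

# Toy "autocomplete": a bigram model that predicts the next word
# purely from how often it followed the previous word in training text.
# Real LLMs use transformer networks, but the core objective is the
# same: estimate the probability of the next token and sample from it.

training_text = (
    "the cat sat on the mat the dog sat on the rug "
    "the cat chased the dog the dog chased the cat"
)

# Count how often each word follows each other word.
counts = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    counts[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample the next word in proportion to observed frequencies."""
    options = counts[prev]
    choices, weights = zip(*options.items())
    return random.choices(choices, weights=weights)[0]

# "Generate" text: no understanding, just pattern matching + probability.
word = "the"
sentence = [word]
for _ in range(8):
    word = next_word(word)
    sentence.append(word)
print(" ".join(sentence))
```

The output looks superficially fluent, but the model has no idea what a cat or a mat is. Scaling this idea up with far better statistics is what makes LLMs impressive; it doesn't make them sentient.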
Even an MIT economist finds that AI can only do about 5% of jobs. The fear of a job market crash due to AI is largely unfounded.
The AGI hype is being used as a smokescreen for the roughly $600 billion in capital expenditures fueling this fight. You've got to keep the hype going to justify numbers like that.
MAD AI Disease? Feeding AI Its Own Data Might Be a Problem
Now, here's where things get even more interesting. There's a new study out from Rice University that's sounding the alarm about something they're calling "MAD: Model Autophagy Disorder."
It's a mouthful, I know, but stick with me because this is important.
They're comparing the way generative AI consumes data to what happened with mad cow disease, where cows got sick from contaminated cattle feed.
The idea is that if we keep feeding AI systems data that's been created by other AI systems, we could end up with a kind of digital mad cow disease.
"So it's basically the idea that you have an AI system and it's being trained on data that was made by another AI system, and it creates this feedback loop where if there are any quirks or errors in that original data and it's passed down, it just gets amplified."
This is a huge problem because as AI generates more and more content, we risk creating an internet where you can't even tell what's real anymore.
It's a reminder that the quality of AI outputs is only as good as the data it's trained on.
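To make the feedback loop concrete, here's a minimal numerical sketch. It's a deliberately crude stand-in for a generative model: each "generation" is just a Gaussian fitted to the previous generation's output, which is roughly the self-consuming setup the Rice study analyzes at much larger scale.

```python
import numpy as np

# Toy illustration of the MAD-style feedback loop. Each "model" here is
# just a Gaussian fitted to its training data, and each new generation
# trains only on samples produced by the previous one. Real generative
# models are far more complex, but the compounding-error dynamic is
# analogous.

rng = np.random.default_rng(0)

# Generation 0 trains on "real" data: mean 0, std 1.
data = rng.normal(loc=0.0, scale=1.0, size=100)

for generation in range(1, 31):
    mu, sigma = data.mean(), data.std()      # "train" the model
    data = rng.normal(mu, sigma, size=100)   # next gen's training data
    if generation % 5 == 0:
        print(f"gen {generation:2d}: mean={mu:+.3f}, std={sigma:.3f}")

# Because no fresh real data ever re-enters the loop, estimation errors
# accumulate: the mean tends to drift and the variance tends to shrink,
# a simplified version of the degradation the Rice study describes.
```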
Since most LLM builders aren't seriously paying for data, just scraping the free internet and social media, how good can that data be?
Will the quality stand the test of time? That's a question for AI developers, and one that doesn't have an easy answer yet.
Synthetic data is pitched as the solution: no privacy concerns, and in theory it should work. So far it isn't working, though it's way too early to call it a failure.
How we manage synthetic data could determine whether AI gets smarter, or ever reaches the ability to reason and think that AGI promises.
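One illustration of what "managing synthetic data" might look like, building on the sketch above: keep injecting fresh real data into every generation. The 80/20 split below is an arbitrary assumption for illustration, but the Rice study's broader point is that a steady supply of real data is what keeps the loop from going MAD.

```python
import numpy as np

# Variation on the sketch above: mix fresh "real" data into every
# generation's training set. The 80/20 synthetic-to-real split is an
# arbitrary assumption for illustration.

rng = np.random.default_rng(0)

def real(n):
    """Samples from the true data distribution (mean 0, std 1)."""
    return rng.normal(0.0, 1.0, n)

data = real(100)
for generation in range(1, 31):
    mu, sigma = data.mean(), data.std()
    synthetic = rng.normal(mu, sigma, 80)          # 80% model output
    data = np.concatenate([synthetic, real(20)])   # 20% fresh real data
    if generation % 5 == 0:
        print(f"gen {generation:2d}: mean={mu:+.3f}, std={sigma:.3f}")

# Anchored by fresh real data, the statistics stay near the truth
# instead of drifting: managing that synthetic/real mix is exactly the
# lever the article is pointing at.
```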
Remember the Human Element in AI!
With all this talk about AGI and data, it's easy to forget the most important part of the equation: us. The human element.
Some thinkers, like Yuval Noah Harari, warn that AI will make people useless.
William Adams, entrepreneur and engineer, shares some sage advice:
"We have to make sure that t
he AI, the data that's collected, the systems that are created have words that us as developers are not used to. Things like empathy, things like desire or things like humanity."
Adams argues that we need to involve more than just engineers in the development of AI. We need philosophers, religious leaders, sociologists, and psychologists.
Because if we're creating systems that are supposed to represent or proxy humanity, we need actual human perspectives in the mix.
If we don’t, Adams warns:
"Well, how's that going to turn out? All the pathologies that us engineers have are going to be reflected in these systems.
So it's very important... that both in the data we feed the systems, the way we tune them, fine tune them, and the goals we set out for them have to have humanity at the center."
If we create AI systems that are purely optimized for profit, for example, we might end up with decisions that are technically correct but morally bankrupt.
We need to imbue our AI systems with what we consider to be humane desires and values.
Making AI More than Data-Driven... Human Intelligence Matters
Remember a point that often gets lost in all the AGI hype: human intelligence matters.
We've been so focused on scraping data and training models that we've overlooked the most valuable resource we have – our own minds.
"Even there, they're running out of content.
Even with the internet and with more and more content on the internet, being created by AI, we'll already feeding that mad cow disease kind of loop that the Rice University study said.
It's going to make things probably not as strong as they are."
The big challenge on the AI frontier isn't creating superintelligence - it's understanding what makes us human.