New Week #128
Description
Welcome to this update from New World Same Humans, a newsletter on trends, technology, and society by David Mattin.
If you’re reading this and haven’t yet subscribed, join 25,000+ curious souls on a journey to build a better future 🚀🔮
To Begin
One week until the Christmas break: where did 2023 go?
This week, DeepMind serve up proof that a large language model can create new knowledge.
Also, more news from the accelerating story that is the march of the humanoid robots. It’s clear next year will be a pivotal one for this technology.
And researchers hook up brain organoids to microchips to create a new kind of speech recognition system.
Let’s get into it!
🧮 Fun times at DeepMind
This week, yet another step forward in the epic journey we’ve taken with AI in 2023.
Researchers at Google DeepMind used a large language model (LLM) to create authentically new mathematical knowledge. Their new FunSearch system — so called because it searches through mathematical functions — wrote code that solved a famous geometrical puzzle called the cap set problem.
The researchers used an LLM called Codey, based on Google’s PaLM 2, which can generate code intended to solve a given maths problem. They tied Codey to an algorithm that evaluates its proposed solutions, and feeds the best ones back to iterate upon.
They set out the cap set problem in Python, leaving blank spaces for the code that would express a solution. After a couple of million tries — and a few days — the mission was complete. FunSearch produced code that solved this geometrical problem, which mathematicians have been puzzling over since the early 1970s.
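The loop described above — an LLM proposes candidate code, an automatic evaluator scores it, and the best candidates seed the next round — can be sketched in miniature. Everything here is illustrative: the function names are mine, the "LLM" is a random mutator, and the evaluator scores a toy target rather than cap-set constructions.

```python
import random

def propose_candidate(best_so_far):
    """Stand-in for the LLM (Codey) proposing a new solution.
    Here we just mutate one coefficient of the best candidate."""
    candidate = list(best_so_far)
    i = random.randrange(len(candidate))
    candidate[i] += random.choice([-1, 1])
    return candidate

def evaluate(candidate):
    """Stand-in for FunSearch's automatic evaluator: higher is better.
    The real evaluator scored candidate cap-set constructions."""
    target = [3, 1, 4, 1, 5]
    return -sum(abs(a - b) for a, b in zip(candidate, target))

def funsearch_loop(iterations=5000, seed=0):
    """Propose, evaluate, keep the best, and iterate."""
    random.seed(seed)
    best = [0, 0, 0, 0, 0]
    best_score = evaluate(best)
    for _ in range(iterations):
        cand = propose_candidate(best)
        score = evaluate(cand)
        if score > best_score:  # feed only improvements back in
            best, best_score = cand, score
    return best, best_score

best, best_score = funsearch_loop()
print(best, best_score)
```

The structure — generator, verifier, feedback — is the point: because the evaluator checks every proposal, anything the loop returns is verifiably correct, which is why FunSearch's output counts as new knowledge rather than plausible-sounding text.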
DeepMind say it’s the first time an AI has produced verifiable and authentically new information to solve a longstanding scientific problem.
‘To be honest with you,’ said Alhussein Fawzi, one of the DeepMind researchers behind the project, ‘we have hypotheses, but we don’t know exactly why this works.’
⚡ NWSH Take: For pure mathematicians, a solution to the cap set problem is a big deal. For the rest of us, not so much. But this result really matters, because it resolves a central and much-discussed question about LLMs: can they create new knowledge? // Until this week, many believed LLMs would never do this — that they’d only ever be able to synthesise and remix knowledge that already existed in their training data. But there was no solution to this problem in the data used to train Codey; instead, it created novel and true information all of its own making. This points to a future in which LLMs solve problems in, for example, statistics and engineering, or can create new and viable scientific theories. // In other words, this little and somewhat nerdish research paper heralds a revolution. So far, only we humans have been able to push back the frontiers of what we know. It’s now clear that in 2024, we’ll have a partner in that enterprise. // For this reason and so many others, I’m increasingly convinced that an unprecedented socio-technological acceleration is coming. It’s been a wild year; things are about to get even wilder.
🤖 Like a human
A quick glimpse of two stories this week. Both point in one direction: the humanoids are coming.
Tesla released a new video of its humanoid robot, Optimus. The Generation 2 Optimus can do some pretty fancy stuff, including delicately handling an egg:
Meanwhile, researchers at the University of Tokyo hooked a robot up to GPT-4.
The Alter3 robot is able to understand spoken instructions and adopt a range of poses without those poses being pre-programmed into its database.
In other words, Alter3 is responding in real-time to natural spoken language; it’s an embodied version of GPT-4, best understood as a kind of text-to-motion model.
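A text-to-motion layer of this kind can be sketched in a few lines: send the spoken instruction to an LLM, ask for joint angles in a structured format, parse the reply, and clamp it to safe limits before it reaches the motors. The prompt format, joint names, and `call_llm` stub below are my assumptions, not the Tokyo team's actual interface.

```python
import json

PROMPT_TEMPLATE = (
    "You control a humanoid robot. Reply ONLY with JSON mapping joint "
    "names (head_pitch, left_elbow, right_elbow) to angles in degrees "
    "for this pose: {instruction}"
)

def call_llm(prompt):
    """Stand-in for a GPT-4 API call; returns a canned reply here."""
    return '{"head_pitch": -20, "left_elbow": 90, "right_elbow": 90}'

def instruction_to_pose(instruction):
    """Turn a natural-language instruction into joint angles."""
    reply = call_llm(PROMPT_TEMPLATE.format(instruction=instruction))
    pose = json.loads(reply)
    # Clamp every angle to a safe range before sending to the motors.
    return {joint: max(-180, min(180, angle)) for joint, angle in pose.items()}

pose = instruction_to_pose("raise both arms and look down")
print(pose)
```

The key move is that no pose is pre-programmed: the mapping from words to motion lives entirely in the language model, with the robot-side code reduced to parsing and safety checks.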
⚡ NWSH Take: The closing months of 2023 have brought a welter of humanoid robot news. Amazon are now trialling the Digit humanoid in some US fulfilment centres. The makers of Digit, Agility Robotics, are about to open the world’s first humanoid mass-production factory in Oregon. And the CCP says it plans to transform China’s economy via an army of these devices. Next year, then, will prove a pivotal one for the longstanding dream that is an automatic human. And Elon Musk wants Optimus to be the One Bot That Rules Them All. // The tricks we see Optimus performing in this new video are pre-programmed. But Tesla is building the world’s most capable machine vision AI via an unbeatable data set — funnelled to them from hundreds of thousands of on-road cars — and the world’s most powerful supercomputer for machine vision, Dojo. Agility Robotics stole an early lead by getting Digit inside Amazon warehouses. But long term, it’s hard to see how anyone beats Optimus. // If humanoids are indeed imminent, some big questions are looming. When humanoids outnumber people, says Musk, ‘it’s not even clear what the economy means at that point’. Next year, we’ll have to confront this prospect anew.
👾 Interface this
Also this week, some fascinating news on organoids and the future of human-machine interface.
Researchers at Indiana University Bloomington grew brain organoids — essentially clumps of brain cells — in a lab, and attached them to computer chips. When they connected this brain-chip composite to an AI system, they found it was able to perform computational tasks, and even do simple speech recognition.
Clips of spoken language were turned into electrical signals and fed to the brain-chip hybrid, which the researchers call Brainoware. The researchers found that the Brainoware was able to process these signals in a structured way and feed back signals of its own to the AI system, which decoded them as speech.
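The setup described above is, in spirit, reservoir computing: the organoid acts as a fixed, untrained nonlinear system that transforms input signals, and only a simple external readout is trained to decode its responses. Below is a toy sketch of that idea, with the organoid replaced by a random recurrent network; the sizes, data, and task are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_RES = 8, 64
W_in = rng.normal(size=(N_RES, N_IN))                      # input -> reservoir (fixed)
W_res = rng.normal(size=(N_RES, N_RES)) / np.sqrt(N_RES)   # recurrent weights (fixed)

def reservoir_states(signals):
    """Drive the fixed 'organoid' with input clips; return final states."""
    states = []
    for clip in signals:
        s = np.zeros(N_RES)
        for t in range(clip.shape[0]):
            s = np.tanh(W_in @ clip[t] + W_res @ s)
        states.append(s)
    return np.stack(states)

def make_clip(label):
    """Toy stand-in for a spoken clip: one of two noisy signal patterns."""
    base = np.ones((10, N_IN)) * (1.0 if label else -1.0)
    return base + 0.1 * rng.normal(size=(10, N_IN))

X = [make_clip(i % 2) for i in range(40)]
y = np.array([i % 2 for i in range(40)], dtype=float)

S = reservoir_states(X)
# Train only the external readout (the decoding 'AI system'), by least squares.
W_out, *_ = np.linalg.lstsq(S, y, rcond=None)
preds = (S @ W_out > 0.5).astype(float)
accuracy = (preds == y).mean()
print(accuracy)
```

The appeal of the approach is visible even in this toy: the hard, expensive part of the computation happens in a substrate that is never trained at all, and learning is pushed out to a cheap linear decoder at the edge.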
Lead scientist on the project, Feng Guo, says the result points to the possibility of new kinds of super-efficient bio-computers.
⚡ NWSH Take: Welcome to the weird — and somewhat terrifying — world of organoids. It’s only a week since I last wrote about them; they’ve become a NWSH obsession. I can’t understand why they’re not getting more attention; last year brain organoids taught themselves to play the video game Pong, ffs. // Okay, I’ve calmed down. We’re a long way from viable technologies here. Culturing brain organoids, and then sustaining them long enough and in large enough numbers to do anything useful, is extremely hard. But in the Pong story and this week’s Brainoware news we see a new form of human-machine interface blinking into fragile life. We see, too, a future in which we’re able to grow more computational power in the lab. This story is sure to evolve; I’ll keep watching.
🗓️ Also this week
🧠 Researchers at Western Sydney University say they’ll switch on the world’s first human brain-scale supercomputer in 2024. The DeepSouth computer will be capable of 228 trillion synaptic operations per second, roughly the number believed to occur in the human brain. The researchers say DeepSouth will help us understand more about both the brain, and possible routes to AGI.
⚖️ UK judges are now allowed to use ChatGPT to help them craft their legal rulings. New guidance from the Judicial Office for England and Wales says ChatGPT can be used to help judges summarise large volumes of information. The guidance also warns about ChatGPT’s tendency to hallucinate.
🌊 New research shows that frozen methane under ocean beds is more vulnerable to thawing than previously believed. Methane is a potent greenhouse gas; the researchers say the methane frozen under our oceans contains as much carbon as all of the remaining oil and gas on Earth. If released, this methane could significantly accelerate global heating.
🚗 Tesla has recalled more than 2 million cars after the US regulator found its Autopilot system is defective. The recall applies to every car sold since the launch of Autopilot in 2015. But this is a ‘recall’ in name only; Elon Musk says Tesla wi