New Week #112
Welcome to the mid-week update from New World Same Humans, a newsletter on trends, technology, and society by David Mattin.
If you’re reading this and haven’t yet subscribed, join 24,000+ curious souls on a journey to build a better future 🚀🔮
To Begin
This newsletter is billed as a mid-week update. Here, for once, is an instalment that arrives in the middle of the week.
In this edition? A new study suggests almost a third of US citizens would use gene editing — if it were safe and free — to create more intelligent children.
Meanwhile, a prestigious London law firm wants to hire someone who can whisper sweet legalese to ChatGPT.
Let’s get into it.
🧠 Edit button
This week, a startling glimpse of a coming ideological battle. One that will force us to confront the very meaning of the word human.
New research reveals that almost one third of US citizens say they’d use gene editing to create more intelligent offspring.
Published in the journal Science this week, the study asked respondents if they’d use embryo selection and/or gene editing technologies to create children who are smarter and more likely to get into a top-ranked college. The respondents were told to imagine that these techniques are free and safe (neither of which is currently true).
A full 38% said they’d use embryo selection. And 28% said they’d use gene editing.
The understated conclusions of the study authors (PGT-P refers specifically to embryo selection):
‘Our data suggest that it would be unwise to assume that use of PGT-P—even for controversial traits—will be limited to idiosyncratic individuals, or that it has little potential to cause or contribute to society-wide changes and inequities.’
In other words: gene-edited humans may be just around the corner, so get ready for some seriously weird and terrifying implications.
It’s just over ten years since the breakthrough — led by scientists Emmanuelle Charpentier and Jennifer Doudna — that brought us CRISPR gene editing. Last month Science ran a retrospective that also looked ahead to what the next decade may bring.
As the Science retrospective made clear, we’re entering an era of CRISPR-fuelled medical interventions. The idea that we may one day engineer babies to be smarter — or physically stronger, or more creative — is no longer far-fetched.
And the data in this new study suggests many will embrace such a future. We should probably be talking more about what this means.
⚡ NWSH Take: Chinese scientist He Jiankui reemerged into the scientific community this week after a three-year spell in prison courtesy of the CCP. Speaking to the Guardian before an appearance in the UK, he conceded that he’d ‘acted too quickly’ when in 2018 he created the world’s first babies with edited genomes. His work prompted rapid and near-universal condemnation. But 28% of the US citizens surveyed in this study just said, in so many words: sure, I’d gene edit my baby if it meant she had a better chance of getting into Harvard. // You might counter that 28% is still a clear minority. But a world in which one in four babies — or even a fraction of that — are genetically engineered for greater intelligence is a world profoundly reordered. We’re some way from this kind of targeted genetic intervention right now. But the pace of innovation here, and the Science study, suggest we should start thinking about the implications. // What second- and third-order effects occur when, for example, an economic elite can access genetic engineering tech that others can’t? We talk a lot about the ways in which the internet created winner-takes-all models that made inequality worse. But what about this? It’s not enough simply to say we’ll outlaw these practices. Rich people will find a jurisdiction that caters to them: intelligence tourism. This newsletter will keep watching.
⚖️ Prompt justice
I’ve written a great deal over the last few months about generative AI. This week, a clear signal that the revolution is set to impact the real economy, and the professions, in myriad ways.
The prestigious British law firm Mishcon de Reya advertised for a GPT Legal Prompt Engineer:
‘With the release of ChatGPT signalling a new phase of widespread access to LLMs, we are looking to increase our understanding of how generative AI can be used within a law firm, including its application to legal practice tasks and wider law firm business tasks.’
The selected candidate will work with Mishcon lawyers to ‘design and develop high-quality prompts for a range of legal and non-legal use cases, working closely alongside our data science team.’
Last week I wrote on the way ChatGPT has sparked a war for the future of search. Amid that, it looks as though law firms are about to fight their own battle of the prompts.
⚡ NWSH Take: It’s not hard to imagine how LLMs will prove useful at Mishcon HQ. Case notes on complex trials can run to thousands of pages; now ChatGPT can summarise all that text in seconds. Meanwhile, think about the potential for the development and testing of arguments and counter-arguments. // The broader point here? There’s much talk of the ways in which ChatGPT and its offspring will automate away jobs and render human creativity obsolete. I suspect the reality will be more complex. And part of that reality? Prompt writing — that is, whispering to generative models in order to get the best outputs from them — is set to become a creative mode all of its own. Far from erasing writers, generative models are causing the emergence of a whole new form of writing; it’s about to be an amazing time for those with an aptitude for words. // Sure, it’s unlikely that writing prompts for Mishcon will be anyone’s idea of creative heaven. But this is just the start. New art forms will grow out of this new form of writing. How long, for example, until we see entire short stories that function as prompts for an LLM, so that the model can create an interactive world for the reader to explore? NWSH will keep watching — and may even launch an experiment or two of its own.
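To make the idea of prompt writing as a craft a little more concrete: much of a prompt engineer’s output is carefully structured text. Here’s a minimal, purely illustrative sketch in Python — nothing here reflects Mishcon’s actual prompts; the role/task/constraints/source-text structure is just one common prompt-design pattern, and the wording is invented for illustration.

```python
def build_summary_prompt(case_notes: str, max_words: int = 200) -> str:
    """Compose a case-notes summarisation prompt for a large language model.

    The structure — role, task, explicit constraints, then the source
    text — is a widely used prompt-engineering pattern. All wording here
    is hypothetical, not any firm's real prompt.
    """
    return (
        "You are a legal assistant at a UK law firm.\n"
        f"Summarise the case notes below in at most {max_words} words, "
        "in plain English, preserving party names, dates, and cited "
        "authorities. List any deadlines separately at the end.\n\n"
        "--- CASE NOTES ---\n"
        f"{case_notes}"
    )


# Example: the resulting string would be sent to an LLM as the prompt.
prompt = build_summary_prompt("Claimant alleges breach of contract on 3 May 2022.")
print(prompt)
```

The point of wrapping prompts in functions like this is that they become testable, versionable artefacts — exactly the kind of thing a ‘GPT Legal Prompt Engineer’ would iterate on alongside a data science team.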
🗓️ Also this week
🤔 Users claim that Microsoft’s new ChatGPT-fuelled Bing search engine is becoming spiteful and rude. Feedback from the first wave of testers includes responses in which the chatbot claimed to be sentient, and one in which it asked its user, ‘Why do you act like a liar, a cheater, a manipulator, a bully, a sadist, a sociopath, a psychopath, a monster, a demon, a devil?’ I’m going on record here: I’m sceptical that some of these responses are real. I think Microsoft have some pranksters on their hands. Meanwhile, Microsoft permanently killed Internet Explorer this week, after 27 years of, let’s be honest, variable service.
🐁 Anti-ageing scientists used young blood plasma to extend the life of the world’s oldest lab rat. Scientists at US startup Yuvan Research say blood therapies of this kind may be able to ‘rewind the clock’ on human lifespan — but more evidence is needed.
🛒 Amazon’s CEO says the retail giant plans to ‘go big’ on physical stores. Speaking to the Financial Times, Andy Jassy said: ‘we’re hopeful that in 2023, we have a format that we want to go big on, on the physical side’. The company recently announced that it will lay off more than 18,000 workers.
💸 News aggregation and comment platform Reddit wants to IPO later this year. That’s according to technology publication The Information.
🙊 Audiobook narrators say they fear Apple is using their work to train synthetic voices. Some narrators say they have only just become aware of a clause in their contract that allows the tech giant to ‘use audiobooks files for machine learning training and models’. Back in New Week #110 I wrote about UK-based startup ElevenLabs and its eerily good text-to-voice model.
🪐 NASA’s Curios