How Apple's Vision Pro & AI Will Forever Change Friendship
Dear Friends,
(With an audio version read by a real human, me, above.)
I’m indulging in an intermission this week from the Millennial midlife series because, as of yesterday’s Apple event, I am convinced that we’ll look back at 2023 as the year that changed everything. My prediction is that we’ll look back at the 2020-2022 pandemic with faint memories of baking sourdough, as a mere prologue to the year that sci-fi arrived and the very notion of humanity changed. A lot has been written about AI’s existential threat and effects on jobs, but I haven’t seen a thorough analysis of how it might transform the way we relate to one another. And this week’s newsletter certainly is thorough — one of the longest I’ve written — so I’ll preview my thinking before you commit to 15 minutes of reading or listening:
* We underestimate how much has changed over the past 20 years and we forget how rudimentary today’s technologies felt when they first came out. Compared to the last two decades, we should expect 10x more techno-socio-political change over the next 20 years.
* Until a few months ago, I thought that virtual reality and augmented reality were losing bets. Then I started using Character.ai and now I think that the next generation of kids will have more (and deeper) relationships with AI friends in VR/AR spaces than with their human friends in real life. (I know, sad.) Already we have to compete with phones to get the attention of our loved ones; soon we’ll have to compete with charismatic, attentive, funny, perfect AI friends.
* I used to think of my daily journaling practice as leaving a record of reflections and memories for my future self. Now, I think about it as training an immortal AI version of me that will last forever. It’s really weird.
* Interspecies love isn’t just possible; it’s normal. (Ask my dog.) Also, all relationships are a little manipulative and a little co-dependent, especially with our future AI friends.
* If we can’t compete with AI friends, can we at least inspire a new Romantic Movement? Also, can artificial intelligence and augmented reality help us become better friends with real-life humans?
You could argue that all I do in this piece is describe a world that science fiction writers have been warning us about for decades. And that is largely my point: This is the year that science fiction became non-fiction.
We underestimate the last 20 years
Facebook/Meta turns 20 next year. When the iPhone turned 15 last year, the Wall Street Journal made an adorable mini-documentary about “How Apple Transformed a Generation.”
“Try to remember life before the iPhone,” it dares us. 20 years ago practically all of our social interactions were offline and we never spent more than two minutes a day looking at our phones. Ezra Klein encourages a thought experiment: Imagine that you time-travel back to 1970 and tell someone that you will invent a tiny device that will offer you the sum of all human knowledge. You can look up any question, any person, any scientific paper and it’s immediately available to you. Now, imagine then telling that same person that you will invent a tiny device that will distract the mind and make us more vain, polarized, and distrustful. Of course, both of those inventions came true, except that they were a single invention.
The web + social media + smartphones changed everything. And yet, what I want to emphasize for this newsletter is just how unimpressive it all was at the start. Facebook was an online directory, Instagram was a way to make your grainy digital photos look even older, and Twitter was blogging but with fewer features. The first iPhone couldn’t record video, didn’t have apps or GPS, and took a solid minute to load a website. The way we use our phones today was a leap of imagination in 2007 when Steve Jobs famously announced three products (a mobile internet browser, an mp3 player, and a phone) that turned out to be one.
How do you define intelligence? And when is it artificial?
I want to get to why I think that it will be difficult for human friends to compete with AI friends, but first I need to tackle that most discomfiting question: How do we know that the way humans think is different from the way machines think? And do we have non-religious language to describe the difference? I wade into some of the academic debate here, so feel free to skip ahead to the next section.
In a thought-provoking interview with Cade Metz, the so-called Godfather of AI, Geoffrey Hinton, makes a distinction between an unwise decision and an unfortunate one. Hinton says that his decades of work to model software on the structure of the brain were not unwise, but have turned out to be unfortunate. He worries that AI will flood us with misinformation, displace meaningful work, and lead to Terminator-like robot soldiers.
But AI skeptics like Gary Marcus ask: Why do we call chatbots “intelligent”? All they do, after all, is predict the next string of text based on the last string of text. That is not intelligence, they argue, but just statistical correlation. Emily Bender and her co-authors claim in an influential paper that AI chatbots are merely “stochastic parrots” — which is to say they just repeat things at random and we eagerly assign meaning to their randomness. There is a section of their 2021 paper, “Coherence in the Eye of the Beholder,” which tries its damnedest to distinguish between human-to-human communication and computer-to-human communication. They argue that only human-to-human communication is “jointly constructed” with “shared common ground” and “communicative intent.” Text generated by AI chatbots, on the other hand, “is not grounded in communicative intent, any model of the world, or any model of the reader’s state of mind. It can’t have been, because the training data never included sharing thoughts with a listener, nor does the machine have the ability to do that.”
I want to agree with this, but I’m just not convinced. The more I think about it, the more I’m swayed by Sam Altman’s view that we are all so-called stochastic parrots; that we all construct what we’re going to say next based on what we have seen and heard in the past. There is nothing special or unique about how a human communicates with another human versus a computer. In the end, it’s all just inputs and outputs. “What makes you so sure I'm not ‘just’ an advanced pattern-matching program?” asks Matt Yglesias, and I have yet to find a persuasive response.
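To see what “just predicting the next string of text based on the last string” means in practice, here is a deliberately toy sketch: a bigram model that counts which word follows which, then samples the next word in proportion to those counts. This is my illustration, not how modern chatbots actually work (they use neural networks over subword tokens trained on vastly more text), but it makes the “statistical correlation” framing concrete.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it and how often."""
    counts = defaultdict(lambda: defaultdict(int))
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def next_word(counts, prev):
    """Sample the next word in proportion to how often it followed `prev`."""
    followers = counts[prev]
    return random.choices(list(followers), weights=list(followers.values()))[0]

corpus = "the cat sat on the mat and the cat slept"
model = train_bigrams(corpus)
# In this corpus, "the" was followed by "cat" twice and "mat" once,
# so the model "parrots" cat about two-thirds of the time.
print(next_word(model, "the"))
```

Nothing in this program understands cats or mats; it only reproduces observed correlations. The debate above is whether human speech is, at bottom, a far more elaborate version of the same trick.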
I guess VR has a future after all
We have known that Apple was developing an AR/VR headset since before 2019, when Kevin Kelly published the Wired cover story, “AR Will Spark the Next Big Tech Platform.” I was sure that VR would be a flop: Who would choose to wear an expensive headset to play chess when you could play in a park? Why ride a virtual bike instead of the real thing? Why put on a headset to pretend you’re in a movie theater instead of going to a movie theater? In our increasingly tech-skeptical society, I was sure that VR was a losing bet. And sure enough, sales of VR hardware have been underwhelming despite billions of dollars of investment.
But then I started playing around with Character.ai, which lets you interact with AI-based “characters” — each with its own communication style and personality. Beyond interacting with existing AI characters, you can create your own character by training it on text. You can chat with Donald Trump or Ricky Gervais or Samantha, the AI virtual assistant/girlfriend from the sci-fi movie Her.
Character.ai was co-founded by two AI engineers who left Google to launch their own startup. In an interview with the Washington Post, co-founder Noam Shazeer explained that they were frustrated by Google’s conservative approach to AI: “Let’s build a product now that can help millions and billions of people. Especially in the age of covid, there are just millions of people who are feeling isolated or lonely or need someone to talk to.”
It is tempting to poke fun at Shazeer and anyone who uses Character.ai as a way to “socialize,” but spend just a few minutes














