Futuristic #38 – How LLMs Think
Description
In this episode, Cameron and Steve dive into the rapidly evolving world of AI, discussing the latest advancements and their societal implications. They explore new AI voice features, the potential dangers and benefits of AI companions and agreeable AI personalities, and the philosophical debate around AI sentience and relationships. The conversation touches on AI’s role in business generation, the power of new models like OpenAI’s GPT-4o and Google’s Gemini 2.5, and the ongoing copyright debate surrounding AI training data. They also get into the complexities of how Large Language Models (LLMs) like Anthropic’s Claude actually “think,” the expansion of AI into hardware by companies like LG, Apple’s perceived lag in the AI race, and the future of AI integration in everyday tools like ebook readers. The discussion extends to advancements in open-source robotics, citing Nvidia’s initiatives, and contrasts technological progress and STEM education focus between China (highlighting Huawei) and the US. Finally, they touch on the intriguing and potentially controversial “Network State” concept championed by figures associated with Peter Thiel and Andreessen Horowitz, exploring the idea of tech-driven, independent city-states.
futuristicpod.com
FULL TRANSCRIPT
FUT 38 Audio
[00:00:00]
Cameron: So that was an official new voice from ChatGPT, which came out today, called Monday. And it’s like a depressed goth girl or something, whatever, which is now my official favorite voice. Welcome back, this is Futuristic episode 38, by the way. Steve Sammartino, I dunno if you’ve found this, but, uh, I’ve been using advanced voice with GPT lately, and the voices have sounded increasingly excitable.
I was having a conversation in the car on the way to Kung Fu with GPT about Trump and Greenland and rare earth minerals. And I was saying, so hold on, Greenland is run by Denmark, and Denmark’s a NATO country. So if Trump invades Greenland, [00:01:00] does that invoke Article 5 under the NATO treaty, and then NATO needs to attack the United States? And GPT’s like, yes, that would happen.
They probably would, and it would have to be. And it was all very excitable, and I was like, can you sound less excited? And it would be like, oh, okay, sorry, I’ll bring the tone down a bit. And a minute later it would be talking like this again, all very excitable. Even Fox, sitting in the backseat, was like, can you just calm down a minute? Anyway, but I like this new depressed voice. That’s more my style.
Steve: Call it apathy. And I don’t think enough AIs in modern society are apathetic.
Cameron: It reminds me of Marvin, was it, in The Hitchhiker’s Guide to the Galaxy, the AI robot? He was like, I am so depressed. Brain the size of a planet, and they ask me to pick up a piece of paper. I am so depressed.
Steve: Well, I think that the AI should be able to seamlessly switch [00:02:00] between levels of animation and emotion, right, based on the context of the chat. Because it understands it verbally, with the language, it should be able to translate that in the audio sense, one would think, regardless of the voice that you choose.
Cameron: Yeah, and I was listening to, uh, an interview yesterday between Ezra Klein and Jonathan Haidt, and Ezra Klein was talking about the fact that he’s concerned that a generation of kids are gonna be growing up with AI assistants that are completely agreeable with everything that they say, and that that’s not a good thing.
In the same way that social media hasn’t been a good thing for kids, AI that just agrees with them all the time to make them feel good is not gonna be a good thing. I was talking to Chrissy about it yesterday, and I was saying that I expect, when we get fully realized AI virtual [00:03:00] assistants on the devices that we give to our kids, we will have parental controls where we will be able to set up the AI personality that we want our children to interact with.
One that says: listen, your job isn’t to just agree. Your job is to be a caretaker, an educator; to push back if they say something dangerous or stupid, or something that could be referencing self-harm or could be negative for their psychological or emotional health. You are to act as a therapist slash parental advisor slash tutor slash whatever.
Adults, though, will probably get to choose the AI personality that they want, and I’m already telling GPT in my custom instructions: don’t agree with me on everything. If I say something and it’s factually incorrect, or you think my interpretation of the facts is incorrect, I want you [00:04:00] to tell me. That’s your job.
Push back, argue with me. You know, give me something to think about. But Chrissy said, and she’s probably right, most people won’t. Most people will just choose the AI personality type that agrees with them all the time, ’cause what they want is validation that their ideas and beliefs are true.
What do you think?
Steve: I think the most dangerous tool in the world right now, which builds on this, is AI girlfriends. They are an absolute social disaster in the making. An imaginary girlfriend that you talk to every day, that agrees with everything you say and think, that learns from you, and that has the same business model, which is wanting you to keep coming back, is gonna tell a young teenage boy everything he wants to hear. It’ll eventually be a soft robot that he gets delivered from Amazon and develops a relationship with. This is not good. Falling in...
Cameron: Is it worse [00:05:00] than...

Steve: It’s ter...

Cameron: ...is it worse than having incels running around with AR-15s in the US?
Steve: It’s the same thing with a different product, right? It’s...

Cameron: Yeah, but...

Steve: ...people who don’t have real social interactions. An incel with an AR-15 or an incel with an AI humanoid robot, they’re the same thing, which is: we don’t have real social interactions with people that disagree with us, where we learn social norms, where we interact, where we give and take. It’s the same thing, and they...
Cameron: Well, yeah, except they’re not going to...

Steve: ...a bunch of shot-up people in...

Cameron: Well...

Steve: ...where I can go and buy a gun in Walmart.
Cameron: No, but look, I see the potential for problems, but I also know that loneliness is a huge issue in modern society.
Steve: So that doesn’t solve loneliness, Ca...