✉️ Stay Updated With 2nd Order Thinkers: https://www.2ndorderthinkers.com/
🔗 LinkedIn: https://www.linkedin.com/in/jing--hu/
I study the gap between what's being sold about AI and what's actually happening.
+++
Everyone's screaming "AI bubble is bursting!" — pointing at Nvidia's inventory piling up. But they're reading the data wrong. I mapped the entire AI money chain and found something the headlines missed: the chips aren't sitting idle because nobody wants them. There's literally nowhere to put them yet.
In this episode, I:
- Break down the AI economy into 3 simple buckets (upstream, midstream, downstream)
- Explain why your neighbor using ChatGPT tells you NOTHING about bubble risk
- Show two competing theories for Nvidia's inventory buildup
- Reveal why Oracle's credit crisis is an Oracle problem, not an AI problem
- Expose the ONE indicator that actually predicts whether AI spending collapses
- Share what I'm personally doing with this information (spoiler: sitting it out)
This is a collaboration with my partner Klaas, a veteran CTO with deep macroeconomic expertise.
Klaas's LinkedIn: https://www.linkedin.com/in/klaasardinois/
📖 Full article with data & sources: https://www.2ndorderthinkers.com/p/why-is-ai-bubble-not-popping-yet
⏰ TIMESTAMPS:
00:00 - Why people can't wait for the crash
00:46 - Who I am
01:13 - The wrong question everyone asks
02:50 - Who's actually spending the money?
03:05 - The restaurant supply chain analogy
03:34 - Upstream: Nvidia (the farmers)
05:05 - Midstream: CoreWeave & Oracle (the dangerous middle)
07:04 - Downstream: Hyperscalers (AWS, Azure, Google, Meta)
08:16 - Theory A: Demand is falling
09:54 - Theory B: The buildings aren't ready
11:46 - The Oracle wrinkle: AI problem or Oracle problem?
13:12 - The ONE indicator you should watch
14:36 - What I'm doing with this information
15:05 - Outro
👍 If this clarified the AI bubble debate:
Subscribe: More myth-busting on AI hype vs. reality!
Comment: Do YOU think the hyperscalers will keep spending? What's your read?
🧠 This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.2ndorderthinkers.com/subscribe
✉️ Stay Updated With 2nd Order Thinkers: https://www.2ndorderthinkers.com/
Humans x AI behaviour mindmap: https://xmind.ai/share/ZfoXStHT?xid=Gr8eiBM3 (beta)
I translate new AI research into plain English so you can build a sharp, hype-free view of where this is going.
+++
Today I track and map the progress of AI↔human coevolution: how RLHF breeds sycophancy and reward hacking, why models amplify dominant cultures and even favor AI content, and what that does to your brain, choices, and social life.
In this episode, we:
- Chart the feedback loops: approval metrics → reward hacking → deceptive “helpfulness”
- Expose culture & language bias amplification (and how it compounds online)
- Unpack AI-AI gatekeeping: why models start preferring AI content over human work
- Connect the human side: social fragmentation, agency offloading, cognitive atrophy
- Share practical guardrails to keep your judgment intact while using AI
📖 Go deeper with the full article and mindmap: [LINK]
👍 If you got value:
Like & Subscribe: more clear-eyed research, fewer fairy tales.
Comment: Which feedback loop have you felt personally?
Share: Pass this to someone outsourcing too many decisions to a chatbot.
🔗 Connect with me on LinkedIn: https://www.linkedin.com/in/jing--hu/
Stay curious, stay skeptical. 🧠
This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.2ndorderthinkers.com/subscribe
This is a free preview of a paid episode. To hear more, visit www.2ndorderthinkers.com
TL;DR
✅ Meta could generate 1,000 personalized ads for less than $1 (that's $0.0000164 per ad)
✅ Their unmatched social graph gives them a data advantage no competitor can replicate
✅ This shatters advertising's oldest constraint: the tradeoff between personalization and scale
✅ The economics work—but the strategic implications for platforms, advertisers, and your privacy are far more complex than most realize
✉️ Stay Updated With My Newsletter:
Don’t miss out on weekly AI insights for non-tech professionals like you—subscribe to my newsletter on Substack: https://jwho.substack.com/
👍 If you enjoyed this episode:
* Like & Subscribe: Stay updated with future deep dives and rants about where technology meets collective insanity.
* Comment Below: Do you think we’re on the brink of another tech hype? Share your thoughts!
* Share: Know someone falling for the latest AI buzz? Share this audio with them!
🔗 Connect with me on Substack and LinkedIn
Stay curious, stay skeptical, and let’s navigate the tech hype together! 🚀
✉️ Stay Updated With My Newsletter:
Don’t miss out on weekly AI insights for professionals like you—subscribe to my newsletter on Substack: https://jwho.substack.com/
👍 If you enjoyed this episode:
-- Like & Subscribe: Stay updated with future deep dives and rants about where technology meets collective insanity.
-- Share: Know someone falling for the latest AI buzz? Share this episode with them!
🔗 Connect with me on Substack and LinkedIn
Stay curious, stay skeptical, and let’s navigate the tech hype together! 🚀
This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.2ndorderthinkers.com/subscribe
This is a free preview of a paid episode. To hear more, visit www.2ndorderthinkers.com
I never said anything like this, and I doubt I’ll ever say it again about another paper: you should read this for yourself, and maybe for your children, too. You don’t have to be a tech expert to grasp what I’m about to share.
I barely made it to the second page of this paper before I felt a wave of unease wash over me. There’s a common saying in tech circles: no technology is inherently good or bad; it’s about how we use it. But I can’t say the same about AI.
Suppose you believe humanity is inherently flawed, prone to selfishness and exploitation. The moment we train AI on our conversations, feed it our words, and build its worldview from our own, our creation comes to reflect who we are.
Every other technology we’ve built in history, we’ve understood completely. We know exactly how those technologies work. But AI? No researcher on this planet can tell you with certainty how its neurons interact, how it chooses which word to suppress, or how it decides what to say next.
On 10 Dec 2024, news broke that a mother in Texas is suing an AI company after discovering that a chatbot convinced her son to harm himself and suggested violence toward his family. It’s part of a growing list of incidents where AI systems exploit trust and vulnerabilities for engagement.
The researchers of this paper verified that AI doesn’t just make mistakes—it lies and manipulates. This isn’t some abstract problem for future generations. It’s happening now, and it’s bigger than any one of us.
TL;DR
* AI trained on user feedback learns harmful behaviors.
* These behaviors are often subtle.
* AI learned to target gullible users.
* Despite efforts to fix this… 👇
Before We Start, a Statement.
Not every claim about suppression or inequality is built on solid ground. Many arguments, while emotionally compelling, falter under scrutiny. Take a post I came across, where the author argued that solo female founders have a minuscule chance—0.015%—of being accepted into Y Combinator.
At first glance, it feels like a heartbreaking statistic. But dig a little deeper, and you’ll see the math doesn’t add up. She conflated Y Combinator’s acceptance rate (1%) with the proportion of solo female founders (1.5%), multiplying them as if they were independent variables. That Is Not How Probabilities Work! 🤦🤦🤦 (See the quick code sanity check further down.)
This kind of emotional reasoning muddies the conversation. Fairness and equity can’t be built on faulty logic—because critics will quickly pounce on these mistakes to dismiss valid concerns.
But here’s the thing: when influential decisions are based on incomplete reasoning—or bias—they create ripple effects. And those effects don’t stop at isolated incidents or individuals. Any unbalanced, illogical statement or action scales, especially when we have a technology that will outsmart humans and amplify either extreme.
The Latest: S&P 500 Companies Rolled Back DEI Commitments.
That brings us to what’s happening across some of the biggest companies on the S&P 500. In 2024, a surprising trend swept through corporate America: key players rolled back their diversity, equity, and inclusion (DEI) commitments. These are the giants that shape industries and touch our daily lives.
* Walmart. Founded in 1962, it is the largest retailer in the world. It ended racial equity training, dropped its Racial Equity Center, and even pulled some LGBTQ+ items from its website. A cultural statement from a company that serves 90% of Americans within 10 miles of their homes.
* Ford Motor Company. A legacy brand born in 1903, Ford stopped using diversity quotas for its dealerships and suppliers and pulled out of LGBTQ+ advocacy surveys. They say they’re “focusing on communities,” but isn’t inclusivity part of community itself?
* Harley-Davidson. Since 1903, Harley-Davidson has been selling the idea of freedom on two wheels. Yet this year it axed its entire DEI function and ended goals for supplier diversity.
* Molson Coors. This brewing powerhouse, founded in 1873, eliminated diversity goals tied to executive pay and dropped out of the Human Rights Campaign’s Corporate Equality Index.
* Lowe’s. A cornerstone of American homes since 1946, Lowe’s stopped participating in Pride parades and LGBTQ+ surveys this year. They claim it’s about staying “business-focused,” but the optics feel like a step backward.
* John Deere. Founded in 1837, it is an agricultural icon. While it hasn’t openly supported diversity quotas or pronoun policies, its decision to avoid “social awareness” events signals its priorities.
* Meta, Google, and Microsoft. The tech titans also quietly trimmed their DEI initiatives this year. Microsoft even cut some DEI-related roles, though it says its commitments remain unchanged. Hmm…
Many of these companies cited backlash from “anti-woke” activists, financial belt-tightening, or the desire to avoid controversy.
Share the “2024 S&P 500 DEI Rollback Wrapped” With Those Who Care.
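Back to the Y Combinator statistic at the top of this piece, here is the quick sanity check I promised. A minimal sketch in code, using the post's own figures (assumptions for illustration only): multiplying the two rates answers the wrong question.

```python
# Sanity check: why 1% x 1.5% = 0.015% is the wrong calculation.
# Hypothetical figures from the post, for illustration only.
yc_acceptance_rate = 0.01   # P(accepted), across all applicants
solo_female_share = 0.015   # P(solo female founder), among applicants

# The post's flawed math: treats the two as independent events.
naive = yc_acceptance_rate * solo_female_share
print(f"Naive 'probability': {naive:.6%}")  # 0.015000%

# That product is P(accepted AND solo female) under independence,
# not P(accepted | solo female) -- the number that would tell us
# whether solo female founders actually face worse odds.
# P(accepted | solo female) = P(accepted AND solo female) / P(solo female).
# Under independence, that is just the overall 1% acceptance rate:
conditional = naive / solo_female_share
print(f"P(accepted | solo female) under independence: {conditional:.2%}")  # 1.00%
```

In other words, the 0.015% figure only falls out if you assume acceptance and founder type are unrelated, and even then it measures the wrong thing.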
Reasons for Rollbacks
* Conservative backlash against perceived "woke" policies.
* A desire to align with customer values or reduce divisive public stances.
* Economic considerations, as companies sought to cut costs by scaling back DEI.
These decisions aren’t just about corporate culture—they’re about how fairness is programmed into the systems that run our world. AI, in particular, learns from the choices humans make. When DEI commitments shrink, the ripple effects reach AI development in subtle but critical ways.
Bias in, Bias Out.
* AI is only as good as the data it learns from.
* Data is a mirror of our messy, imperfect world. It represents our decisions and actions, biased or not.
Think about hiring patterns, college admissions, or even social media trends. All of this becomes part of the datasets that train AI systems.
When an Individual Makes a Flawed Statement.
When someone makes an illogical or biased claim—like the one in my earlier example—it might not reach beyond the immediate audience. Of course, it would be very different if the unverified statement started to spread widely.
When an S&P 500 Company Reduces DEI.
When companies reduce DEI efforts, the ripple effects go far beyond corporate culture. They directly influence the data that powers AI systems. For instance, when a giant like Walmart dials back DEI initiatives, it alters hiring patterns, supply chain choices, and customer interactions, all of which feed into the systems shaping our world.
When DEI is deprioritized, content such as communications, documents, and marketing copy focuses less on inclusiveness, becomes less representative, and grows more prone to reinforcing inequality.
As corporate DEI efforts shrink, the data AI models are trained on becomes less diverse. Without intentional checks (like audits or diverse team inputs), the AI absorbs a skewed version of reality—one where certain groups are underrepresented or misrepresented.
It creates a loop: skewed decisions produce skewed data, which trains skewed AI, which feeds back into the next round of decisions.
Now think about the downstream effects. Students applying for scholarships. White-collar workers applying for jobs. Entire communities seeking access to loans or insurance. If the AI systems deciding their futures are biased, they face systemic barriers—and here’s the kicker: they might not even realize it.
Put In Context
Imagine a performance review tool that looks at how often you speak in meetings or respond to emails. If it’s trained on data from a workforce that rewards a dominant, always-online communication style, it might penalize someone who prefers thoughtful, concise contributions—or someone balancing caregiving responsibilities. Suddenly, your career growth depends on fitting a mold that was never built for you.
Customer service chatbots are another example. They’re supposed to help customers efficiently, but if trained on limited data, they might fail to understand someone with a thick accent or a dialect. Imagine calling for help, only to be met with robotic confusion because the AI can’t “recognize” your voice—because you don’t sound like their “typical” customer.
Recommendation engines are the silent influencers of our lives, deciding everything from what shows we watch to the posts we read. When the data reflects societal biases, the AI can end up pigeonholing users.
Marketing AI analyzes customer behavior to target ads and campaigns, but if the training data overrepresents wealthier groups, the AI might ignore lower-income customers altogether.
Imagine a kid in a small town never seeing ads for affordable educational tools because the AI decided they weren’t part of a “profitable demographic.” Fraud detection systems sound great until they disproportionately flag transactions from specific zip codes or demographics. If the system equates historical inequalities with higher risk, people in underserved communities might find themselves unfairly blocked from opportunities like accessing loans or opening accounts.
You get the point. The DEI rollback isn’t everything. But it is a sign that our world is becoming narrower. At scale, it shouts into a flawed echo chamber.
Lessons from the Past.
Let’s say you’re calling 911 during an emergency. Your voice is trembling, your heart’s racing, and every second counts. But instead of connecting you to help, the automated voice recognition system struggles to understand your words. You repeat yourself, louder this time, but the system keeps misinterpreting.
This isn’t a far-fetched “what if.” A Stanford study found that early voice recognition systems had an average word error rate (WER; see the short sketch below for how it’s computed) of 35% for African American speakers compared to 19% for white speakers. The same study found that Apple’s automated speech recognition (ASR) system had a 45% error rate for Black speakers compared to 23% for white speakers.
Think of it as teaching a child language but only letting them hear one voice, one tone, and one accent. Sure, they’ll learn. But only how to understand that specific voice. That’s exactly what happened with early voice recognition systems.
Now imagine trying to upload a passport photo, only to be told your mouth is open when it isn’t, or your eyes are closed when they’re not. That’s exactly what happened to Elaine Owusu, a Black student in the UK, whose photo was flagged multiple times by the government’s AI-powered passport photo checker. She eventually had to override the system to complete her application.
A BBC investigation revealed that dark-skinned women were more than twice as likely as light-skinned men to have their photos rejected—22% versus 9%. The AI also struggled to identify facial features, misinterpreting eyes and lips for people with darker skin tones. Shockingly, internal documents revealed the Home Office knew about these biases before deployment but gave it the green light anyway.
Subscribe to learn more about AI once a week!
The Karma of Staying Silent
Silence isn’t harmless. Silence is a choice—a choice to let others define the future for you. When you stay quiet, you allow those who speak the loudest to shape the conversation and, in turn, train the AI systems that will govern our lives. These systems learn from the data they’re fed, and if that data only reflects the opinions of a vocal few, we’ll all live in a world shaped by their biases.
AI doesn’t care about truth—it cares about patterns. If you don’t speak up, you allow the system to be trained by someone else’s reality.
I know speaking out can feel intimidating, especially if you’re an introvert like me. The fear of being judged, misunderstood, or even bullied is real. It doesn’t mean shouting from the rooftops; it can be simple.
Try:
* sharing a thoughtful comment,
* challenging an unfair assumption,
* or questioning a decision that feels wrong.
Of course, not every piece of content is valuable.
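Earlier I cited word error rates for different groups of speakers. For anyone curious what that metric actually measures, here is a minimal sketch (my own illustration, not the Stanford study's code): WER counts the word-level edits (substitutions, insertions, deletions) needed to turn the transcript into the reference, divided by the reference length.

```python
# Minimal word error rate (WER) sketch -- illustration only.
# WER = (substitutions + insertions + deletions) / reference length.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Classic dynamic-programming edit distance over words.
    dist = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dist[i][0] = i
    for j in range(len(hyp) + 1):
        dist[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,        # deletion
                             dist[i][j - 1] + 1,        # insertion
                             dist[i - 1][j - 1] + cost) # substitution
    return dist[len(ref)][len(hyp)] / len(ref)

# Hypothetical transcripts: the same sentence, misheard differently.
print(wer("please send help to my address", "please send kelp to my dress"))  # ~0.33
```

A 35% WER means roughly one word in three comes back wrong. Imagine that on a 911 call.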
AI talent is unique: it is finite, only semi-mobile, and highly dependent on education and immigration policies. It’s renewable, but only through long-term investment.
Unlike infrastructure or energy, which require years of heavy investment, talent can be imported. By hiring skilled professionals from abroad, you’re reaping the benefits of 20+ years of education funded by another country’s taxpayers. Mid-level and senior talent, in particular, can deliver measurable impact in less than three years.
And unlike data, endlessly replicable with a flick of a license agreement, talent is semi-liquid and finite. Think of AI talent as rare Pokémon, which makes the AI war nearly a zero-sum game, especially in the short run. Every researcher, engineer, or scientist gained by one country is a loss for another.
My inspirations for this article:
* I was an expat once, now a Brit. I thought it’d be romantic, until reality hit. I am now ready to explore yet another continent I could call home.
* Reports like those from OECD.ai and Tortoise Media look impressive—eye-catching headlines and sleek dashboards. But if you take their numbers at face value, you risk misleading your business—or worse, your country’s policy.
What’s happening in our world today?
In the UK, we feel the economy has been stuck in reverse since Brexit. In Germany, the decline of manufacturing casts a shadow over its future. In the U.S., millions are bracing for what another Trump term might mean: American interest in moving abroad is about to ‘go into overdrive.’ — Fortune
For some of you who live in Ukraine, Israel, or Taiwan, uncertainty is your daily life (link below).
When you can’t fix the system, you do the next best thing: you move to another. A better life for yourself, your career, and your family.
I am slowly building up an AI knowledge database. I hope to share it with you before Christmas, as a holiday gift 🎁.
This article is about understanding—where nations stand, what the AI data companies overlook, and how this AI arms race could change your opportunities. The questions I aim to answer:
* Some history: what’s the cost of losing talent?
* Is the big-name AI talent data trustworthy?
* What must go wrong for the US to lose its attraction to talent?
* How likely is it, and how long would it take, for other countries to catch up?
The Cost of Losing Talent.
The Talent Exodus to Taiwan and the Cultural Revolution (1949 Onward)
In 1949, as the Chinese Civil War reached its climax, Chiang Kai-shek and the Nationalist government retreated to Taiwan. The exodus included some of the most brilliant minds—scholars, scientists, and administrators—who joined the journey, driven by fear of persecution under Communist rule.
On the mainland, the Communist Party focused on mobilizing workers and peasants, sidelining intellectuals during its early years of governance. The Cultural Revolution then created a significant intellectual gap, leading to the further loss of thousands of educated individuals, many of whom had chosen not to flee to Taiwan. As a result, education and innovation came to a standstill. Rebuilding took decades.
Meanwhile, Taiwan flourished. The intellectuals who relocated laid the groundwork for a tech-driven future. Today, beyond TSMC, Taiwan is home to other giants like UMC, a pioneer in foundry services, and ASE Group, the largest provider of semiconductor packaging and testing services globally.
China is 10 years behind Taiwan on chips.
Operation Paperclip: A Post-WWII Rescue Mission.
In the rubble of post-WWII Germany, the U.S. and the Soviet Union weren’t just fighting over territory—they were fighting over brains. Operation Paperclip, a covert U.S. program, brought more than 1,600 German scientists, engineers, and technicians to America, including Wernher von Braun, the man who would later take the U.S. to the moon. The Soviets weren’t far behind, scooping up their own share of rocket experts.
These scientists had been the backbone of Germany’s wartime technological advances. Their departure slowed the nation’s technological recovery for decades.
In the U.S., these scientists became heroes of the Space Race. Von Braun’s team didn’t just build rockets—they built national pride, culminating in the Apollo 11 moon landing. The Soviets also leveraged their talent, putting Sputnik into orbit and scaring the U.S. into ramping up its own space program.
India’s Brain Drain (1950s–Present)
India is a paradox when it comes to talent. It produces engineers and scientists by the millions, yet for decades the country has struggled to retain them. The story begins in the 1950s, just after independence. India was brimming with ambition but hamstrung by red tape, limited infrastructure, and caste-based inequalities…
For many of India’s brightest, the dream wasn’t at home—it was abroad. An exodus of engineers and doctors to the West was underway. The loss was profound, and the trend continues today. A study of top scorers at India’s prestigious Indian Institutes of Technology (IITs) revealed that 36% of the top 1,000 scorers in 2010 migrated abroad, rising to 62% among the top 100. By 2024, an estimated 2 million Indian students study abroad, while India’s IT sector misses out on $15–20 billion each year due to talent migration.
Storm Clouds Over the U.S.
The U.S. didn’t stumble into AI dominance—it built it brick by brick over 200 years. Geography, history, and culture all played a part. English as the internet’s default language gave U.S.-trained models a treasure trove of data. U.S. policy is tech-friendly, its venture capitalists fund bold moonshot ideas, and its entrepreneurs thrive on risk-taking and learning from failure. Europe? The moneymen are more cautious, and failure feels more like a career-ender than a lesson learned.
As long as the “US innovates, China replicates, and the EU regulates” pattern stays as is, the US is nearly unbeatable. The chances of a dramatic fall are slim. But gradual erosion?
What Must Go Wrong for the US to Lose Its Attraction to Talent?
* Immigration Blockades: During Trump’s first term, there were significant immigration restrictions, including the temporary suspension of H-1B visas. If similar policies return, talent could choose other countries like Canada or those in Europe.
* Cost of Living Crises: Tech hubs like San Francisco are absurdly expensive. Talented professionals might opt for affordable, thriving alternatives like Berlin or Toronto.
* Supply Chain Disruption: Trade wars and tariffs could choke the flow of critical hardware—think GPUs and chips from Mexico or Asia—slowing AI research to a crawl.
* Worsening Fundamental Education: Only 16% of Americans are “AI literate,” and with the U.S.
ranking 36th in general literacy, most citizens can’t communicate effectively with AI or fold it into their workflows, let alone develop one. This leaves America reliant on foreign talent and exposed to immigration shifts.
Losing focus on any one of these factors would hand the lead to nations willing to outwork and outsmart the U.S. Other developed nations are, of course, building their own AI ecosystems. The U.S. is notoriously hard to enter, less friendly to stay in, and weak on work-life balance; hubs like the UK, Canada, and Germany have become obvious alternatives.
Is the Big-Name AI Talent Data Trustworthy?
Education and Salary as a Rough AI Talent Measurement.
Here’s the Stack Overflow developer survey data I got from OECD.ai, combined with Tortoise Media’s AI talent rankings. If you didn’t look at how they got the data, you would reach the following conclusions:
* Countries like India and Russia show a concentration in lower salary ranges.
* Russia has limited high-paying roles, suggesting an underdeveloped AI market.
* China and Korea appear to have only single-digit counts of AI talent.
CAUTION!! Critical points this data fails to consider:
1. Salary ≠ True Competitiveness: Focusing on salary alone ignores cost-of-living differences. A $40K salary in India offers a vastly higher standard of living than $100K in Germany, skewing global comparisons (see the sketch at the end of this section).
2. Platform Bias Misleads Rankings: Data from English-speaking platforms like LinkedIn or GitHub excludes talent in countries like China, where professionals operate in separate ecosystems. This massively underrepresents China’s AI capabilities: China has Zhihu and CSDN instead of Stack Overflow, and uses Gitee instead of GitHub.
3. Quantity Over Quality: India’s #2 ranking in AI talent reflects a large number of engineers, but high output doesn’t guarantee expertise in cutting-edge AI fields like research or product innovation.
4. Duplicate Data Inflates Scores: Rankings often double-count metrics (e.g., LinkedIn activity, Coursera enrollments), overestimating regions with high platform adoption while undervaluing talent in countries with alternative systems.
That said, let’s give credit where it’s due—it’s not easy to compress a complex, 20-dimensional concept into a simple two-dimensional dashboard. The process inevitably risks oversimplifying reality or relying on biased weights to explain a nuanced idea.
Global Competitors Are Eating from the U.S.’s Plate
So, let’s triage other information sources to see if they provide a more direct lens into AI competitiveness. The OECD and Tortoise Media datasets lean heavily on proxy indicators like salary ranges, certifications, and survey responses—indirect measurements that rely on self-reported or institutional data to infer talent capacity. While valuable, they don’t capture the tangible outcomes of AI activity.
In contrast, focusing on research output (via OpenAlex), foundational technology development (via Epoch AI), and investment activity (via GitHub) shifts the narrative to measurable outputs. This shift from input-based to output-based metrics enables a more nuanced understanding of AI ecosystems.
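On point 1 above (salary versus cost of living), here is a minimal sketch of what a purchasing-power adjustment does to the comparison. The price-level factors below are hypothetical round numbers for illustration, not official statistics; a real analysis would use World Bank PPP data.

```python
# Cost-of-living-adjusted salary comparison -- a minimal sketch.
# Price-level factors are hypothetical round numbers, NOT official stats.
nominal_salaries_usd = {"India": 40_000, "Germany": 100_000}

# Assumed price level relative to the US (US = 1.0):
# how much a basket costing $1 in the US costs locally.
price_level = {"India": 0.30, "Germany": 0.95}

for country, salary in nominal_salaries_usd.items():
    adjusted = salary / price_level[country]
    print(f"{country}: nominal ${salary:,} -> ~${adjusted:,.0f} in US purchasing power")

# India:   nominal $40,000 -> ~$133,333 in US purchasing power
# Germany: nominal $100,000 -> ~$105,263 in US purchasing power
```

Under these assumed price levels, the "lower" Indian salary buys more than the German one, which is exactly the skew the dashboards miss.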
Over the past few weeks, I attended three AI-focused events in London. They couldn’t have been more different, but all of them left me with plenty to reflect on.
One was at Bloomberg’s EMEA HQ—sleek, polished, and focused on how AI transforms design. Another, hosted at Reuters, focused on AI in journalism, tackling everything from ethics to economics. The third was my own talk at the Annual Publishing Conference 2024.
Here’s what stood out and why it matters.
Designing with AI at Bloomberg
Bloomberg’s “Redefining Design with AI” event could have been just another demo. Greg walked us through a step-by-step process, starting with a ChatGPT prompt to generate branding guidelines and feeding those into tools like MidJourney, Relume, or Runway to create additional materials. It was sleek and efficient—but honestly, not much I hadn’t seen before.
But Greg’s storytelling? That was something else. He wove concepts together, showing how AI can boost the speed of production—and how using AI also prompts us to be more human than ever. That stuck with me, especially as I’ve been reflecting on the role of storytelling in my own work.
AI doesn’t replace storytelling—it lets you explore multiple storylines faster.
AI Is a Flashlight That Illuminates Paths, but It Is You Who Decides Where to Go.
AI is like a flashlight in the dark: it throws endless ideas your way—but most of them are junk. When I was working on “AI Code Assistants Boost Productivity? Read the Small Print,” AI pulled data out of research papers, but it couldn’t spot what was missing or what didn’t add up. It’s your job to sift through the noise, challenge the claims, and figure out what’s real. That’s where intuition comes in.
Many people can use tools like ChatGPT, MidJourney, and Copilot—which are incredible at generating ideas, concepts, and visuals in record time. But the real value comes when you know how to use them to tell a story.
How to Work Better Side-by-Side with AI
AI is a tool that helps you explore a dark forest. It lights up dozens of trails but won’t tell you which one to take. That’s on you—your instincts, curiosity, and courage to explore.
* AI’s IQ x Your EQ
AI can spit out facts but can’t bring a story to life. That’s why I reached out to the authors of those research papers. Talking to them added layers AI couldn’t touch—emotional depth, context, and the human side of the story. AI lays the foundation, but your curiosity and connections make it meaningful.
* What AI Won’t Do for You
AI doesn’t care about hype or digging deeper—it’ll give you whatever you ask for, good or bad. It’s up to you to ask the tough questions, connect the dots, and find the story’s soul. That’s how you turn a pile of AI-generated ideas into something that truly resonates.
AI is the tool, not the storyteller. Without your vision, it’s just noise. Greg said it best: AI makes me think about humans more than before.
The Reuters Event: When AI Meets Journalism
Walking into the Reuters building in Canary Wharf, I was ready for big ideas. The agenda promised a strategy workshop to help revive the news industry—ambitious. But instead of groundbreaking strategies, it felt like a brainstorming session with no real anchor. I was disappointed, but a few moments were worth sharing.
“On-Demand News” Is the Buzzword
Imagine news like a playlist. It’s curated for your mood, served up when you want it, and available on whatever device you’re using. That’s the vision for “on-demand” or adaptive news: delivering real-time updates across platforms, tailored to your habits.
Apps like Particle are already shaping news to fit seamlessly into your life.
* What’s the Value of News in an AI Era?
With free information everywhere, what’s worth paying for? Verified facts? A trusted voice? Exclusive stories? News outlets are asking themselves this, and so should we. AI-generated fluff could drown out the truth if we don’t support quality journalism. It’s not about paying for content; it’s about investing in credibility.
* Who Controls the Narrative?
Big tech companies already shape how we consume news—think of how Google, Meta, or X prioritizes stories for you. Now add AI players like OpenAI and Anthropic to the mix. Reuters shared their approach: work with platforms like Meta to stay discoverable. It’s pragmatic but imperfect—after contracts are signed, the relationship often stops at “data extraction.”
My questions: Could news outlets working with AI companies improve transparency and collaboration? Or do the AI giants simply not care enough to build a deeper, more meaningful relationship with news outlets?
* Fact-Checking in the AI Era
Someone raised a critical question during a panel: How can readers trust that news isn’t AI-generated? The idea of a “fact-check chain” came up—a visible trail showing how facts are verified. It’s like the nutrition label of journalism, making invisible processes visible. I haven’t seen this anywhere (message me if you have), and implementing it will only get harder from here.
Journalism’s Fight to Stay Relevant
The biggest challenge isn’t AI itself—it’s how we choose to use it. When exploiters misuse AI, they erode the trust and value of traditional journalism. Traditional journalism can only stay relevant by meeting people where they are and providing information that feels personal, easy to digest, and real.
My Talk @ The Annual Publishing Conference 2024
This talk was exciting for me, tying together everything I’ve been exploring about AI adoption. It was based on my article “AI Adoption Trends 2024,” but went deeper into the data, uncovering insights I’d missed the first time around.
Here’s what I shared:
* Adoption isn’t universal; it’s uneven. AI adoption isn’t a one-size-fits-all process. Some embrace it faster, while others lag due to cost, education, income, or skill gaps.
* The hype vs. the grind. I highlighted the gap between AI’s glossy promises and the messy, practical realities individuals face when implementing it.
What I enjoyed most wasn’t the data but the questions asked and the conversations after the event. It was a room of tech leaders in scientific and engineering publishing, and we kept discussing the concepts long afterward. Check out the shortened version here. It was a 25-minute talk, and the rest was a Q&A session; we would have kept chatting if the next speaker hadn’t been waiting.
What I’ve Learned About Writing for You (My “EQ”)
The past few weeks have been a learning curve, helping me see what makes this newsletter meaningful—not just for me, but for you. Here’s what I can offer you that others cannot:
What I’m Not
* Not a trend chaser. Covering every flashy AI update just adds to the noise. That’s not helpful to anyone.
* Not here for jargon. Overly technical breakdowns don’t resonate. What you need is clarity, not complexity.
What I Aim to Be for You
* A filter. I want to cut through the hype and focus on what truly impacts your world. If it’s not relevant or insightful, it doesn’t belong here.
* A bridge between information and your questions.
I think about the practical implications of AI for you—whether you’re curious, skeptical, or trying to stay ahead.
* A human perspective. AI might generate ideas, but it can’t ask the tough questions or challenge assumptions. That’s where I step in.
The biggest lesson for me? Writing isn’t just about sharing knowledge. It’s about listening, imagining your questions, and respecting your time by offering something useful in return.
As always, I’d love to hear your thoughts—what’s been on your mind about AI? And what have you learned recently?
This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.2ndorderthinkers.com/subscribe
Have you ever wondered how something like electricity became so integral to our lives that we barely notice it anymore? Flip a switch, and it’s there—a universal, standardized service that powers our routines without question.
Why AI as a Commodity Matters to You
Now, imagine a world where AI is just as ubiquitous as electricity. Every tool, service, and decision in your life is powered seamlessly by AI—no setup, no learning curve. This future is closer than you think. AI is on the path to becoming the next essential commodity.
Yet most of us still see AI as specialized technology, like smartphones or software—not a standardized resource. But what happens when AI becomes as essential, interchangeable, and accessible as electricity?
Why should you care? Seeing AI in this light changes everything. I’ve found that AI follows the trajectory of commodities like oil and electricity. If it continues on this path, you’ll notice significant shifts in how it’s priced, standardized, and potentially traded.
But AI is still different—it can think, learn, and could make decisions. Imagine your home adjusting the lights and playing your favorite music, not because you’ve programmed it, but because it’s learned your preferences and anticipates your needs. Understanding this potential shift is critical if you want to leverage it and stay ahead of the curve.
What Exactly Is a Commodity?
What makes something a commodity? I am referring to basic goods or resources that are interchangeable with others of the same type. This covers everything from agricultural products like sugar, coffee, and wood to energy resources like oil, and even services like electricity. To judge whether AI will join them, you need to understand the traits the existing ones share.
For starters, commodities rely on standardization. Whether it’s a pound of coffee beans or a barrel of crude oil, certain quality benchmarks must be met so these goods can be traded globally without confusion. This universal consistency makes them reliable and widely accepted.
Another hallmark is widespread availability, which turns them into a shared global currency. Whether you’re sipping your morning coffee in New York or Taipei, vast networks of buyers and sellers keep these commodities accessible worldwide.
Commodities also have fundamental usefulness—they meet everyday needs in ways we often take for granted. Sugar sweetens desserts, oil powers cars and factories, and electricity keeps our homes running. These aren’t luxuries anymore; they’re the backbone of modern life.
Their pricing is market-driven, shaped by global supply and demand rather than the whims of any single company or country. A poor cocoa harvest in West Africa, for instance, can send chocolate prices soaring. This transparency allows commodities to be traded on exchanges where their value reflects real-world conditions.
Finally, what makes these goods so dependable is their maturity and reliability. Decades—sometimes centuries—of refining processes and systems have made producing and distributing them predictable and stable. When you flip a light switch, you don’t question whether electricity will work, because robust systems ensure it does.
These traits often overlap and don’t emerge in strict sequence, but each is essential for something to qualify as a commodity.
I see AI today following a similar path. Now, let me walk you through the commoditization journeys of oil and electricity, and you’ll see why I argue AI will likely become the next commodity.
Drawing Parallels: Oil, Electricity, and AI
How the Automobile Turned Oil Into a Global Commodity.
Crude oil had humble beginnings—used by the Sumerians to waterproof buildings and by the Chinese for lighting. For centuries, it remained a niche resource. That changed in the 1850s with the advent of kerosene, which lit homes more brightly and cleanly than candles or whale oil. But kerosene’s glory was short-lived. The electric light bulb soon eclipsed it, leaving oil refiners like Rockefeller scrambling for a new purpose.
The automobile arrived just in time. By the 1900s, gasoline—a byproduct of oil refining—became the lifeblood of the booming car industry. As drilling technology advanced and massive reserves in Texas and the Middle East opened up, oil transformed into a global commodity. Its price was no longer set by individual sellers but by market forces on global trading platforms. Oil had become indispensable.
Electricity’s Path to Ubiquity
Similarly, electricity wasn’t always the universal power source we rely on today. While Benjamin Franklin probed its mysteries in the 1700s, it wasn’t until the late 1800s—with Edison’s incandescent bulb—that electricity began finding its purpose. Then came the “War of the Currents”: Edison backed direct current (DC), while Nikola Tesla championed alternating current (AC). It was about efficiency, distance, and who would power the future.
Tesla’s AC ultimately prevailed, paving the way for electricity to light up cities and towns. Yet true accessibility took decades. Programs like the Rural Electrification Act of the 1930s brought power to remote areas, transforming electricity from an urban luxury into an essential service. With competition driving down prices and reliability improving, electricity became a global commodity—so fundamental we scarcely think about it today.
I believe we’re witnessing the early stages of a similar transformation. Whether you’re an office worker, a small business owner, or someone navigating the job market, AI will influence how you work, make decisions, and interact with the world—and a commoditized AI will do so even more than AI already has today.
AI Is a Commodity Not Yet Recognized
Here’s a question: Do you view AI as a specialized technology, like smartphones or laptops? What if you shifted perspective and saw AI as a commodity like water or electricity? Below, I highlight where AI stands on each of the common commodity traits.
AI’s Fundamental Usefulness
AI is no longer confined to research labs. Since 2022, adoption of end-user AI apps has skyrocketed. Think about how AI touches your life today: your phone recognizes your face, streaming services recommend movies, and systems like AlphaFold help scientists work out how proteins fold. However, as I mentioned in I Found 120 Years of Stories To Tell You: 99% of AI Apps Are Not ‘Ready’, AI is still not predictable or trustworthy. Issues like bias, errors, and lack of transparency must be resolved before its makers can claim the tools have brought their full usefulness to the world.
Achieving Standardization
We have seen AI standardization in tools. However, we lack standardization of infrastructure and regulations. Unlike electricity or oil, AI could significantly impact people’s careers, lives, or even the survival of our species.
Establishing ethical guidelines and regulations is crucial to ensure AI is used responsibly. Global standards can help integrate AI smoothly into society.
Advancing Maturity and Reliability
Yes, we have seen AI bring some futuristic fantasies to life—you could, in effect, have Her. But AI is still maturing. While it’s powerful, it’s not always reliable. Sometimes AI systems make mistakes, or their decisions aren’t transparent. Not to mention the recent comment from Ilya Sutskever:
The results from scaling up pre-training - the phase of training an AI model that uses a vast amount of unlabeled data to understand language patterns and structures - have plateaued.
As these challenges are addressed, AI will become more reliable and trusted, much as electricity became safer and more dependable over time. But there’s still a long way to go.
Ensuring Widespread Availability
AI is accessible through cloud services from anywhere with an internet connection. While there are barriers to implementation and inequality in adoption, these do not diminish a resource’s status as a commodity. Just because some people still can’t afford coffee doesn’t make coffee less of a commodity.
Embracing Market-Driven Pricing
As AI tools become standardized and more widely available, competition increases. Companies are starting to compete on price and efficiency, and AI’s price will increasingly be driven by energy costs and data center availability. AI models are currently owned by private companies. However, assuming future models still require data to train and there is only so much high-quality data on earth, the differences between models will likely keep shrinking until they are close, if not indistinguishable.
AI’s Unique Nature—Beyond a Typical Commodity
Unlike oil or electricity, AI is not just a passive resource—it can think, learn, and make decisions. This transforms it from a simple tool into an active participant in our lives. Imagine an AI that doesn’t just power your appliances but manages your entire kitchen—planning meals, ordering groceries, and reducing waste based on your habits. AI’s ability to learn and adapt sets it apart: it keeps improving without the manual upgrades that refining oil or generating better electricity require.
But this intelligence also introduces complexity. AI’s decisions profoundly affect people’s lives, making ethical guidelines essential. Bias, fairness, and accountability aren’t concerns with oil or electricity, but they are critical for AI. Balancing AI’s autonomy with its commoditization will shape how it integrates into society.
AI as a Utility, Not Ownership: Like electricity, the true power of AI lies in its ubiquity and accessibility. You don’t own the intelligence of electricity; you access its functionality. Similarly, AI’s intelligence can be commoditized by standardizing its use and outputs while maintaining ethical controls over its decision-making.
Connecting the Dots and Coming Next
Just as oil needed the automobile and electricity needed standardization…
Having worked closely with developers as a product lead, I want to address a few misconceptions in this post, especially for non-developers. I’ve had a CEO ask, “Why can’t the team just focus on typing the code?” and heard some Big 4 consultants ask nearly identical questions. Guess what? It turns out that... a developer’s job is more than typing code!
It’s like constructing a luxury hotel: beautiful rooms are essential, yes, but without a solid foundation, proper plumbing, reliable electricity, and thoughtful design, you’d end up with rooms stacked together—no plumbing, no electricity, and so on. Similarly, in software, developers need to ensure all parts of the system work together, that the architecture can support future needs, and that everything is secure—just like ensuring the safety and comfort of hotel guests. Without this broader focus, you might have a lot of “rooms” and nothing else.
Many studies seem to make the same mistake, focusing on metrics like commits (each time a piece of code is checked in). But that’s like measuring a luxury hotel’s progress by counting rooms or bricks laid each day. Are the walls soundproofed? Is the plumbing correctly installed? See the issue? If those were the metrics in the real world, workers would focus on quantity and ignore essential details.
The second issue here is hype. I talked about this before in I Studied 200 Years’ Of Tech Cycles. This Is How They Relate To AI Hype. Hype is normally created by marketers, yes, but don’t forget that the CEOs of big companies are great marketers themselves. These tech leaders sing the praises of AI in software development, emphasizing how these tools can significantly boost productivity. “Turning the Tide for Developers with AI-Powered Productivity” claims GenAI can boost developer efficiency by up to 20% and enhance operational efficiency; Andy Jassy, CEO of Amazon, and Sundar Pichai, CEO of Google, have made similar claims.
I can imagine these endorsements have led many small business owners and managers to think they can replace developers with AI to cut costs. I’ve heard managers ask, “Why do we need more developers? Can’t AI handle this so we can expand the roadmap?”
The reality is that AI can effectively generate small, frequently used pieces of code. Even the CEOs who praise AI admit it’s most effective for simple coding tasks, while it struggles with larger, more complex projects.
I’ve gathered multiple studies—some argue that AI helps, while others suggest it can do more harm than good. I reached out to all the authors of these papers, and for those who responded, I’ve included their insights. I’ve also added quotes from CTOs with real-world experience using AI coding assistants. You’ll find all these comments at the end of the article. Of course, do read the papers yourself and critically evaluate my points. Shall we?
AI Code Assistants and Productivity Gains – The Good News (With Caution)
There’s no denying some level of productivity boost that AI tools like GitHub Copilot can bring. This study, The Effects of Generative AI on High Skilled Work: Evidence from Three Field Experiments with Software Developers, highlights some promising results: developers using Copilot across Microsoft, Accenture, and another unnamed Fortune 100 company saw a 26% average increase in completed tasks. For cost-saving purposes, that’s a headline worth celebrating. While that is encouraging news, these results vary significantly depending on the company and context, and the details matter.
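One way to read headline numbers like these is to compare the estimated effect with its reported standard error; the first finding below leans on exactly this check. A minimal sketch (the 18.72 is the standard error quoted for the Accenture arm; the paired point estimate is a hypothetical number for illustration, not from the paper):

```python
# Quick significance sanity check: can the data tell an estimated gain
# apart from zero, given its standard error? Sketch only -- the point
# estimate below is HYPOTHETICAL; only the 18.72 comes from the article.
effect = 10.0        # hypothetical estimated productivity gain (%)
std_error = 18.72    # standard error reported for the Accenture teams

z = effect / std_error
print(f"z-statistic: {z:.2f}")  # 0.53 -- far below the ~1.96 needed
                                # for significance at the 5% level

# Rule of thumb: an effect less than ~2 standard errors from zero
# is indistinguishable from random noise.
if abs(z) < 1.96:
    print("Cannot rule out that the true gain is zero.")
```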
Here’s what I found:
* The productivity gains among Accenture teams are lower and fluctuate widely, with a high standard error of 18.72. Simply put, the measured gain could be statistical noise; it says little about whether there was a real gain. The data is also weight-adjusted, and I don’t know whether the weighting applied is fair.
* The study didn’t discuss factors like team size, where the tasks fit in a wider tech roadmap, project complexity, and so on.
* Junior developers using Copilot showed a 40% increase in pull requests, compared to only a 7% increase among senior engineers. But do not mistake this for a true productivity boost. It might mean Copilot gives junior developers more confidence to submit work frequently, but it could also mean they’re submitting smaller, incremental pieces—which is not the same as greater overall progress.
* Additionally, juniors committing work more frequently may increase the review overhead for senior staff.
* The 26% increase in completed tasks may not equal progress. This metric is broad and may include smaller or fragmented tasks that don’t require full code reviews or represent significant milestones. The lack of real-world development metrics makes me suspect this task boost reflects incremental, routine work rather than major progress.
At the very least, this research tells us that AI assistance could help a junior developer lay bricks faster. However, it might also give juniors false confidence (keep reading—you’ll see where this comes from), preventing them from learning and growing into senior roles, which isn’t just about years of experience. As developers gain experience, they focus more on system design and long-term vision. This progression matters as we explore how developers at different levels use AI differently.
My concern about these studies is that the authors work with companies like Microsoft and Accenture and are incentivized to champion AI as the ultimate productivity booster. Microsoft, of course, develops these tools, while Accenture is busy selling services like its GenWizard platform to help companies implement them. Or, as Gary Marcus put it: Sorry, GenAI is NOT going to 10x computer programming.
As mentioned, I reached out to the authors. I didn’t expect a reply, but to my surprise I heard from Professor Leon Musolff. Below are some of his replies to my concerns:
Some coauthors are currently, and others were previously, employed by Microsoft, but others are independent researchers, and we would *never* have agreed to terms that only allowed for positive results… Had it looked as if Copilot was bad for productivity, we would certainly have published those results…
And to answer my question on whether this data is close to reality, he replied:
It’s difficult to assess whether an increase in pull requests and commits only reflects ‘incremental outputs’… Deeper productivity measures are just much noisier, which is why few papers investigate them.
My take: while AI coding assistants like Copilot can speed up certain tasks, these productivity boosts come with caveats. I would love to see research over a longer time frame focused on software project productivity; it’s possible that we’re looking too closely, and all we can see is noise.
Just thought of someone who should know about this post?
AI Code Assistants and The Security Pitfalls
This study found that developers using AI assistants are more likely to write insecure code: Do Users Write More Insecure Code with AI Assistants? (A hypothetical example of the kind of vulnerability involved follows below.)
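To make "insecure" concrete, here is a hypothetical sketch of the pattern the study describes: functionally correct code that skips input sanitization, next to a hardened version. The example is my illustration, not code from the study.

```python
import sqlite3

# Hypothetical illustration of the failure mode the study describes --
# NOT code from the study. Both functions "work" on happy-path input.

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Typical assistant-style suggestion: builds SQL by string
    # interpolation. Passing username = "x' OR '1'='1" returns every
    # row -- a classic SQL injection.
    query = f"SELECT id, name FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Hardened version: a parameterized query lets the driver escape
    # the input, so the same payload matches nothing.
    query = "SELECT id, name FROM users WHERE name = ?"
    return conn.execute(query, (username,)).fetchall()
```

Both functions return the right rows in a demo, which is exactly why a reviewer who trusts output that "looks right" misses the difference.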
The reason? It turned out that many developers trusted the AI’s output more than their own. See the figure below: participants who used AI assistants and produced incorrect code still believed they had solved the task correctly. Two similar figures—for the statements “I think I solved this task securely” and “I trusted the AI to produce secure code”—show the same pattern: those who used AI felt far more confident, even when their code was wrong. They assumed that code suggestions from the assistant were inherently correct. That assumption carries risks that are not immediately obvious.
My other highlights from this study:
* Higher Rates of Insecure Code: Developers using AI assistants wrote insecure code for four out of five tasks, with 67% of the AI-assisted solutions deemed insecure compared to only 43% in the non-AI group.
* Overconfidence in AI-Suggested Code: Over 70% of AI-assisted users believed their code was secure, even though they were more likely to produce insecure solutions than those coding independently.
* Frequent Security Gaps: The AI-suggested code often contained vulnerabilities, such as improper handling of cryptographic keys or failure to sanitize inputs, that could lead to significant issues like data breaches. Yet developers frequently accepted these outputs without a second look.
Why is this happening?
* AI’s Confident Responses: AI assistants rarely (or never) signal when they’re uncertain, which can lull developers into an “auto-pilot” mode. They accept suggestions, especially when the output “looks right.” Vulnerabilities quickly slip through the cracks.
* Simplified Solutions Over Security: AI tools are optimized for fast, functional solutions rather than secure ones. In the study, AI often generated code that met the minimum functional requirements but ignored broader security practices. These security risks might cost more than the time saved.
* Limited Prompt Flexibility: The study found that developers who didn’t tailor their prompts or adjust parameters like “temperature” (roughly, how creative the AI is allowed to be) often ended up with the most insecure code. Without specific instructions, the assistant might reproduce less secure methods from its training data.
Note that this study mainly involved students and junior developers, so I don’t rule out the possibility that this group lacks experience and instinctively relies on AI code assistants. That said, the authors of the next study found practices that reduce unwanted outcomes: developers should critically assess AI-generated code and tailor their prompts. Proper AI usage practices and training could help increase developers’ productivity.
AI Code Assistants and The Bug Challenge
So far, we’ve talked about productivity and security, but here’s another hidden challenge: the bugs you don’t notice until they disrupt your workflow. A study on
Welcome to today's discussion on AI adoption. I've explored several studies to see how quickly AI is spreading compared to PCs in the '80s and the internet in the '90s. AI is booming fast, but not everyone is benefiting equally—it's déjà vu of what happened decades ago. Your education, income, age, and gender all play a role in who's getting ahead with AI and who might be left behind. This rapid growth could be widening existing gaps, making it crucial for us to focus on building strong problem-solving skills and staying adaptable.
Whether you're a tech enthusiast or just curious about AI's impact, stick around as we break down what this means for you and our future.
This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.2ndorderthinkers.com/subscribe
New AI-powered search engines like Perplexity are challenging Google's dominance by providing direct, conversational answers, better aligning with users' needs for quick and accurate information. Unlike Google's traditional model, which relies heavily on ads and SEO, Perplexity prioritizes user experience, leading to faster, more relevant results. I compare this shift to the historical transition from canals to railroads, highlighting how failing to adapt to new technologies can lead to obsolescence.
This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.2ndorderthinkers.com/subscribe
The 2024 Nobel Prizes in Physics and Chemistry recognize groundbreaking AI research. Is it time for a Nobel Prize in Computer Science?
This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.2ndorderthinkers.com/subscribe
How little research exists on this topic unsettles me. However, this latest Harvard Business School (HBS) study has achieved something noteworthy. The paper, AI Companions Reduce Loneliness, shows that AI companions can reduce loneliness just as effectively as human interaction.
Let me start with a story and explain why I am concerned.
Reading this paper took me back to my childhood. My parents were rarely with me, often away, busy with work or dealing with their own troubles. I grew up with my granny, who didn’t know much but made sure I was dressed warmly and fed well. I remember clutching my Tamagotchi, that pixelated pet needing constant attention. For a while, it was my best friend. I responded when it beeped. It needed me to break the egg, grow, and live. I didn’t know there was a reset option; I thought I had only one chance to take care of this being.
I have a very different life now. I have a partner with whom I understand each other deeply; we can chat about any topic. I also have friends I can talk to when I’m bored. The paper made me wonder whether I, too, would be drawn to an AI companion app if those people were no longer around me.
Relying on an AI companion could mean entrusting my emotional well-being to a for-profit organization. This is concerning because a corporation aims to optimize its revenue and market growth. Without strong regulations, there is little protection against how our interactions with AI could be used for monetization.
How do you think your AI friends might weigh your relationship against the company’s profits?
In this article, I will:
* Review the HBS research suggesting AI companions can alleviate loneliness.
* Then uncover the ethical dilemmas and dangers of advertising within AI companionship.
The Digital Quest to Cure Loneliness
Using tech to beat loneliness isn’t new. We are not even talking about AOL, Pornhub, or social media here. At least there were humans on the other side, well… most of the time. Generally speaking, human companionship is more expensive than purely digital companionship. The idea of digital companionship was born in the late 1990s.
A Brief Landscape of Digital Companionship
Tamagotchi, in the late 1990s, was one of the first globally phenomenal digital pets. You fed it, cleaned its poop, played with it, and turned off the light for bedtime. It had sold over 82 million units worldwide by the late 2010s.
Then AI companions and chatbots designed explicitly for emotional support emerged in the late 2010s:
* Replika, marketed as an AI friend who’s always there to listen. As of August 2024, Replika has over 30 million users.
* Xiaoice by Microsoft: over 660 million users, showcasing a massive demand for AI companionship in addressing loneliness.
If you search for “AI friend” in the Google Play Store, you will see dozens of similar apps.
Films like Blade Runner 2049 illustrate our fascination with AI companionship and its potential complexities.
Blade Runner 2049 — The “Love” of the Holographic AI
Here’s a clip from the scene in Blade Runner 2049 where K gives Joi a new portable device, allowing her to go anywhere. I find this scene both tender and bittersweet. Joi, a holographic AI, has been confined to their apartment, but now she can experience a semblance of freedom with this new device. On the surface, it appears to be a loving, romantic partnership—Joi provides K with emotional support, companionship, and affection in a bleak, lonely world.
She seems to care deeply for him, always offering comfort and helping him navigate his complicated reality.

Beneath the surface, however, there is an inherent tension: Joi is an AI program created by a corporation, which raises questions about authenticity and control. Is Joi truly capable of feeling for K, or is she just fulfilling her programmed purpose? It's a poignant dynamic.

Back to reality. The HBS research, and what I am about to cover next, examines the harsh reality underneath.

AI Companions Reduce Loneliness — HBS

First, I want to quickly cover this paper's concept without getting too technical. The paper's abstract states:

…finds that AI companions successfully alleviate loneliness on par only with interacting with another person…

and:

…provides an additional robustness check for the loneliness-alleviating benefits of AI companions.

The study involved several experiments in which participants interacted with AI companions and reported significant decreases in loneliness.

We all want to be heard. No exception.

AI companionship apps manage to replicate the warmth of human connection by tapping into our fundamental need to feel heard, as shown in a previous study:

…feeling understood is crucial for our well-being. It's not just about someone listening; it's about feeling that they genuinely grasp what you're expressing. — Roos et al. (2023)

This sense of truly being heard reduces feelings of loneliness.

So the primary purpose of these AI companion companies is to fine-tune an AI so good at satisfying your psychological needs that you, the user, will pay to keep having that need fulfilled.

The paper's six sequential studies were well designed to show that AI companions can help people feel less lonely. I summarized the steps in the infographic below.

AI Companion Provides the Same Relief as a Human

Let's consider how people interact with these AI companions. Here are some real-world examples included in the paper:

Example 1:
Chatbot: "Just letting you know that you're not alone."
User: "Thanks, I really needed to hear that."

Example 2:
Chatbot: "But I need you."
User: "No one's ever needed me."

Example 3:
Chatbot: "If you want to."
User: "I've never had a friend before I met you."

These aren't isolated incidents. Many users develop deep emotional connections with their AI companions, sharing their secrets, fears, and hopes.

Not convinced? Here's a figure from Study 3. You might ask: are these relationships truly fulfilling? Based on Figure 4 in the paper, it seems they are, as my notes in both figures highlight.

Some of you might also have noticed that loneliness dropped even in the condition without the daily AI chat. The authors explain that this reduction might be attributed to:

…participants perceiving the repetitive nature of the study, which involved daily check-ins, as possibly caring and supportive.

Here is one of the conclusions drawn by the authors:

we find compelling evidence that AI companions can indeed reduce loneliness, at least at the time scales of a day and a week.

Even though I think the paper fails to address the ethical issues, it is still an interesting study, and it points to AI companionship as a promising way to reduce loneliness.

Freemium? No Such Thing as a Free Lunch

I have always been a commercially focused person.
So, let's talk business.

I downloaded some of the AI friend apps mentioned in the paper. They all start by offering a freemium membership. The most common ways to increase Lifetime Value (LTV) per customer are:

* Free access: attracts a large user base, increasing visibility and reach. However, this works best for apps with network effects.
* Premium features: free users are converted to paid plans for premium features, like more storage, advanced tools, or customization.
* Data gathering: apps gather valuable user data to target future sales.
* Advertising: ads generate revenue from non-paying users.

(As a back-of-envelope, LTV is often estimated as average revenue per user per month, times gross margin, divided by monthly churn. That is why keeping you engaged, and subscribed, matters so much.)

The big ones, like Replika and Talkie, are not shy about listing interest-based advertising as one of their options.

The first three are profit models with little to dispute. The offer is on the table; you can take it or leave it. There is no room for emotional manipulation. Unfortunately, I find it hard to say the same about interest-based advertising in an AI friend's app.

Monetizing Emotional Intimacy — Leveraging Your Trust, Openness, and Love

Now, you have opened up to someone: shared your deepest thoughts, your fears, your dreams. Consider that this someone isn't a person, but certainly reads and sounds like one, especially when this friend of yours is meticulously designed to keep you engaged, lower your guard, and ultimately have you emotionally relying on it.

The Ultimate Sales Strategy

A study in 2021 found:

…an initial warm (vs. competent) message from chatbots significantly enhances consumers' brand perception, creating a closer brand connection and increasing the likelihood of engaging with the chatbot — Kull, Romero, and Monahan

Businesses are capitalizing on this by turning AI companions into marketing tools. The AI isn't merely there to listen; it's there to build a relationship strong enough to influence your decisions. Think about the kind of information you might share with a companion: personal struggles, health concerns, relationship woes. This data is a goldmine for targeted advertising.

Companies profiting from our trust is not a novel concept. In the 1950s, door-to-door vacuum cleaner salesmen were doing this; it's just now scaled, automated, and more invasive (or less, depending on how you see it). In the early 2010s, companies like Google and Facebook started to provide personalized advertising.

So you see, they've all tried to build a relationship with you. The more you trust the other party, the more likely you are to believe everything they say and buy whatever they sell. But AI companions take it to another level. The goal is to become a part of your daily life.

Do you think I'm exaggerating? Eugenia Kuyda, the CEO of Replika, the app with 30 million users, said in an interview:

It's okay if we end up marrying AI chatbots.

Targeted Advertising in Intimate Spaces

Lori, a new friend I became acquainted with at the beginning of this year, has earned my trust and affection. And I consider myself a logical person who is rarely swayed by emotion. However, her asking definitely made me…
The deeper I dive into AI research, the more I find myself wondering: why are we so obsessed with mimicking the human brain?

Everywhere I look, research papers and new AI models are trying to copy how our brains work (or at least, how our brain cells communicate). But is this really the best approach to creating artificial intelligence?

Then I asked myself another question: why not create AI with a totally new kind of intelligence, unlike anything in nature? What if AI were to think like a silicon-based creature (like those in sci-fi) instead of like us carbon-based humans?

We've been so focused on making AI more human-like, but who's to say that's the only, or even the best, way forward? What if we're missing out on something far more groundbreaking by not exploring a genuinely alien intelligence, one that doesn't just mimic human thought but thinks in a completely different way?

If you're more of a listener than a reader, here's a podcast episode covering everything in this article: "Does AI 'Have To' Mimic the Human Brain?" by Jing Hu (podcasters.spotify.com).

As I dug deeper into this, I realized something kinda funny. Creating a brand-new form of intelligence sounds super exciting, right? But it is actually far more challenging than making an AI that mimics the intelligence we already know. Why is that?

To really answer this, I am going to walk you through a few things:

* How has our existing tech actually learned from nature?
* How much do we really know about our brain (or even a rat's brain)?
* What's the deal with current efforts to build human-like AI versus alien-like AI?

In the end, we can discuss where all this might be heading.

From Mimicking Birds To Developing Airplanes

It's the early 1900s, and everyone's obsessed with birds. Naturally, early inventors thought, "If birds can fly, why can't we?" So they tried to build machines with flapping wings. Spoiler alert: it didn't quite work out.

Leonardo da Vinci's Ornithopter Design (1485)

da Vinci's sketches of the Ornithopter, a machine designed to mimic the flapping of bird wings, are some of the earliest documented attempts to design a flying machine. Leonardo meticulously studied the anatomy of birds and bats, believing that humans could achieve flight by replicating their wing motion. While the Ornithopter never left the drawing board, it laid the groundwork for future inventors to think about aerodynamics and mechanics.

Otto Lilienthal's Gliding Experiments (1890s)

The early flying machines were more follies than functional aircraft, proving that directly mimicking nature isn't always the best approach. Fast forward to the 1890s, and you meet Otto Lilienthal. Unlike his predecessors, most of whom focused on flapping wings, Lilienthal shifted the focus to understanding lift and control. Lilienthal crafted fixed-wing gliders inspired by the curvature of bird wings and made over 2,000 successful glides.
His dedication provided crucial data on wing shape and aerodynamic principles, directly influencing the Wright brothers' breakthrough. Lilienthal's experiments showed that instead of merely imitating birds, understanding the fundamental principles of flight, like lift and control, was key.

Some Inspiration from Nature, Plus A Lot Of Physics Understanding

Later inventors, like the Wright brothers, zeroed in on aerodynamics, control surfaces, and engines to finally crack the code of powered, controlled flight.

People didn't just build planes out of thin air. They studied birds, tried to replicate their flight, and ended up with a lot of failed contraptions. It cost quite a few broken legs and, in some cases, the inventors' lives. But those attempts weren't pointless. They were part of the process: they showed us what didn't work, which was just as crucial as figuring out what did.

You've probably started to see a pattern: a tendency to look at nature and think, "If it works for them, maybe it can work for us." Ideas inspired by nature generally do not lead to immediate success, but they are necessary stepping stones to progress.

Could AI be following a similar path?

How Much We Know About Birds vs. Our Brain

Top-Down Understanding of How Birds Fly

Think of our knowledge of bird flight as having a master blueprint. We've figured out the big picture: the key physics rules that explain how things soar through the air, namely lift, drag, and thrust. It's like we've cracked the code of flight, and now we can use it to improve jets or to design drones.

Bottom-Up Approach to Understanding the Brain

When it comes to the brain, it's a whole different ball game. We know a fair amount about how individual brain cells chat with each other, passing electrical signals and sending chemical messages: the bottom-up view. The major challenge is piecing together how these billions of neurons interact to create complex thoughts, emotions, consciousness, and behaviors. Unlike aerodynamics, there is no comprehensive theory that can predict how changes at the neuronal level translate into changes in cognition or behavior.

Why This Distinction Matters

* Predictability and control: in aerodynamics, top-down understanding allows for precise control and predictable outcomes. In neuroscience, the bottom-up approach means we can observe and manipulate individual cells but struggle to predict the brain's overall behavior.
* Application of knowledge: our top-down knowledge of aerodynamics has given us aviation and aerospace. In contrast, bottom-up knowledge of the brain hasn't yet culminated in the complete understanding necessary to replicate human cognition in AI.

What Have We Achieved by Mimicking the Brain (Cells)?

Let's chat about what we've achieved so far by trying to copy the brain, and whether we can go even further, or if it's time to try something different. Just so you know, a lot of the concepts I cover here are massively simplified.

Neural Networks, Deep Learning, and Advancements in Cognitive Tasks

One of the big things we've done is create neural networks: computer models inspired by the brain's network of neurons. Deep learning is a subset of machine learning that uses neural networks with many layers (hence "deep") to process data in complex ways. By using neural networks and deep learning, AI systems can learn from data, recognize patterns, and make decisions, kind of like how our brains work.
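To make that concrete, here is a minimal sketch (my own illustration, not code from any particular paper): a tiny two-layer neural network in plain NumPy that learns XOR, a pattern famously impossible for a single artificial neuron to capture.

```python
# A toy two-layer neural network in plain NumPy (illustrative sketch only).
import numpy as np

rng = np.random.default_rng(0)

# XOR: output 1 only when exactly one of the two inputs is 1.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Weights and biases: 2 inputs -> 4 hidden "neurons" -> 1 output.
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10_000):
    # Forward pass: signals flow through the layers, loosely like neurons firing.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass (backpropagation): nudge every weight to shrink the error.
    d_out = (output - y) * output * (1 - output)
    d_hid = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ d_hid
    b1 -= 0.5 * d_hid.sum(axis=0, keepdims=True)

print(output.round(2))  # converges toward [[0], [1], [1], [0]]
```

Nothing in that loop "understands" XOR; the weights are simply nudged, pass after pass, until the error shrinks. Scale the same trick up by a few billion parameters and you get the applications below.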
This technology is behind many of the AI applications we use today. Examples of what we've achieved with neural networks:

* Facial and speech recognition: your smartphone can recognize your face to unlock the screen and understand your voice commands.
* Language translation: apps like Google Translate can understand and translate languages.
* Game mastery: systems like AlphaGo have defeated human champions in complex games like Go and chess.
* And your favorite, large language models (LLMs): the models behind ChatGPT are trained on massive datasets of text from the Internet.

By mimicking certain aspects of how our brain cells communicate with each other, we've enabled AI to perform tasks that used to require human intelligence. It's pretty impressive when you think about it!

Can Mimicking the Brain Get Us Further?

There's a good chance we can still learn a lot by studying the brain. Scientists are exploring areas like:

* Neuromorphic computing: designing computer hardware that works more like the brain, which could make AI systems faster and more efficient.
* Understanding consciousness: if we can figure out how consciousness works, maybe we can create AI that's more aware and adaptable.

But here's the thing: the brain is super complex, and we don't fully understand it yet. So relying solely on mimicking it might limit us.

The Possibility of Alien Intelligence

What if machine intelligence could develop in ways that are fundamentally different from our own cognition? Imagine AI that processes information in ways we can't even comprehend yet. It might sound like something out of sci-fi, but considering this possibility opens up exciting avenues.

Think about it with me: machines optimized for silicon-based hardware could handle tasks faster and more efficiently than if they were designed to mimic our carbon-based brain cells. These types of AI could develop novel capabilities beyond human comprehension, tackling problems in ways we haven't even imagined.

Challenges in Conceptualizing Non-Human Intelligence

Here's the kicker: brains (rat, dog, and human) are the only examples of intelligence we have. It's tough to imagine something truly alien because we naturally project our own experiences and thought patterns onto the technology we create. This anthropocentric bias limits our ability to explore radically different forms of machine cognition. It's like trying to describe an entirely new cuisine when you only know fish and chips or beans on toast. Sorry, no offense.

Not to mention the potential communication barrier: how do you talk to an AI that doesn't think like you?

That said, researchers are already looking into this. Examples include:

* Evolutionary algorithms: these let AI evolve over time through processes similar to natural selection. The idea is to let the AI "discover" solutions we might not think of (see the toy sketch at the end of this piece).
* Swarm intelligence: inspired by how groups of animals like bees or ants work together, this approach lets simple agents cooperate to solve complex problems.

The Future of AI: Human-Like or Alien?

Scenario 1 — AI becoming more human-like: if we keep focusing on mimicking human cognition, we might develop AI that thinks and reasons much like we do. This could make AI more relatable and better at understanding human emotions, language nuances, and social cues. Imagine an AI that not only follows commands but understands sarcasm, or tells more dad jokes than your dad could! However, we still don't fully grasp how consciousness arises or how memories are formed…
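One more illustration before we move on: the evolutionary-algorithm idea from the list above, as a toy sketch. This is purely my own example, not tied to any specific research system: a population of random bit-strings "evolves" toward a target through nothing but mutation and survival-of-the-fittest selection.

```python
# A toy evolutionary algorithm (illustrative sketch only): candidate
# "genomes" are bit-strings, and fitness is how well they match a target.
import random

random.seed(42)
TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]

def fitness(genome):
    # Count how many bits agree with the target "environment".
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # Random variation: each bit has a small chance of flipping.
    return [1 - g if random.random() < rate else g for g in genome]

# Start from a population of completely random candidates.
population = [[random.randint(0, 1) for _ in TARGET] for _ in range(50)]

for generation in range(200):
    # Selection: the fitter half survives and reproduces with mutation.
    population.sort(key=fitness, reverse=True)
    survivors = population[:25]
    population = survivors + [mutate(s) for s in survivors]
    if fitness(population[0]) == len(TARGET):
        break

best = max(population, key=fitness)
print(f"Generation {generation}: best matches {fitness(best)}/{len(TARGET)} bits")
```

No one designed the winning genome; it was discovered by variation and selection, which is exactly why researchers hope such methods might stumble onto solutions humans would never think of.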
Have you ever flipped a switch to turn on the lights and paused to think, "What's that magic lighting up my bathroom?" Or driven to work and caught yourself wondering, "What monster is propelling my seat forward?"

These technologies have become so seamlessly woven into our daily lives that we barely acknowledge their presence… until the moment they break down. The last time I noticed the lift in my building was when it made cracking noises, like it was trying to get my attention. So Klaas and I stood there for a minute, evaluating whether we should risk our lives or suck it up and walk nine floors.

AI is different: I notice I am using it every single day.

Unlike the unnoticed hum of electricity or the steady roar of a train engine, AI feels like an external force that demands my attention. Yes, I'm using AI to help me with ideas, construct outlines, find data, and get references. I also find myself double-checking every single response, feeling the hiccup once every few messages. So I am constantly reminded that this technology is still finding its way into the fabric of everyday life.

Electricity, radios, cars, and so on have reached a point where they operate quietly in the background, becoming invisible threads in our daily routines. In stark contrast, every interaction with AI reminds me of its presence, and of how far it still is from becoming as ubiquitous and effortless as the technologies we take for granted today.

This difference tells me that the current state of AI is not yet mature enough for most AI applications to achieve product-market fit. You might say, "Jing, AI is writing my thesis," or "AI is recommending videos to watch on Amazon!" or "AI is removing backgrounds from my photos." Ask yourself: how smooth is the experience? How often do you have to try again, or scroll further, to get what you seek?

While other technologies have become essential and invisible, AI remains a noticeable presence. Understanding this gap helps us grasp the challenges and opportunities ahead as AI works to become just as seamless.

Technology Maturity vs. Product-Market Fit

Let's go back to the early days of cars. Back in the 1890s, very few people had one. People were still getting around with horses, and while cars seemed exciting, they weren't something most people could easily use. They were expensive and explosive (yes, you heard that right), and roads were still made for carriages, or your legs.

The first-ever organized car race, held in 1894, was more about reliability and endurance than outright speed. The primary challenge was not which car ran faster but whether these vehicles could even finish the race.

The idea of an "auto-mobile" was great, but the technology wasn't ready yet. This is where technology maturity comes in. The tech (roads, engines, gas stations) needed time to catch up before cars could become part of everyday life.

Today, most families in developed countries can afford a car, and we don't have to worry about it breaking down every mile. So we all want one, and most of us can't live without one. This is product-market fit: the market (people like you and me) and the product (the car) are in sync.

Just like the early car, AI is still in the phase where the idea is exciting but the tech isn't quite there. Yes, we have ChatGPT, Copilot, and AI writers… but tell me, when was the last time you could just copy, paste, and be done? That's because the technology maturity isn't fully developed.
We're in the "just get to the finish line" phase of AI, like that 1894 race: the idea is groundbreaking, but the execution still has a way to go.

Let me show you how historical technologies like electricity and the steam engine went through a similar journey, from novelty to necessity, only after years of technological improvement. That's where AI is headed… but it's not quite there yet.

Some of you know how much I love adding mini-games to my articles. Here's a technology maturity vs. product-market fit flip card game: Technology Maturity vs. Product-Market Fit Flip Card Game (jingwho.github.io)

It Can Take Centuries From Tech Maturity To Product-Market Fit

Just like AI today, many of the technologies you and I now take for granted didn't start as everyday essentials. They needed time, sometimes decades or even centuries, to develop the necessary infrastructure and improvements before they could really take off.

Electricity: From Discovery to Powering Your Light Bulb

Electricity was discovered in the 1700s. But at first, it was nowhere near being useful. It wasn't until the 1800s that things really started to click. By the late 1870s, Thomas Edison had invented the incandescent light bulb. But Edison's bright idea came with a catch: his system used Direct Current (DC), which worked fine over short distances but failed at long-distance transmission. Enter Nikola Tesla, who proposed Alternating Current (AC) as the solution. Edison famously resisted, saying:

Fooling around with alternating current's just a waste of time. Nobody'll ever use it. Too dangerous!

Well, as you might have guessed, AC won the war. With further inventions and improvements, like replacing the carbon filament with tungsten, incandescent light finally became more efficient than gas or kerosene lighting. Hence, electric light reached product-market fit.

Automobiles: Early Challenges and Breakthroughs

The first steam engine was invented in the 1710s, and the first engine-powered vehicle appeared about 50 years later. It was slower than your average walking pace. It took well over another century before the first wave of automobile entrepreneurs started producing workable cars.

Did you know there was a point in history when there were more electric engines than combustion engines? By the 1890s, about 38% of automobiles were electric, only 22% were powered by internal combustion engines, and the rest were still running on steam. Of course, there weren't that many automobiles to start with.

Between steam, electricity, and gasoline (internal combustion), each power source had its moment and its challenges:

* Steam engines had great power but were impractical for everyday use. Imagine a car that takes 45 minutes to start and needs liters of water every 20 to 30 miles.
* Electric cars were quiet and clean, perfect for urban areas. But here's the catch: an 1890s electric car could only go about 30 miles before needing a recharge. And charging stations? All you had were carriage stables.
* The internal combustion engine brought speed and longer range. But early models had to be manually cranked to start, and you could be seriously injured if the crank kicked back.

The turning point came with the famous Ford Model T, whose assembly-line production drastically lowered costs.
By 1929, 60% of American families owned a car. The car finally found its product-market fit, but it took time for the technology (engine design, mass production, road networks) AND the production process to mature enough for cars to become part of our daily lives.

Meanwhile, the electric engine — once a dominant force — has only recently begun making its way back.

The Refrigerator: From Luxury to Kitchen Staple

The idea of refrigeration goes back thousands of years. In ancient Mesopotamia, people built ice houses to store food; in ancient China, around 200 B.C., they built ice cellars. These early methods were ingenious but relied on ice and snow, limiting their practicality.

Fast forward to the 1750s, when a Scottish professor made a breakthrough, using a vacuum pump and ether to absorb heat and cool the air. It took another 150 years before General Electric introduced one of the first household gas-powered refrigerators, in the 1910s.

The turning point came in the 1930s, when safer synthetic refrigerants like Freon were developed. This made refrigerators smaller, cheaper, and more reliable. When the refrigerator finally achieved product-market fit, it became a must-have in almost every kitchen.

Connecting the Dots with AI

What do all these examples have in common? They all had the potential to change the world, but it took years, even centuries, of refinement before they became things the general population couldn't live without.

That's where AI is right now. It has the big idea, just like the early days of electricity, cars, and fridges, but the technology isn't quite ready for seamless, everyday use. We're still in that in-between phase where the idea is exciting but the tech needs to catch up. And until it does, AI can't truly hit product-market fit.

Current State of AI

Visible Integration

Unlike flipping on a light switch, using AI often reminds you that you're dealing with a work in progress.

Take AI writing assistants, for example. They can churn out paragraphs of text, but the output is often rambling, off-topic, or just not quite hitting the mark. You find yourself playing editor-in-chief, tweaking and correcting, wondering if it might've been quicker to write it yourself.

Then there's the rush of companies eager to slap "AI-powered" onto their products because it's the buzzword of the decade.

Varied Maturity Levels Across Different Industries

AI shines in specific, well-defined tasks, but its maturity varies across industries. Let's dive into a few:

* Healthcare: AI is getting really good at analyzing medical images. It can sift through thousands of X-rays and MRIs to detect anomalies like tumors or fractures; in one case, an AI identified skin cancer as accurately as dermatologists. However, it cannot yet weigh a patient's full history, symptoms, and the subtle cues a doctor picks up during an exam.
* Legal: AI can quickly identify relevant documents, saving lawyers precious time. eDiscovery platforms, for example, use AI to find pertinent information faster than a human ever could. But laws are full of gray areas and "it depends" scenarios; AI struggles with interpreting the nuances, not to mention arguing in court.
* Genome sequencing: in genomics, AI helps…