El Podcast


Author: El Podcast Media


Description

In El Podcast, anything and everything is up for discussion. Grab a drink and join us in this epic virtual happy hour!
167 Episodes
Steven Ross Pomeroy, Chief Editor of RealClearScience, joins the podcast to discuss NASA’s abandoned nuclear propulsion programs, the future of AI and white-collar work, the rise of “scienceploitation,” and how information overload is reshaping human cognition.

GUEST BIO:
Steven Ross Pomeroy is a science writer and Chief Editor of RealClearScience. He writes frequently for Big Think, covering space exploration, neuroscience, AI, and science communication.

TOPICS DISCUSSED:
- NASA’s nuclear propulsion program (1960s–1970s)
- Why nuclear rockets were abandoned
- Differences between chemical, nuclear thermal, and nuclear electric propulsion
- Using the Moon as a launch hub
- Moon-landing skepticism & conspiracy thinking
- The future of space mining
- AI adoption trends & hidden usage
- Agentic AI vs chatbots
- Job displacement: white-collar vulnerability
- Higher ed, skills, and career advice
- “Scienceploitation” and how marketing hijacks scientific language
- Immune-system myths & quantum woo
- Information overload and Google/AI-driven forgetting
- Critical thinking in the AI era
- The myth of speed reading
- How vocabulary and deep engagement improve comprehension

MAIN POINTS:
- NASA had functional nuclear-rocket tech in the 1960s, but political priorities, budget cuts, and waning public interest ended the program.
- Nuclear thermal rockets are ~2x as efficient as chemical rockets; nuclear electric propulsion could unlock deep-space exploration and mining.
- Space mining is technologically plausible, but its economic impact (like crashing gold prices) creates new problems.
- AI adoption is much higher than official numbers suggest—many workers use it quietly and off the books.
- Companies see low ROI today because they’re using simple chatbots, not advanced “agentic” systems that can take multi-step actions.
- White-collar jobs — not blue collar — are being automated first.
- Scienceploitation hijacks scientific buzzwords (“quantum,” “immune-boosting,” “natural”) to sell products with no evidence.
- We process 74 GB of information per day, roughly a lifetime’s worth for a well-educated person 500 years ago.
- Speed reading works only by sacrificing retention; the real way to read faster is to build vocabulary and deep attention.
- Skepticism, not cynicism, is the core skill we need in the AI-mediated media environment.

TOP 3 QUOTES:
- “It would’ve been harder to fake the moon landing than to actually land on the moon.”
- “Companies aren’t getting ROI from AI because they’re only using chatbots. The real returns come from agentic AI — and that wave is just beginning.”
- “We now process 74 gigabytes of information a day. Five hundred years ago, that was a lifetime’s worth for a highly educated person.”

🎙 The Pod is hosted by Jesse Wright
💬 For guest suggestions, questions, or media inquiries, reach out at https://elpodcast.media/
📬 Never miss an episode – subscribe and follow wherever you get your podcasts.
⭐️ If you enjoyed this episode, please rate and review the show. It helps others find us. Thanks for listening!
A wide-ranging conversation with Northeastern’s John Wihbey on how algorithms, laws, and business models shape speech online—and what smarter, lighter regulation could look like.

Guest bio:
John Wihbey is a professor of media & technology at Northeastern University and director of the AI Media Strategies Lab. Author of Governing Babel (MIT Press). He has advised foundations, governments, and tech firms (incl. pre-X Twitter) and consulted for the U.S. Navy.

Topics discussed:
- Section 230’s 1996 logic vs. the algorithmic era
- EU DSA, Brazil/India, authoritarian models
- AI vs. AI moderation (deepfakes, scams, NCII)
- Hate/abuse, doxxing, and speech “crowd-out”
- Platform opacity; case for transparency/data access
- Creator-economy economics; downranking/shadow bans
- Dead Internet Theory, bots, engagement gaming
- Sports, betting, and integrity (NBA/NFL)
- Gen Z jobs; becoming AI-literate change agents
- Teaching with AI: simulations, human-in-loop assessment

Main points & takeaways:
- Keep Section 230 but add obligations (transparency, appeals, researcher access).
- Europe’s DSA has exportable principles, adapted to U.S. free-speech norms.
- States lead on deepfake/NCII and youth-harm laws.
- AI offense is currently ahead; detection/provenance + humans will narrow the gap.
- Lawful hate/abuse can practically silence others’ participation.
- CSAM detection is harder with synthetics; needs better tooling/cooperation.
- News/creator models are fragile; ad dollars shifted to platforms.
- Opaque ranking punishes small creators; clearer recourse is needed.
- Engagement metrics are Goodharted; bots inflate signals.
- Live sports thrive on synchronization; gambling risks long-term integrity.
- Students should aim to be the person who uses AI well, not fear AI.

Top 3 quotes:
- “Keep 230, but add transparency and obligations—we don’t need censorship; we need visibility into how platforms actually govern speech.”
- “AI versus AI is the new reality—offense is ahead today, but defense will catch up with detection, provenance, and human oversight.”
- “The platform is king—monetization and discoverability are controlled by opaque algorithms, and that unpredictability crushes small creators.”
Finance professor Spencer Barnes explains research showing postseason officiating systematically favors the Mahomes-era Chiefs—consistent with subconscious, financially driven “regulatory capture,” not explicit rigging.

Guest bio:
Dr. Spencer Barnes is a finance professor at UTEP. He co-authored “Under Financial Pressure” with Brandon Mendez (South Carolina) and Ted Dischman, using sports as a transparent lab to study regulatory capture.

Topics discussed (in order):
- Why the NFL is a clean testbed for regulatory capture
- Data/methods: 13,136 defensive penalties (2015–2023), panel dataset, fixed-effects
- Postseason favoritism toward Mahomes-era Chiefs
- Magnitude and game impact (first downs, yards, FG-margin games)
- Subjective vs objective penalties (RTP, DPI vs offsides/false start)
- Regular season vs postseason differences
- Dynasty checks (Patriots/Brady; Eagles/Rams/49ers)
- Rigging vs subconscious bias
- Ratings, revenue (~$23B in 2024), media incentives
- Gambling’s rise post-2018 and bettor implications
- Taylor Swift factor (not tested due to data window)
- Ref assignment opacity; repeat-crew effects
- Tech/replay reform ideas
- Broader finance lesson on incentives and regulation

Main points & takeaways:
- Core postseason result: Chiefs ~20 percentage points more likely than peers to gain a first down from a defensive penalty.
- Subjective flags: ~30% more likely for KC in playoffs (RTP, DPI).
- Size: ~4 extra yards per defensive penalty in playoffs—small per play, decisive at FG margins.
- Regular season: No favorable treatment; slight tilt the other way.
- Ref carryover: Crews with a prior KC postseason official show more KC-favorable outcomes the next year.
- Not universal to dynasties: Patriots/Brady and other near-dynasties don’t show the same postseason effect.
- Mechanism: No claim of rigging; consistent with implicit bias under financial incentives.
- Policy: Use tech (skycam, auto-checks for false start/offsides), limited challenges for subjective calls, transparent ref advancement.
- General lesson: When regulators depend financially on outcomes, redesign incentives to reduce capture and protect fairness.

Top 3 quotes:
- “We make no claim the NFL is rigging anything. What we see looks like implicit bias shaped by financial incentives.” — Spencer Barnes
- “It only takes one call to swing a postseason game decided by a field goal.” — Spencer Barnes
- “If there’s money on the line, you must design the regulators’ environment so incentives don’t quietly bend enforcement.” — Spencer Barnes

Links/where to find the work: Spencer Barnes on LinkedIn (search: “Spencer Barnes UTEP”); paper Under Financial Pressure in the Financial Review (paywall) and as a free working paper on SSRN (search the title).
How human babies, big brains, and social life likely forced Homo sapiens to invent precise speech ~150–200k years ago—and what that means for learning, tech, and today’s kids.

Guest Bio:
Madeleine Beekman is a professor emerita of evolutionary biology and behavioral ecology at the University of Sydney and author of Origin of Language: How We Learned to Speak and Why. She studies social insects, collective decisions, and the evolution of communication.

Topics Discussed:
- Why soft tissues don’t fossilize; language origins rely on circumstantial evidence
- Three clocks for timing (~150–200k years): anatomy; trade/complex tech/art; phoneme “bottleneck”
- Why Homo sapiens (not Neanderthals) likely had full speech
- Language as a “virus” tuned to children; pidgin → creole via kids
- Second-language learning: immersion over translation
- Bees/ants show precision scales with ecological stakes
- Evolutionary chain: bipedalism → narrow pelvis + big brains → helpless infants → precise speech
- Ongoing human evolution (archaic DNA, altitude, Inuit lipid adaptations)
- Flynn effect reversal, screens, AI reliance, anthropomorphism risks
- Reading, early interaction, and the Regent honeyeater “lost song” lesson
- Universities, online classes, and “degree over learning”

Main Points:
- Multiple evidence lines converge on speech emerging with anatomically modern humans ~150–200k years ago.
- Anatomical and epigenetic clues suggest only Homo sapiens achieved full vocal speech.
- Extremely dependent infants created strong selection for precise, teachable communication.
- Children’s brains shape languages; kids regularize grammar.
- Communication precision rises when mistakes are costly (bee-dance analogy).
- Humans continue to evolve; genomes show selected archaic introgression and local adaptations.
- Tech-driven habits may erode cognition and language skill; reading matters.
- AI is a tool that imitates human output; humanizing it can mislead and harm, especially for teens.
- Start early: talk, read, and interact face-to-face from birth.

Top Quotes:
- “Only Homo sapiens was ever able to speak.”
- “Language will go extinct if it can’t be transmitted from brain to brain—the best host is a child.”
- “The precision of communication is shaped by how important it is to be precise.”
A candid conversation with psychologist Gerd Gigerenzer on why human judgment outperforms AI, the “stable world” limits of machine intelligence, and how surveillance capitalism reshapes society.

Guest bio:
Dr. Gerd Gigerenzer is a German psychologist, director emeritus at the Max Planck Institute for Human Development, a leading scholar on decision-making and heuristics, and an intellectual interlocutor of B. F. Skinner and Herbert Simon.

Topics discussed:
- Why large language models rely on correlations, not understanding
- The “stable world principle” and where AI actually works (chess, translation)
- Uncertainty, human behavior, and why prediction doesn’t improve much
- Surveillance capitalism, privacy erosion, and “tech paternalism”
- Level-4 vs. level-5 autonomy and city redesign for robo-taxis
- Education, attention, and social media’s effects on cognition and mental health
- Dynamic pricing, right-to-repair, and value extraction vs. true innovation
- Simple heuristics beating big data (elections, flu prediction)
- Optimism vs. pessimism about democratic pushback
- Books to read: How to Stay Smart in a Smart World, The Intelligence of Intuition; “AI Snake Oil”

Main points:
- Human intelligence is categorically different from machine pattern-matching; LLMs don’t “understand.”
- AI excels in stable, rule-bound domains; it struggles under real-world uncertainty and shifting conditions.
- Claims of imminent AGI and fully general self-driving are marketing hype; progress is gated by world instability, not just compute.
- The business model of personalized advertising drives surveillance, addiction loops, and attention erosion.
- Complex models can underperform simple, well-chosen rules in uncertain domains.
- Europe is pushing regulation; tech lobbying and consumer convenience still tilt the field toward surveillance.
- The deeper risk isn’t “AI takeover” but the dumbing-down of people and loss of autonomy.
- Careers: follow what you love—humans remain essential for oversight, judgment, and creativity.
- Likely mobility future is constrained autonomy (level-4) plus infrastructure changes, not human-free level-5 everywhere.
- To “stay smart,” individuals must reclaim attention, understand how systems work, and demand alternatives (including paid, non-ad models).

Top quotes:
- “Large language models work by correlations between words; that’s not understanding.”
- “AI works well where tomorrow is like yesterday; under uncertainty, it falters.”
- “The problem isn’t AI—it’s the dumbing-down of people.”
- “We should become customers again, not the product.”
How a tech insider who helped build billion-view machines explains the attention economy’s playbook—and how to guard your mind (and data) against it.

Guest bio:
Richard Ryan is a software developer, media executive, and tech entrepreneur with 20+ years in digital. He co-founded Black Rifle Coffee Company and helped take it public (~$1.7B valuation; $396M revenue in 2023). He’s built multiple apps (including a video app released four years before YouTube) with millions of downloads, launched Rated Red to 1M organic subscribers in its first year, and runs a YouTube network—led by FullMag (2.7M subs)—that has surpassed 20B views.

Topics discussed:
- The attention economy and 2012 as the mobile/monetization inflection point
- Algorithm design, engagement incentives, and polarization
- Personal costs (anxiety, comparison traps, body dysmorphia, addiction mechanics)
- Privacy and data brokers, smart devices, cars, geofencing
- Policy ideas (digital rights, accountability, incentive realignment)
- Practical defenses (digital detox, friction, community, gratitude, boundaries)
- Careers, college, and meaning in an AI-accelerating world

Main points:
- Social platforms optimize time-on-device; “For You” feeds exploit threat/dopamine loops that keep users anxious and engaged.
- 2012 marked a shift from tool to extraction: mobile apps plus partner programs turned attention into a tradable commodity.
- Outrage and filter bubbles are amplified because drama wins in the algorithmic reward system.
- Privacy risk is systemic: data brokers, vehicle SIMs, and IoT terms build behavioral profiles beyond traditional warrants.
- Individual resilience beats moral panic: measure use, do a 30-day reset, add friction, and invest in offline community and gratitude.
- Don’t mortgage your life to debt or trends; pursue adaptable, meaningful work—every field is vulnerable to automation.
- Societal fixes require incentive changes (digital rights, simple single-issue bills, real accountability), not just complaints.

Top 3 quotes:
- “In 2012, you went from using your iPhone to the iPhone using you.”
- “If you can’t establish boundaries and adhere to them, you have a problem.”
- “The spirit of humanity shines in the face of adversity—we love an underdog story, and this is the underdog story.”
Dr. Luke Kemp, an Existential Risk Researcher at the University of Cambridge, shows how today’s plutocracy and tech-fueled surveillance imperil society—and what we can do to build resilience.

Guest bio:
Dr. Luke Kemp is an Existential Risk Researcher at the Centre for the Study of Existential Risk (CSER) at the University of Cambridge and author of Goliath’s Curse: The History and Future of Societal Collapse. His work examines how wealth concentration, surveillance, and arms races erode democracy and heighten global catastrophic risk.

Topics discussed:
- The “Goliath” concept: dominance hierarchies vs. vague “civilization”
- Are we collapsing now? Signals vs. sudden shocks
- Inequality as the engine of fragility; lootable resources & data
- Tech’s role: AI as accelerant, surveillance capitalism, autonomous weapons
- Nuclear risk, climate links, and system-level causes of catastrophe
- Democracy’s erosion and alternatives (sortition, deliberation)
- Elite overproduction, factionalism, and arms/resource/status “races”
- Collapse as leveler: winners, losers, and myths about mass die-offs
- Practical pathways: leveling power, wealth taxes, open democracy

Main points:
- “Civilization” consistently manifests as stacked dominance hierarchies—what Kemp calls the Goliath—which naturally concentrate wealth and power over time.
- Rising inequality spills into political, informational, and coercive power, making societies brittle and less able to correct course.
- Existential threats are interconnected; AI, nukes, climate, and bio risks share causes and amplify each other.
- AI need not be Skynet to be dangerous; it speeds arms races, surveillance, and catastrophic decision cycles.
- Collapse isn’t always apocalypse; often it fragments power and improves life for many outside the elite core.
- Durable safety requires leveling power: progressive/wealth taxation, stronger democracy (especially sortition-based, deliberative bodies), and curbing surveillance and arms races.

Top 3 quotes:
- “Most collapse theories trace back to one driver: the steady concentration of wealth and power that makes societies top-heavy and blind.”
- “AI is an accelerant—pouring fuel on the fires of arms races, surveillance, and extractive economics.”
- “If we want a long future, we don’t just need tech fixes—we need to level power and make democracy real.”
A deep dive with historian Dr. Fyodor Tertitskiy on how North Korea’s dynasty survives—through isolation, terror, and nukes—and why collapse or unification is far from inevitable.

Guest bio:
Fyodor Tertitskiy, PhD, is a Russian-born historian of North Korea and a senior research fellow at Kookmin University (Seoul). A naturalized South Korean based in Seoul, he is the author of Accidental Tyrant: The Life of Kim Il-sung. He speaks Russian, Korean, and English, has visited North Korea (2014, 2017), and researches using Soviet, North Korean, and Korean-language sources.

Topics discussed:
- Daily life under extreme authoritarianism (no open internet, monitored communications, mandatory leader portraits)
- Kim Il-sung’s rise via Soviet backing; historical fabrications in official narratives
- 1990s famine, loss of sponsors, rise of black markets and bribery
- Nukes/missiles as regime-survival tools; dynasty continuity vs. unification
- Why German-style unification is unlikely (costs, politics, identity; waning support in the South)
- Regime control stack: isolation, propaganda “white list,” terror, collective punishment
- Reliability of defectors’ accounts; sensationalism vs. fabrication
- Research methods: multilingual archives, leaks, captured docs, propaganda close-reading
- Elite wealth vs. citizen poverty; renewed patronage via Russia
- Coups/assassination plots, succession uncertainty
- North Korean cyber ops and crypto theft
- “Authoritarian drift” debates vs. media hyperbole in democracies
- Life in Seoul: safety, civility, culture

Main points:
- North Korea bans information by default and enforces obedience through fear.
- Elites have everything to lose from change; nukes deter regime-ending threats.
- Unification would be socially and fiscally seismic; absent a Northern revolution, it’s improbable.
- Markets and graft sustain daily life while strategic sectors get resources.
- Collapse predictions are guesses; stable yet brittle systems can still break from shocks.
- Defector claims need case-by-case verification; mass CIA scripting is unlikely.
- Archival evidence shows key “facts” were retrofitted to build the Kim myth.
- Democracy’s victory isn’t automatic—citizens and institutions must defend it.

Top 3 quotes:
- “There is no internet unless the Supreme Leader permits it—and even then, someone from the secret police may sit next to you taking notes.”
- “They will never surrender nuclear weapons—nukes are the guarantee of the regime’s survival.”
- “The triumph of democracy is not automatic; there is no fate—evil can prevail.”
Dr. Devon Price unpacks “the laziness lie,” how AI and “bullshit jobs” distort work and higher ed, and why centering human needs—not output—leads to saner lives.

Guest bio:
Devon Price, PhD, is a Clinical Associate Professor of Psychology at Loyola University Chicago, a social psychologist, and a writer. Price is the author of Laziness Does Not Exist, Unmasking Autism, and Unlearning Shame, focusing on burnout, neurodiversity, and work culture.

Topics discussed:
- The laziness lie: origins and three core tenets
- AI’s effects on output pressure, layoffs, and disposability
- Overlap with David Graeber’s Bullshit Jobs and status hierarchies
- Adjunctification and incentives in academia
- Demographic cliff and the sales-ification of universities
- Career choices in an AI era: minimize debt and stay flexible
- Remote work’s productivity spike and boundary erosion
- Burnout as a signal to rebuild values around care and community
- Gap years, social welfare, and redefining “good jobs”
- Practicing compassion toward marginalized people labeled “lazy”

Main points:
- The laziness lie equates worth with productivity, distrusts needs/limits, and insists there’s always more to do, fueling self-neglect and stigma.
- Efficiency gains from tech and AI are converted into higher expectations rather than rest or shorter hours.
- Many high-status roles maintain hierarchy more than they create real value; resentment often targets meaningful, low-paid work.
- U.S. higher ed relies on precarious adjunct labor while admin layers swell, shifting from education to a jobs-sales funnel.
- In a volatile market, avoid debt, build broad human skills, and choose adaptable paths over brittle credentials.
- Remote work raised output but erased boundaries; creativity requires rest and unstructured time.
- Burnout is the body’s refusal of exploitation; recovery means reprioritizing relationships, art, community, and self-care.
- A humane society would channel tech gains into shorter hours and better care work and infrastructure.
- Revalue baristas, caregivers, teachers, and artists as vital contributors.
- Everyday practice: show compassion—especially to those our culture labels “lazy.”

Top three quotes:
- “What burnout really is, is the body refusing to be exploited anymore.” — Devon Price
- “Efficiency never gets rewarded; it just ratchets up the expectations.” — Devon Price
- “What is the point of AI streamlining work if we punish humans for not being needed?” — Devon Price
Dr. Joseph Crawford unpacks how AI is reshaping higher education: eroding student belonging, redefining assessment in a post-plagiarism era, and raising the stakes for soft skills.

Guest bio:
Dr. Joseph “Joey” Crawford is a Senior Lecturer in Management at the University of Tasmania and ranks among the top 1% of most-cited researchers globally. His work centers on leadership, student belonging, and the role of AI in higher education, and he serves as Editor-in-Chief of a leading education journal.

Topics discussed:
- AI in higher education and the “post-plagiarism” era
- Student belonging, loneliness, and mental health impacts
- Massification of education (8% → 30% → 50.2% participation)
- Programmatic assessment vs. essays/exams
- COVID-19’s lasting effects on campus culture and learning
- Recorded lectures, flipped learning, and in-person tradeoffs
- Soft skills, leadership education, and employability
- Academic integrity, peer review, and AI misuse by faculty
- Labor shortages, graduate readiness, and industry pathways
- Social anxiety, AI “friendship,” and GPA outcomes

Main points & takeaways:
- AI substitutes human support: Heavy chatbot use can provide a sense of social support but correlates with lower belonging and reduced GPA compared to human connections.
- Belonging matters: Human social support predicts higher well-being and better academic performance; AI support does not translate into belonging.
- Post-plagiarism reality: Traditional lecture-plus-essay or multiple-choice assessment is increasingly unreliable for verifying authorship.
- Assessment is shifting: Universities are exploring programmatic assessment—fewer, higher-stakes integrity checks across a degree instead of every course.
- Massification pressures quality: Participation in Australia rose from 8% (1989) to 30% (2020) to 50.2% (2021), straining rigor and prompting curriculum simplification and grade inflation.
- COVID + ChatGPT = double shock: Online habits and interaction anxiety from the pandemic compounded with AI convenience, reducing peer-to-peer engagement.
- Less face time: Many business courses dropped live lectures; students now spend ~2 hours less in class per subject, raising the bar for workshops to build soft skills.
- Workforce mismatch: Employers want communication and leadership; graduates often lack mastery because entry-level “practice” tasks are automated.
- Faculty risks too: Using AI to draft peer reviews can embed weak scholarship into training corpora and distort future models.
- Pragmatic advice: Don’t fear AI—use it—but replace lost micro-interactions with real people and deliberately practice human skills (e.g., leadership, psychology).

Top quotes:
- “We’re in a post-plagiarism world where knowing who wrote what is a real challenge.”
- “Some students are replacing librarians, peers, and support staff with bots—they’re fast, infinitely friendly, and never judge.”
- “AI social support doesn’t create belonging—and that shows up in grades.”
- “The lecture isn’t gone, but in many programs it’s recorded—and students now get less in-person time.”
- “Don’t substitute AI-created efficiency with more work—substitute it with more people.”
Author Eric Weiner argues that happiness depends less on wealth or location than on relationships, meaning, trust, and realistic expectations—while tech and social media often push the other way.

Guest bio:
Eric Weiner is a bestselling author and former NPR foreign correspondent whose books include The Geography of Bliss, The Geography of Genius, The Socrates Express, and Ben and Me. He writes about place, meaning, creativity, and how to live well.

Topics discussed:
- The “where” of happiness vs. the “what/who”
- Nordic stability in the World Happiness Report
- Moldova as a control case for unhappiness
- Relationships as the core driver of well-being
- Social media, AI, and the erosion of meaning/trust
- Money, inequality, and the Easterlin paradox
- U-shaped curve and Gen Z’s flattening
- Travel as transformation (place as permission)
- Gross National Happiness (Bhutan) vs. GDP
- Expectations as the enemy of happiness

Main points:
- Relationships matter most: “other people” are the two-word secret.
- Money helps only to a modest threshold; then diminishing returns.
- Inequality alone doesn’t predict happiness; trust does.
- Tech/social media amplify envy and faux-connection, sapping meaning.
- AI optimizes “good enough,” not creative leaps; it can erode trust.
- Gen Z shows worrying dips in meaning/connection post-2015 + pandemic.
- Travel reframes perspective; you can’t outrun yourself.
- Focus on process over outcomes; detach effort from results.

Top quotes:
- “If I had to sum up the secret to happiness in two words: other people.”
- “Expectations are the enemy of happiness—invest 100% in effort, 0% in results.”
- “AI is dangerously seductive because it’s good enough—but creative leaps don’t come from averages.”
- “Social media are envy-generating machines.”
- “Trust is the hidden variable of happy societies.”
- “Technology promises time, but unstructured time doesn’t make us happier—meaning does.”

Data points mentioned:
- U-shaped happiness across life; Gen Z may be an exception (smartphone ubiquity + pandemic).
- U.S. trust reversal: ~1960s ≈ two-thirds said “most people can be trusted”; recent polls ≈ two-thirds say the opposite.
- Easterlin paradox: happiness rises with income only up to a point.
- Gen Z snapshots (Harvard/Baylor cited in convo): ~58% lack meaning; ~56% financial concern; ~45% “things falling apart”; ~34% lonely.
A conversation with Stella Morabito on how the weaponization of loneliness—from Soviet propaganda to modern social media—threatens free speech, family, and community.

👤 Guest Bio
Stella Morabito – Writer and former CIA intelligence analyst specializing in Soviet propaganda and media during the 1980s. She is the author of The Weaponization of Loneliness: How Tyrants Stoke Our Fear of Isolation to Silence, Divide, and Conquer (2022) and a senior contributor at The Federalist.

📌 Topics Discussed
- Morabito’s CIA background analyzing Soviet propaganda
- The concept of the “machinery of loneliness” and how tyrants exploit fear of isolation
- The pandemic as a “dress rehearsal” for social control and social credit systems
- Education, political correctness, and social media as tools of conformity
- Yuri Bezmenov’s four stages of ideological subversion
- The role of “almost psychopaths” in totalitarian movements
- Attacks on family, motherhood, and masculinity as destabilizing forces
- Gen Z’s shifting attitudes toward faith, family, and community
- Building mediating institutions (family, faith, friendship) to resist centralization

💡 Main Points
- Fear of isolation is a powerful tool used by tyrants throughout history, from the French Revolution to Mao’s Cultural Revolution.
- The pandemic revealed how easily fear could be weaponized to enforce conformity, resembling China’s social credit system.
- Education and media are central targets because they credential all other institutions and shape entire generations.
- Social media extends peer pressure 24/7, worsening youth mental health and magnifying political correctness.
- “Almost psychopaths” rationalize cruelty under pseudo-religions or ideologies and become enforcers of totalitarian conformity.
- Mediating institutions—family, faith, and community—are the strongest antidote to centralized control.
- Gen Z shows promise in resisting mainstream narratives and seeking meaning through faith and family, partly due to disillusionment from the pandemic.

🗣️ Top 3 Quotes
- “The fear of isolation is hardwired into us… and it makes us not only miserable creatures, but easily manipulated.”
- “Free speech is a use-it-or-lose-it proposition. Once we stop speaking freely, we lose it.”
- “The ultimate goal of totalitarians is not money—it’s to control the mediating institutions of family, faith, and friendship.”
Neuroscientist explains why school crushes creativity—and how to fix it—teaching "primal intelligence" and special-operations tactics you can use at work, at home, and in the classroom to think and innovate better.

Guest Bio:
Dr. Angus Fletcher is a neuroscientist and professor of Story Science at The Ohio State University. He studies how intuition, imagination, emotion, and common sense work in the brain and advises U.S. Special Operations, Fortune 50 firms, and schools on creativity and resilience. His new book is Primal Intelligence: You Are Smarter Than You Know.

Topics Discussed:
- Creativity decline starting ~3rd grade; standardized testing & sit-still schooling
- Data vs. volatile reality; limits of AI/logic vs. human neural tools
- Special Operations creativity pipeline; training vs. selection
- "Why"-free inquiry (who/what/when/where/how) to deepen relationships & learning
- Unlearning dependency on external answers; experiential learning
- Personal story as plan/plot; fear, anxiety, and outsourcing your story
- Jobs, Shakespeare, and intensifying uniqueness; innovation beyond "grind" and "hack"
- "Eat your enemy": learning asymmetrically from competitors
- Medication, signals, and growth; tuning anxiety as a sensor
- Myths like left-brain/right-brain; labels vs. open-ended growth

Main Points:
- Schooling often conditions "there's a right answer and the teacher has it," which suppresses creativity and initiative.
- Data predicts yesterday; real life is volatile. Human neurons support non-computational tools—intuition, imagination, common sense—vital for innovation.
- Creativity can be trained: Special Ops methods and experiential learning reliably build it.
- Skip "why" in discovery conversations to avoid premature judgments; stay curious with who/what/when/where/how.
- Reclaim your personal story; fear pushes people to borrow others' plans, which erodes meaning.
- Innovation strategy: identify exceptions and intensify them (Jobs), and "eat your enemy" by absorbing rivals' unique strengths.
- Emotions are signals; meds can be triage, but durable growth comes from engaging hard experiences.
- Left/right-brain personality labels are misleading; biological growth thrives on branching diversity.

Top Quotes:
- "School trains kids to solve math problems, not life problems."
- "Skip the 'why'—the moment you jump to why, you stop learning."
- "Your story is your plan. Fear makes you outsource it."
- "Anxiety is a calibrated sensor, not a flaw."
Homeowner advocate Shelly Marshall explains why many HOAs function like private governments—often stripping owners' rights—and how to protect yourself (or avoid them entirely).

Guest bio:
Shelly Marshall is a homeowner advocate and author of HOA Warrior. After battling abusive HOA boards in her own community, she's spent 15+ years researching HOA law, advising homeowners, and pushing for reforms nationwide. She can be reached at info@hoawarrior.com and hoawarrior.com.

Topics discussed:
- How Shelly became an HOA advocate after a hostile board takeover
- Boards changing rules without homeowner votes; covenant enforcement gaps
- Liens, fines, special assessments, and foreclosure risk
- Why management companies and industry trade groups (e.g., CAI) shape incentives
- Legal exposure: joint liability, collateralization, and lack of transparency
- Horror stories: lawns, hoses, swing sets, condemned structures, and jail time
- Buying vs. renting; LLCs for limited protection; why "one election away from disaster"
- What due diligence (doesn't) solve; legislative reform efforts and limits
- Practical survival tips if you're already in an HOA

Main points / takeaways:
- Buying into an HOA is entering a business partnership with neighbors; your property can be leveraged, and you share liabilities.
- Boards often wield broad power, sometimes changing or selectively enforcing rules with limited transparency.
- Fines, fees, and special assessments can exceed mortgages and trigger foreclosures—even for minor "violations."
- Industry actors (management companies, banks, attorneys) have financial incentives that can work against homeowners.
- Litigation is costly and asymmetric; few attorneys take homeowner cases.
- If you must buy, an LLC (cash purchase) offers better protection; otherwise, renting avoids systemic risks.
- If you're already in an HOA: pay first, appeal later; avoid being labeled a "troublemaker"; document everything.
- Legislative fixes help only marginally; structural incentives remain misaligned.

Top quotes:
- "You don't buy a home in an HOA—you buy into a business with all your neighbors."
- "They can change the rules after you've moved in, often without your vote."
- "One election away from disaster—every single time."
- "Your house can become collateral for loans you didn't know existed."
- "Pay the fine first, fight later—escalation is how homeowners lose homes."
- "My advice? Don't buy into an HOA. If you must live there, rent."
A spirited debate between Chadwick Turner and Emmanuel Maggiori on whether AI is a transformative technology or overhyped disruption, exploring its impact on jobs, society, and the economy.

👥 Guest Bios
- Dr. Emmanuel Maggiori – London-based software engineer, writer, and speaker. Author of Smart Until It's Dumb, Siliconned, and The AI Pocketbook. Has spent a decade building machine learning systems for large-scale applications.
- Chadwick Turner – Seattle-based creative technologist and strategist, founder of Burnpiles, a consultancy helping organizations innovate with AI, immersive media, and digital strategy. Formerly led business development at Amazon and Meta.

🗂️ Topics Discussed
- Hype vs. reality of AI as transformative vs. disruptive technology
- Historical parallels with VR, no-code, and industrial revolutions
- AI's limitations: hallucinations, lack of extrapolation, long-tail problem
- Job disruption: automation, creative agencies, translators, paralegals, truckers
- Economic theory of production, labor, and technology's role in growth
- Education: cognitive decline, plagiarism, and assessment challenges
- AI plateaus: "peak AI" without methodological breakthroughs
- Business realities: building sustainable products vs. hype-driven failures

💡 Main Points
- Chadwick's Position – AI is likely the most disruptive technology in history, with potential 10/10 impact if breakthroughs arrive. Even at today's plateau, it will reshape industries, automate repetitive work, and disrupt the economy.
- Emmanuel's Position – AI is overhyped and limited by methodological flaws (hallucinations, lack of reasoning). Impact is real but moderate (4/10), closer to previous overhyped tech cycles. Most jobs won't be fully automated away.
- Overlap – Both agree that repetitive, low-stakes jobs are most at risk; that businesses often misunderstand AI's limits; and that future resilience requires critical thinking, adaptability, and business strategy, not just technical skills.

🔑 Top 3 Quotes
- Chadwick: "This is the first time we're actually going into the keep of society—the human mind, repetitive processes, thinking capabilities. We've never had a technology like that at this scale."
- Emmanuel: "AI learns by repetition—it's good at interpolating, not extrapolating. Without a new methodology, hallucinations and long-tail failures won't be solved."
- Chadwick: "Content isn't king. Great content is king. Same with software—plenty of tools exist, but only compelling, well-executed ideas will win."
Pulitzer Prize–winning journalist Gary Rivlin discusses his book AI Valley, exploring Silicon Valley's AI hype cycle, the dominance of tech giants, and the venture capital forces shaping the industry.

Guest Bio:
Gary Rivlin is a Pulitzer Prize–winning investigative reporter and author of eleven books, including AI Valley: Microsoft, Google, and the Trillion-Dollar Race to Cash In on Artificial Intelligence. He has covered Silicon Valley since the mid-1990s and has written extensively on technology, venture capital, inequality, and politics.

Topics Discussed:
- Parallels between the dot-com boom and the AI hype cycle
- The explosion of venture capital funding for AI startups
- How media coverage of tech has shifted from hero worship to skepticism
- Why only the biggest companies (Microsoft, Google, Meta) can afford large AI models
- The outsized role of VCs like Marc Andreessen and Reid Hoffman
- Surveillance capitalism vs. scientific breakthroughs as AI use cases
- Winners and losers in the AI race, and who benefits financially
- The risks of hype, inequality, and AI's impact on jobs and education

Main Points:
- AI is following the same hype trajectory as the internet in the 1990s, with massive VC money, inflated valuations, and inevitable failures.
- The cost of AI models (data, chips, talent) locks out small startups, concentrating power in mega-corporations.
- VCs hype AI doom/utopia narratives to justify billion-dollar bets, while everyday adoption remains slow.
- AI could bring real benefits in science, medicine, and tutoring, but also risks reinforcing surveillance, bias, and inequality.
- The likely "winners" are the big tech companies selling both AI products and the "shovels" (cloud/data infrastructure).

Top 3 Quotes:
- "Some great things can come from all this money—but a lot of it is going to go up in smoke."
- "AI isn't laser-eyed robots taking over. What we should worry about is surveillance, bias, and the jobs it's already erasing."
- "It's scary that a small group of technologists, CEOs, and VCs in Silicon Valley are driving AI for the whole world."
Marketing lecturer & former Fortune 100 exec Melise Panetta discusses how AI is reshaping entry-level jobs, Gen Z's career prospects, and the future of skills and education.

GUEST BIO:
Melise Panetta is a lecturer in marketing at Wilfrid Laurier University's Lazaridis School of Business and Economics and a former Fortune 100 executive with over 20 years of global leadership experience. She is the founder of Brand U and an expert in consumer behavior, corporate strategy, and preparing the next generation of business leaders.

Topics discussed:
- Descript vs. Final Cut Pro for podcast editing workflows
- AI's disruption of entry-level jobs and internships
- Which skills are automatable vs. "AI-resistant" (emotional intelligence, critical thinking, ethics)
- Gen Z's fears and strategies around entering the workforce
- WEF jobs report: 92M jobs lost, 170M created, net 78M gain
- Growth fields: energy, cybersecurity, engineering, creative strategy
- Career planning for Gen Z: choosing majors, skillsets, ROI of degrees
- Oversupply in tech degrees vs. shortage in healthcare/education
- Outsourcing vs. AI replacement and global job reshuffling
- Broader impacts on inequality, branding oneself, and mid-level career development

Main points:
- AI will shrink but not erase entry-level roles; competition will increase.
- The most at-risk skills are routine, programmable, and repetitive tasks; more resistant skills involve human judgment and collaboration.
- The real shift is a "reshuffling" of work, with job creation in energy, cybersecurity, and creative strategy.
- Students must weigh ROI when choosing majors, using labor market trends to guide decisions.
- Outsourcing and oversupply (especially in tech) may matter more than AI replacement.
- Gen Z should focus on adaptability, branding, and skill-building to stay competitive.

Top 3 quotes:
- "Roles that require skills that are highly automatic, programmable—those are the ones at higher risk. The opposite are what we call AI-resistant skills: emotional intelligence, complex critical thinking, interpersonal collaboration."
- "It's not that jobs are going away—it's a major reshuffling. Entry-level roles are retracting, while fields like energy production, cybersecurity, and creative design expand."
- "Don't make an $80,000 investment without a very clear idea of what your ROI is going to be coming out of it."
A deep dive with Dr. Jeffrey Funk on AI hype, startup bubbles, Gen Z's job struggles, and the broken higher education system.

Guest Bio:
Dr. Jeffrey Funk is a retired technology economist and former university professor in Japan and Singapore. He specializes in innovation, startup bubbles, and the economic effects of emerging technologies, and is the author of Unicorns, Hype, and Bubbles: A Guide to Spotting, Avoiding, and Exploiting Bubbles in Tech.

Topics Discussed:
- The hype and financial unsustainability of OpenAI, Anthropic, and cloud providers
- Microsoft and Anthropic's pricing strategies and a looming AI bubble collapse
- Gen Z job market struggles, declining college enrollment, and university failures
- AI "boosters vs. doomers" vs. skeptics on the "edge of the coin"
- AI hype, fraud, and legal risks of "AI washing"
- Why AI fails at coding, medicine, and self-driving cars
- Zero interest rate policy (ZIRP) and its role in fueling startup and AI bubbles
- The dead internet theory, bots, and the collapse of online authenticity
- Higher education's decline, misplaced incentives, and need for reform

Main Points:
- AI hype is financially unsustainable—companies like OpenAI and Anthropic are pricing their products below cost, subsidizing massive cloud bills.
- College graduates, especially Gen Z, are struggling in the job market due to declining education quality, reliance on ChatGPT, and employer skepticism.
- The AI "booster vs. doomer" debate misses the point; most real-world applications are limited, overhyped, and decades away from true impact.
- Many supposed "AI breakthroughs" (self-driving cars, AI doctors, coding copilots) hide human intervention or show slower results than advertised.
- Universities focus on publishing papers rather than solving problems, producing entitled graduates unprepared for real-world work.
- The internet itself is degrading, with bots, fake engagement, and algorithm manipulation creating a hollow online experience.
- The future belongs to those who solve problems, not those who hype technology.

Top 3 Quotes:
- "Altman wants to talk about how everybody uses it—well, everybody uses it because he's pricing it below cost."
- "AI isn't replacing coders; it's making them 19% slower because debugging AI's mistakes takes longer than fixing your own."
- "Don't just talk about problems—solve them. If you focus on solving problems, you will succeed, because most people aren't."
An in-depth discussion with legal scholar Jeffrey Seaman debunking popular myths about mass incarceration, examining crime clearance rates and sentencing trends, and exploring justice-focused reforms.

Guest bio:
Jeffrey Seaman is a Levy Scholar at the University of Pennsylvania Law School, researcher, and co-author of Confronting Failures of Justice. His work focuses on criminal justice policy, sentencing reform, and aligning the system with community standards of justice.

Topics discussed:
- Myths vs. facts about U.S. incarceration rates
- The small role of low-level drug offenders in prison populations
- Declining crime clearance rates and their public safety impact
- Sentencing trends since the 1960s and public opinion on appropriate punishment
- Repeat offenders, leniency, and juvenile justice failures
- International comparisons and the moral credibility of the law
- The potential of "electronic prison" as a cost-effective alternative to incarceration
- Balancing defendants' rights with victims' rights
- Political shifts in crime policy and public opinion
- Historical parallels with Prohibition and lessons for modern reform

Three best quotes:
- "The average offender doesn't feel deterred until they perceive a 30% chance of being caught—and for most crimes, we're nowhere near that."
- "Most people in prison today have had five, ten, even fifteen prior chances; the idea that they're first-time offenders is a myth."
- "If the law gets out of sync with what the community believes is just, you lose moral credibility—and with it, compliance, cooperation, and safety."
Graham Hillard, editor at the James G. Martin Center for Academic Renewal, discusses the rapid professionalization of college sports under NIL, the legal chaos reshaping athletics, and the uncertain future of the NCAA's role.

Guest bio:
Graham Hillard is the editor at the James G. Martin Center for Academic Renewal and a contributing writer for Washington Examiner magazine. He writes on higher education, athletics, and public policy, with a focus on costs, governance, and legal trends.

Topics discussed:
- NIL (Name, Image, Likeness) payments and the House v. NCAA settlement
- Professionalization of college football and men's basketball
- Antitrust rulings (NCAA v. Alston) and their ripple effects
- Potential spinoffs of athletic programs into for-profit entities (e.g., the Kentucky model)
- Title IX implications for revenue sharing
- Economic sustainability of non-revenue sports
- The growing role of courts in regulating college athletics
- Fan experience in the NIL era
- Potential super leagues and conference realignment
- Employee status for athletes and possible collective bargaining
- Donor influence and university politics in athletic decisions

Main points:
- College football and men's basketball are moving toward an NFL-style salary cap model, with NIL and direct university payments legalizing player compensation.
- The NCAA's authority is eroding, and many governance questions are now being decided in the courts through high-profile lawsuits.
- Only a small percentage of athletes will significantly benefit from NIL, while most may lose the scholarship-based perks they previously enjoyed.
- Title IX could require revenue-sharing with women's sports, creating complex financial and recruiting implications.
- Schools may eventually split: a "super league" for money sports, and an amateur model for others.

Top 3 quotes:
- "College football has to start where the NFL was in 1930—none of the business rules are in place yet, and it's the wild west out there."
- "We just ruined the whole thing to make 1,000 eighteen-year-olds millionaires, and it wasn't worth it."
- "If we're going to treat high-dollar college athletes as professionals, then they have to honor their contracts—this fast-and-loose system is not tenable."