Bare Knuckles and Brass Tacks
Author: BKBT Productions
© BKBT Production LLC
Description
Bare Knuckles and Brass Tacks is the tech podcast about humans. Hosted by George K and George A, this podcast examines AI, infrastructure, technology adoption, and the broader implications of tech developments through both guest interviews and news commentary.
Our guests bring honest perspectives on what's working, what's broken, and new ways to examine the roles and impacts of technology in our lives.
We challenge conventional tech industry narratives and dig into real-world consequences over hype. Whether you're deeply technical or just trying to understand how technology shapes society, this show will make you think critically about where we're headed and who's getting left behind.
169 Episodes
We need to stop treating our data like something to be stored and start treating it like a mission-critical supply line.
Andrew Schoka [https://www.linkedin.com/in/andrew-schoka/] spent his military career in offensive cyber, including stints in the Joint Operations Command and Cyber Command. Now he's building Hardshell to solve a problem most organizations don't even realize they have yet.
Here's the thing: AI is phenomenal at solving problems in places where data is incredibly sensitive. Healthcare, financial services, defense—these are exactly where AI could make the biggest impact. But there's a problem.
Your ML models have a funny habit of remembering training data exactly how it went in. Then regurgitating it. Which is great until it's someone's medical records or financial information or classified intelligence.
Andrew makes a crucial point: organizations still think of data as a byproduct of operations—something that goes into folders and filing cabinets. But with machine learning, data isn't a byproduct anymore. It's a critical supply line operating at speed and scale.
The question isn't whether your models will be targeted. It's whether you're protecting the data they train and interpret like the supply lines they actually are.
Mentioned:
* Destruction of classified tech in downed helicopter during Osama bin Laden raid [https://www.britannica.com/event/Killing-of-Osama-bin-Laden]
Are we sleepwalking into a security crisis that makes ransomware look quaint?
Nuclear security expert Audrey Crowe joins the show to talk about the convergence of grey zone warfare, critical infrastructure, and nuclear security. This isn't your parents' Cold War nuclear threat; this is about adversaries who've figured out they don't need missiles when they can manipulate our infrastructure through cyber operations, disinformation, and coercion that lives in the murky space below armed conflict.
While our adversaries operate in the grey zone with zero institutional friction, democratic nations tie themselves in bureaucratic knots. We demand attribution, legal frameworks, and perfect evidence before we can even acknowledge a threat. It's like showing up to a knife fight with a permission slip.
Audrey walks us through how Stuxnet changed everything, why the nuclear sector spans energy, transportation, healthcare, and government regulation, and why she's on a mission to get nuclear industry stakeholders to share more information with one another.
We also get into the elephant in the room: Big Tech's sudden hunger for nuclear power to feed AI data centers. When profit-driven actors start controlling nuclear infrastructure, will safety remain sacred? Or will we sacrifice long-term security for short-term computational power?
What if the real AI revolution isn't about better models—but about unlocking the data we've been sitting on?
Mike McLaughlin [https://www.linkedin.com/in/michael-g-mclaughlin/]—cybersecurity and data privacy attorney, former US Cyber Command—joins us to discuss something most people miss in the AI conversation: we're building the infrastructure for a completely new asset class.
The conversation moves past today's headlines and LLM limitations into what becomes possible when we solve the data access problem:
Research acceleration at unprecedented scale. Imagine biotech startups accessing decades of pharmaceutical failure data, every null result, every experiment that didn't work. That's years cut from development cycles. That's drugs to market faster. That's lives saved.
Universities as innovation accelerators. Right now, research institutions pay to store petabytes of data collecting dust on servers. Mike argues they're sitting on billions in untapped assets to fuel innovation.
Beyond synthetic training. The next generation of AI won't be trained on Reddit threads and scraped websites. It'll be trained on high-quality, provenance-verified research data from institutions that have incentive to participate in the ecosystem.
Mike's vision isn't just about compliance or risk mitigation. It's about creating the conditions for AI to actually deliver on the promise everyone keeps talking about. The compute exists. The capital exists. The models are improving. What we need now is the mechanism to turn decades of institutional research into fuel for the next wave of moonshot innovation.
Mentioned:
Google licensing deal with Reddit [https://www.reuters.com/technology/reddit-ai-content-licensing-deal-with-google-sources-say-2024-02-22/]
Poisoning Attacks on LLMs Require a Near-constant Number of Poison Samples [https://arxiv.org/abs/2510.07192]
MIT researchers discover new class of antibiotics using machine learning [https://news.mit.edu/2023/using-ai-mit-researchers-identify-antibiotic-candidates-1220]
Reducing bacterial infections from hospital catheters using machine learning [https://www.caltech.edu/about/news/aided-by-ai-new-catheter-design-prevents-bacterial-infections]
When did we stop asking how things work? Rich Greene joins the show to talk about his new podcast Plaintext with Rich [https://open.spotify.com/show/2DCglwZU8zBxzZgy8iHRCa], and we get into something that matters more than any tech: curiosity itself.
Rich spent 20 years in Special Operations before becoming a SANS instructor. Now he's taking complex tech topics and breaking them down for people who need to understand, not just use.
There's a tension building between the tech sector and society at large. AI promises to make everything easier, and maybe it does. But easier can become a trap when it stops us from asking the fundamental questions. When convenience replaces comprehension, we don't just lose technical skill, we lose the ability to think critically about the systems we're building and trusting.
The conversation pushes on a deeper problem: we're creating a generation that believes technology is magic. That you can "vibe code" production software. That prompts replace understanding. And when everything becomes a black box, we've surrendered more than we realize.
Communication and curiosity - those are the skills that matter when the tools change every six months.
Find Plaintext with Rich:
* Spotify [https://open.spotify.com/show/2DCglwZU8zBxzZgy8iHRCa]
* Apple Podcasts [https://podcasts.apple.com/us/podcast/plaintext-with-rich/id1864969176]
* Blog on Medium [https://medium.com/@plaintextwithrich]
Wishing you a very happy and prosperous New Year!
We'll be back in 2026!
It's a holiday week, so turn off this podcast!
But if you'd like to tune in all the same, then we're here to say thank you. You, the listeners, have been the greatest gift this season as we've made this turn in our format from security to looking more broadly at the human impact of technology.
You've stuck with us. We've gotten a lot of great messages of support, and we love the direction of the show and love that you love it!
Happy holidays from BKBT! May your time off be peaceful and energizing for the new year.
We're off this week, deep into planning and scheduling for next year. Please enjoy this Best Of episode, originally released in October.
Hannah Storey, Advocacy and Policy Advisor at Amnesty International [https://www.amnesty.org/], joins the show to talk about her new brief that reframes Big Tech monopolies as a human rights crisis, not just a market competition problem.
This isn't about consumer choice or antitrust law. It's about how concentrated market power violates fundamental rights—freedom of expression, privacy, and the right to hold views without interference or manipulation.
Can you make a human rights case against Big Tech? Why civil society needed to stop asking these companies to fix themselves and start demanding structural change. What happens when regulation alone won't work because the companies have massive influence over the regulators?
Is Big Tech actually innovating anymore? Or are they just buying up competition and locking down alternatives? Does scale drive progress, or does it strangle it?
What would real accountability look like? Should companies be required to embed human rights due diligence into product development from the beginning?
Are we making the same mistakes with AI? Why is generative AI rolling forward without anyone asking about water usage for data centers, labor exploitation of data labelers, or discriminatory outcomes?
The goal isn't tweaking the current system—it's building a more diverse internet with actual options and less control by fewer companies.
If you've been tracking Big Tech issues in silos—privacy here, misinformation there, market dominance over here—this episode is an attempt to bring those conversations together in one framework.
Mentioned:
Read more about the Amnesty International report and download the full report here: "Breaking Up with Big Tech: a Human Rights-Based Argument for Tackling Big Tech's Market Power" [https://www.amnesty.org/en/documents/pol30/0226/2025/en/]
Speech AI model helps preserve indigenous languages [https://it-online.co.za/2024/01/22/speech-ai-model-helps-preserve-indigenous-languages]
Empire of AI, [https://www.penguinrandomhouse.com/books/743569/empire-of-ai-by-karen-hao/] by Karen Hao
Cory Doctorow's new book, "Enshittification: Why Everything Suddenly Got Worse and What To Do About It" [https://www.versobooks.com/products/3341-enshittification]
2025 was hella weird. The AI revolution is here whether we asked for it or not. This week, George K and George A reflect on the year and what it means for 2026.
At AWS re:Invent, George A watched a machine create a custom fragrance and marketing campaign in real time from a voice prompt. What does that portend for product prototyping and scaled manufacturing?
Could voice and natural language finally replace typing as the primary interface? We're watching the biggest shift in human-computer interaction since the mouse.
Worldwide AI adoption isn't hype anymore—it's happening, and it's happening unevenly. Some enterprises are getting serious and some are still noodling. The tools are maturing. The question has shifted from "if" to "how do we do this responsibly."
There are serious questions to answer. GPU lifecycles. The Magnificent Seven's circular financing models. The human cost of moving this fast. But that's the work—building technology that serves us instead of the other way around.
The revolution came. Now comes the interesting part: what we actually build with it.
2026 is going to be wild. We remain up to the challenge.
Mentioned:
* Brookings Institution, "New data show no AI jobs apocalypse—for now" [https://www.brookings.edu/articles/new-data-show-no-ai-jobs-apocalypse-for-now/]
* Discussed in further detail with Ethan Mollick on Your Undivided Attention [https://open.spotify.com/episode/7tVke0Fuo6WSssgJ4eUDQa?si=UXKBrqKyR2acZqVl3UqXwA]
* Reid Hoffman's interview with Wispr Flow founder/CEO Tanay Kothari [https://open.spotify.com/episode/7AxM51x61saSou9M1GkYwk?si=h7yIQa3fSrWyT2E4B8jYJQ]
* More on Coreweave's financing model at The Verge [https://www.theverge.com/2023/8/8/23824661/coreweave-nvidia-debt-gpu-ai-chips-collateral?utm_source=podcast&utm_id=bareknucklesbrasstacks]
What if your biggest career obstacle isn't external—it's the "broken code" running in your own head?
Rachelle Tanguay joins the show to unpack the difference between consuming self-help content and actually doing the uncomfortable work of rewiring your internal programming.
From advising deputy ministers to coaching professionals across sectors, she's seen what happens when high-performers hit the wall between knowing what to do and actually being able to execute.
This conversation cuts through the dopamine-hit culture of five-minute reels and quick fixes. Rachelle breaks down why most people confuse consuming content with doing the work, how imposter syndrome is not your own voice "chirping in your ear," and why even the most senior leaders need help to see the forest for the trees.
If you've ever wondered why smart people with all the right information still can't break through their own barriers, this episode is for you. No buzzwords, no corporate speak—just an honest look at what it takes to level up when the real bottleneck is you.
Mentioned:
71% of US CEOs experience imposter syndrome, new Korn Ferry research finds [https://www.kornferry.com/about-us/press/71percent-of-us-ceos-experience-imposter-syndrome-new-korn-ferry-research-finds]
Solve for Happy, by Mo Gawdat [https://www.mogawdat.com/solve-for-happy]
Atomic Habits, by James Clear [https://jamesclear.com/atomic-habits]
Graeme Rudd spent years taking emerging technology into austere environments and watching it fail.
He assessed over 350 technologies for a Department of Defense lab. The pattern was consistent: engineers solved technical problems brilliantly. Lawyers checked compliance boxes. Leadership approved budgets. And the technology failed because no one was looking at how humans would actually use it.
So he went to law school—not to practice law, but because solving technology problems requires understanding legal systems, human behavior, operational constraints, and business incentives simultaneously.
Now he works on AI governance, and the stakes are higher. "Ship it and patch later" becomes catastrophic when AI sits on top of your data and can manipulate it. You need engineers, lawyers, operators, and the people who'll actually use the system—especially junior employees who spot failure points leadership never sees—in the room together before you deploy.
This conversation is about why single-discipline thinking fails when technology intersects with humans under stress.
Why pre-mortems with your most junior people matter more than post-mortems with experts.
Why the multidisciplinary approach isn't just nice to have—it's the only way to answer the question that matters:
Does it work when a human being needs to operate it under conditions you didn't anticipate?
What happens when you go all in and bet on yourself?
Taylor McClatchie, professional Muay Thai fighter with ONE Championship, joins the show to share how she did just that.
She spent a decade in reproductive science, working in a lab. Then she walked away from it all to turn her pastime into her profession. Went 20-0 as an amateur. Made her pro debut at Madison Square Garden with a head kick knockout. Has competed 65 times—exceptionally rare for a North American woman in combat sports.
This episode isn't about technology. It's about what happens when you stop following the prescribed steps and start building a life around what actually matters to you.
Taylor didn't fall in love with winning. She fell in love with the process. With adding one more piece to training camp—sprints, nutrition coaching, strength work—and never taking them away. With waking up and doing it again.
She talks about needing three types of sparring partners: people worse than you to test new skills, people at your level to compete with, and people better than you just to survive the round. "I never want to be the best person in the room because what am I getting from beating up on the new kids?"
The parallel to our industry is unavoidable. You can't grow if you're always punching down. You need to be uncomfortable. You need rounds where you're just trying to survive.
We spend a lot of time on this show questioning whether technology actually serves human interests. Sometimes the best lessons come from outside our world entirely—from someone willing to abandon the expected path to pursue something real.
Eric Pilkington joins the show to cut through the noise around artificial intelligence and deliver some hard truths about what's actually working—and what's just expensive theater.
AI isn't new; it's been around for 70+ years. The current generative AI boom is democratization, not innovation—and 95% of AI projects are still failing.
Startups with no product, no customers, and no revenue are raising $30-100 million. Companies are getting massive funding without a single dollar of revenue.
The real AI leaders aren't the loudest voices on conference stages. They're the ones quietly embedding AI into workflows, building better products, and closing the gap between pilots and actual impact.
Most companies chase cost savings instead of using AI to drive top-line growth. You can't cut your way to growth. Real business transformation comes from understanding the actual problems you're solving, not from chasing the newest shiny object. The superheroes of AI aren't prognosticating on stages—they're in garages and labs building things that'll matter five years from now.
Mentioned:
MIT Study on failure of AI pilots in business [https://mlq.ai/media/quarterly_decks/v0.1_State_of_AI_in_Business_2025_Report.pdf]
Hunter Grad, CEO and founder of Ameliogenix [https://ameliogenix.ca/], joins the show to talk about developing mRNA immunotherapies for cardiovascular disease. George K and George A sit down with Hunter to discuss:
* How a procrastinated university project turned into a biotech startup tackling the leading cause of death worldwide
* The novel application of mRNA technology to permanently reduce cholesterol levels through targeting proteins within the body rather than viral diseases
* What it takes to bootstrap a biotech company in Ottawa, not Silicon Valley
* The brutal realities of fundraising in biotech versus software startups, and why pivoting isn't always an option when lives are on the line
* Clearing up the myths and misinformation around mRNA technology, from how it actually works to addressing fertility concerns
* The role of machine learning in accelerating biotech research and drug discovery, and why quality data matters more than flashy AI hype
Hunter breaks down complex immunology concepts into digestible explanations while sharing the raw challenges of being a young founder in a traditionally academic-led industry. This episode explores innovation at the intersection of technology and medicine, the importance of rigorous science over buzzwords, and what it means to swing for the fences on a problem that affects 2 billion people worldwide.
Mentioned:
Using AI, MIT researchers identify a new class of antibiotic candidates [https://news.mit.edu/2023/using-ai-mit-researchers-identify-antibiotic-candidates-1220]
Hannah Storey, Advocacy and Policy Advisor at Amnesty International [https://www.amnesty.org], joins the show to talk about her new brief that reframes Big Tech monopolies as a human rights crisis, not just a market competition problem.
This isn't about consumer choice or antitrust law. It's about how concentrated market power violates fundamental rights—freedom of expression, privacy, and the right to hold views without interference or manipulation.
Can you make a human rights case against Big Tech? Why civil society needed to stop asking these companies to fix themselves and start demanding structural change. What happens when regulation alone won't work because the companies have massive influence over the regulators?
Is Big Tech actually innovating anymore? Or are they just buying up competition and locking down alternatives? Does scale drive progress, or does it strangle it?
What would real accountability look like? Should companies be required to embed human rights due diligence into product development from the beginning?
Are we making the same mistakes with AI? Why is generative AI rolling forward without anyone asking about water usage for data centers, labor exploitation of data labelers, or discriminatory outcomes?
The goal isn't tweaking the current system—it's building a more diverse internet with actual options and less control by fewer companies.
If you've been tracking Big Tech issues in silos—privacy here, misinformation there, market dominance over here—this episode is an attempt to bring those conversations together in one framework.
Mentioned:
Read more about the Amnesty International report and download the full report here: "Breaking Up with Big Tech: a Human Rights-Based Argument for Tackling Big Tech's Market Power" [https://www.amnesty.org/en/documents/pol30/0226/2025/en/]
Speech AI model helps preserve indigenous languages [https://it-online.co.za/2024/01/22/speech-ai-model-helps-preserve-indigenous-languages]
Empire of AI, [https://www.penguinrandomhouse.com/books/743569/empire-of-ai-by-karen-hao/] by Karen Hao
Cory Doctorow's new book, "Enshittification: Why Everything Suddenly Got Worse and What To Do About It" [https://www.versobooks.com/products/3341-enshittification]
Clinical psychologist Dr. Sarah Adler joins the show this week to talk about why "AI therapy" doesn't exist, even as she remains bullish on what AI can help therapists achieve.
Dr. Adler is a clinical psychologist and CEO of Wave [https://www.wavelife.io/]. She's building AI tools for mental healthcare, which makes her position clear—what's being sold as "AI therapy" right now is dangerous.
Chatbots are optimized to keep conversations going. Therapy is designed to build skills within bounded timeframes. Engagement is not therapy. Instead, Dr. Adler sees AI as a powerful recommendation engine and measurement tool, not as a therapist.
George K and George A talk to Dr. Adler about what Ethical AI looks like, the model architecture for personalized care, who bears responsibility and liability, and more.
The goal isn't replacing human therapists. It's precision routing—matching people to the right care pathway at the right time. But proving this works requires years of rigorous study. Controlled trials, multiple populations, long-term tracking. That research hasn't been done.
Dr. Adler also provides considerations and litmus tests you can use to discern snake oil from real care.
Mental healthcare needs innovation. But you cannot move fast and break things when it comes to human lives.
Mentioned:
A Theory of Zoom Fatigue [https://theconvivialsociety.substack.com/p/a-theory-of-zoom-fatigue]
Kashmir Hill's detailed reporting on Adam Raine's death and the part played by ChatGPT [https://www.nytimes.com/2025/08/26/technology/chatgpt-openai-suicide.html?unlocked_article_code=1.nU8.DE_a.Ur81NxfjZuNn&smid=url-share]
(Warning: detailed discussion of suicide)
Colorado parents sue Character AI over daughter's suicide [https://www.cbsnews.com/colorado/news/lawsuit-characterai-chatbot-colorado-suicide/]
Sewell Setzer's parents sue Character AI [https://apnews.com/article/chatbot-ai-lawsuit-suicide-teen-artificial-intelligence-9d48adc572100822fdbc3c90d1456bd0]
Deloitte to pay money back after being caught using AI in $440,000 report [https://www.theguardian.com/australia-news/2025/oct/06/deloitte-to-pay-money-back-to-albanese-government-after-using-ai-in-440000-report]
This week George K and George A switch formats to tackle the AI revolution's messiest questions—from autonomous coding agents to digital actresses and deepfake scams.
The hosts examine what happens when innovation moves faster than ethics. When Claude Sonnet 4.5 promises 30 hours of autonomous coding, what's the real trade-off between productivity gains and security fundamentals? When talent agencies want to represent AI-generated actresses, are we witnessing the death of human performance art or just another moral panic? And when Brazilian scammers can steal millions in $19 increments using celebrity deepfakes, who bears responsibility—the platforms, the regulators, or the users?
They explore the uncomfortable economics behind AI video generation, where companies promised to cure cancer but instead delivered infinite dopamine-mining slop. The conversation digs into data center energy consumption, the exploitation of human attention, and why your grandmother clicking Facebook ads might represent democracy's newest vulnerability.
George A brings a practitioner's lens to AI governance, arguing for education from elementary school up, metadata standards for content authenticity, and balanced regulation that protects innovation without enabling exploitation. George K challenges the fundamental premise: if supercomputers are being pointed at our dopamine receptors just to sell more ads, what happened to building technology that actually improves human life?
Most importantly, they ask: Are we building applications that create a better future, or are we just doubling down on the attention economy?
News examined:
* Anthropic releases Claude Sonnet 4.5 in latest bid for AI agents [https://www.theverge.com/ai-artificial-intelligence/787524/anthropic-releases-claude-sonnet-4-5-in-latest-bid-for-ai-agents-and-coding-supremacy]
* Emily Blunt among Hollywood stars outraged over 'AI actor' Tilly Norwood [https://www.bbc.com/news/articles/c99glvn5870o]
* AI: Meta, Google & OpenAI lean into AI Generated Social Videos [https://michaelparekh.substack.com/p/ai-meta-google-and-openai-lean-into]
* Brazilian scammers, raking in millions, used Gisele Bundchen deepfakes on Instagram ads [https://www.reuters.com/world/americas/brazilian-scammers-raking-millions-used-gisele-bundchen-deepfakes-instagram-ads-2025-10-03/]
The lads are traveling this week, so we're revisiting their interview with Savannah Sly, dominatrix and sex worker rights advocate. She joined the show to talk about privacy, power, and the nuances of human intimacy as generative AI takes hold.
George K and George A talk to Savannah about:
* The current state of privacy for vulnerable communities and the real-world operational security challenges they face
* Practical steps individuals can take to protect their digital identities when dating online
* The intersection of AI, deepfakes, and the weaponization of intimate content
* The zeitgeist and cultural headwinds for sex workers today
This week George K and George A switch formats to consider the deeper questions behind recent tech headlines.
The hosts dig into the philosophical tensions driving today's biggest tech stories. When does technological dependency become too dangerous to ignore? How do we distinguish between genuine innovation and elaborate pump-and-dump schemes dressed up as progress? What are the real costs when entire economies become intertwined with a handful of companies?
They explore whether we're witnessing the early stages of a historic bubble or if we're already past the point of no return. The conversation touches on the ethics of deploying untested technology on vulnerable populations, the normalization of surveillance capitalism, and why regulatory capture might be democracy's biggest threat.
Most importantly, they ask the question that should keep every technologist awake at night: Are we building the future we actually want to live in, or are we just building the future that's most profitable for a few?
The news examined:
* Details emerge on the US' TikTok deal with China [https://www.wsj.com/tech/details-emerge-on-u-s-china-tiktok-deal-594e009f?reflink=desktopwebshare_permalink]
* Things just got worse for Nvidia in China [https://www.bbc.com/news/articles/cqxz29pe1v0o]
* To protect underage users, ChatGPT may ask for ID [https://www.theguardian.com/technology/2025/sep/17/chatgpt-developing-age-verification-system-to-identify-under-18-users-after-teen-death]
* Meta's smart glasses get smarter [https://www.readthepeak.com/stories/09-25-meta-s-smart-glasses-get-smarter]
Mentioned in the discussion:
* MIT report: The GenAI Divide STATE OF AI IN BUSINESS 2025 [https://mlq.ai/media/quarterly_decks/v0.1_State_of_AI_in_Business_2025_Report.pdf]
* Ed Zitron's podcast, Better Offline, and newsletter analysis of "Magnificent Seven" companies [https://www.wheresyoured.at/the-haters-gui/#the-magnificent-7s-ai-story-is-flawed-with-560-billion-of-capex-between-2024-and-2025-leading-to-35-billion-of-revenue-and-no-profit]
* Kashmir Hill's detailed reporting on Adam Raine's death and the part played by ChatGPT [https://www.nytimes.com/2025/08/26/technology/chatgpt-openai-suicide.html?unlocked_article_code=1.nU8.DE_a.Ur81NxfjZuNn&smid=url-share] (Warning: detailed discussion of suicide)
* Meta's leaked policy on allowing chatbots to engage in "sensual" chats with children [https://www.reuters.com/investigates/special-report/meta-ai-chatbot-guidelines]
Phil Dursey joined the show this week to cut through the hype and talk through what red teaming for AI means in mindset and practice.
The conversation reveals a fundamental problem: organizations are rushing to implement AI without understanding their own workflows. Executives are buying "the thing with the AI" expecting magic efficiency gains, but they've never mapped out basic business processes. You can't automate what you don't understand.
Phil's approach starts with the right question: "Are we using the right tool for the use case?"
We also talked about education and kids. Find out why Phil argues philosophy and humanities give you the biggest advantage when working with AI systems. It's what he looks for in hiring, too. The ability to formulate good questions, understand context, and think clearly matters more than technical prowess.
And finally we touch on the job market. We're heading toward AI capabilities that will exceed human professionals in specific domains. The displacement won't be overnight, but it's coming.
If you're implementing AI in your organization, this episode should make you pause and ask harder questions. The technology is powerful, but power without thoughtful application is just expensive chaos.
Mentioned:
* Phil Dursey's guide, Red Teaming AI [https://nostarch.com/red-teaming-AI]
* Hard Fork podcast segment on a student's AI workflow [https://youtu.be/X-KzyPRdcmc?feature=shared&t=3414]
A stalkerware economy is thriving on TikTok, and it's generating hundreds of thousands of dollars in sales. Journalist Rosie Thomas from 404 Media joins the show this week to discuss her investigation into GPS trackers being sold as relationship surveillance tools directly through TikTok Shop. This isn't some dark web operation - it's happening on one of the world's most popular social platforms.
The findings are disturbing. Content targeting people with taglines like "Is she really going out with friends?" is driving those sales, and the algorithms don't just show you this content - they amplify it the moment you engage.
The digital economy we all live in has normalized surveillance to the point where stalking your partner is being marketed as a reasonable relationship tool. The technology isn't new, but the accessibility and algorithmic amplification absolutely is.
This conversation touches on everything from the failure of tech companies to consider abuse cases in product design, to how parasocial relationships are replacing actual community bonds, to the legal gaps that leave victims with limited recourse.
If you work in tech, this episode should make you uncomfortable. As a citizen, you should be terrified. It's a reminder that our biggest threats often come from the normalization of our culture's worst tendencies.
Read more on 404Media: https://www.404media.co/tiktok-shop-sells-viral-gps-trackers-marketed-to-stalkers/