E84 - AI Drama | Brazil's Lesbian Dating App Disaster: AI Security Flaw
Listen now:
Spotify:
https://open.spotify.com/episode/249ZA6nHHoKmaiGYqY6Jum?si=91mGWjWJT-ur14At1KWpjA
Apple Podcasts:
https://podcasts.apple.com/at/podcast/brazils-lesbian-dating-app-disaster-ai-security-flaw/id1846704120?i=1000732455609
Description
Marina thought she finally found safety.
A lesbian dating app in Brazil, built by queer women, for queer women.
Manual verification. No fake profiles. No men.
Then everything went wrong.
In September 2025, Sapphos launched as a sanctuary with government-ID checks.
Within 48 hours, 40,000 women downloaded it.
A week later, a catastrophic flaw exposed the most sensitive data of 17,000 users: IDs, photos, names, birthdays.
One researcher discovered he could view anyone's profile just by changing a number in a URL.
That's how fast "safety" can vanish when speed beats security.
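The URL-number trick described above is a textbook insecure direct object reference (IDOR). A minimal Python sketch of the pattern and its fix; all names and data here are hypothetical, not from the Sapphos codebase:

```python
# Hypothetical in-memory "database" of profiles keyed by numeric ID.
PROFILES = {
    1001: {"owner": "alice", "gov_id": "BR-123", "photo": "alice.jpg"},
    1002: {"owner": "bruna", "gov_id": "BR-456", "photo": "bruna.jpg"},
}

def get_profile_vulnerable(profile_id: int):
    # IDOR: the handler never checks WHO is asking, so changing
    # /profiles/1001 to /profiles/1002 returns a stranger's ID documents.
    return PROFILES.get(profile_id)

def get_profile_fixed(requesting_user: str, profile_id: int):
    # Fix: authorize before returning data; only the owner may read it.
    record = PROFILES.get(profile_id)
    if record is None or record["owner"] != requesting_user:
        return None  # a real API would respond 403 or 404 here
    return record
```

The fix is a one-line ownership check, exactly the kind of guard that rushed or AI-generated endpoint code tends to omit.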
What This Episode Covers
This episode of AI Drama investigates how AI-generated code, underqualified devs, and "vibe coding" collided with a vulnerable community.
It's not a takedown of two activists; it's a warning about asking for extreme trust without professional security.
You'll Learn
- How a single IDOR-style bug leaked government IDs and photos
- Why AI-generated code often ships with hidden flaws
- The unique threats LGBTQ+ apps face in high-violence regions
- What happened after the founders deleted evidence of the breach
- How to spot red flags before uploading your ID anywhere
The Real Stakes
Brazil remains one of the most dangerous countries for LGBTQ+ people.
Lesbian and bisexual women face three times higher rates of violence than straight women.
For many Sapphos users, being outed wasn't embarrassing; it was life-threatening.
What Went Wrong
- Identity checks increased trust but concentrated risk
- When one app collects IDs, selfies, and locations, a single bug exposes everything
- AI sped up insecure coding: ~45% of AI-generated code has vulnerabilities
- No audits, no penetration tests, poor access control
- Logs deleted; evidence erased
- Communication failed: instead of transparency, users saw silence and denial
Red Flags Before Trusting an App
✅ Verified security audits (SOC 2 / ISO 27001)
✅ Transparent privacy policy + deletion options
✅ Minimal data collection, no unnecessary IDs
✅ Public security contact or bug-bounty page
✅ Experienced, visible founding team
❌ Avoid apps claiming "100% secure" or "completely private"
Safer Habits
- Use unique emails + a password manager
- Prefer privacy-preserving verification methods
- Turn off precise location & strip photo metadata
- After any breach: change credentials, rotate IDs if possible, monitor credit
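To illustrate what "strip photo metadata" means in practice: in a JPEG, EXIF data (including GPS coordinates and device info) lives in APP1 segments, which can be dropped without touching the image itself. A stdlib-only Python sketch, assuming a well-formed file; real-world JPEGs are messier, so a dedicated tool or image library is the safer choice:

```python
def strip_exif(jpeg: bytes) -> bytes:
    """Drop APP1 segments (EXIF/XMP metadata) from a JPEG byte stream."""
    out = bytearray(jpeg[:2])          # keep the SOI marker (FF D8)
    i = 2
    while i + 4 <= len(jpeg):
        if jpeg[i] != 0xFF:            # start of entropy-coded image data
            break
        marker = jpeg[i + 1]
        length = int.from_bytes(jpeg[i + 2 : i + 4], "big")
        segment = jpeg[i : i + 2 + length]
        if marker != 0xE1:             # 0xFFE1 = APP1 (EXIF/XMP): drop it
            out += segment
        i += 2 + length
    out += jpeg[i:]                    # copy image data and EOI unchanged
    return bytes(out)
```

The image pixels are untouched; only the metadata segments that can out a user's location are removed.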
Notable Quotes
"Marina's only 'mistake' was trusting people who promised protection."
"The lesson isn't 'don't build'; it's 'don't build insecure.' Demand proof, not promises."
Select Facts
- ~45% of AI-generated code shows security flaws
- LGBTQ+ users face more online harassment
- Brazil records one LGBTQ+ person killed every ~48 hours
AI Drama is a narrative-journalism podcast about the human cost when technology fails those who trust it most.
Hosted by Malcolm Werchota.
SEO Keywords
dating-app breach • LGBTQ privacy • Brazil • ID verification • AI code security • queer safety