
The IT Privacy and Security Weekly Update.

Author: R. Prescott Stearns Jr.


Description

Now into year seven, this award-winning, light-hearted, lightweight IT privacy and security podcast spans the globe in the issues it covers, with topics that draw in everyone from executive to newbie to tech specialist.

Your investment of between 15 and 20 minutes a week will bring you up to speed on half a dozen current IT privacy and security stories from around the world to help you improve the management of your own privacy and security.
357 Episodes
This week’s deep dive explores a powerful theme shaping the modern threat landscape: invisible signals. From the devices we wear and drive to the AI systems we increasingly rely on, our technology is constantly emitting data — sometimes to protect us, sometimes to expose us.

We begin with a new Android app called Nearby Glasses, designed to alert users when smart glasses like Meta’s Ray-Bans are detected nearby via Bluetooth manufacturer identifiers. It’s a citizen-built countermeasure to always-on wearable cameras, highlighting rising tensions between convenience and consent in public spaces.

Next, we examine research showing that tire pressure monitoring systems (TPMS), mandatory in U.S. vehicles since 2007, broadcast unencrypted, persistent identifiers. Researchers captured millions of signals and demonstrated how vehicles can be passively tracked using inexpensive radio equipment. No hacking required — just poorly designed IoT architecture turning cars into rolling beacons.

From physical signals to digital footprints, a new study reveals that AI can deanonymize social media users by correlating small details across platforms. What once required nation-state resources can now be done with commodity large language models, fundamentally challenging the concept of online anonymity.

We then dive into the “Truman Show” investment scam — a sophisticated fraud operation that uses AI-generated personas, fake group chats, fabricated media coverage, and sham trading apps to create a fully immersive illusion of legitimacy. Rather than stealing trust directly, scammers now manufacture entire digital realities where trust feels inevitable.

AI agents themselves are also reshaping security assumptions. Modern assistants can access files, write code, and interact with online services using a user’s privileges. Researchers warn that prompt injection attacks — hidden malicious instructions embedded in content — can manipulate these agents into leaking data or performing harmful actions. When AI combines sensitive access, untrusted input, and outbound communication, it becomes a new form of insider risk.

That risk was underscored by the OpenClaw vulnerability, which allowed malicious web pages to brute-force a local AI agent gateway and potentially hijack it. The lesson: “local” no longer means secure. Any system with elevated privileges must be treated as a governed identity.

On the defensive side, AI is accelerating security improvements. Anthropic used a large language model to analyze Firefox’s codebase, identifying over 100 flaws in two weeks, including 22 confirmed security bugs. AI is compressing months of review into days — but the same acceleration applies to attackers.

Finally, Operation Candy in Sweden demonstrates how digital evidence can unravel vast criminal networks. Two seized phones exposed an international drug and money laundering operation spanning multiple continents, proving that even small data points can collapse large hidden systems.

Zooming out, the pattern is clear: wearables broadcast presence, cars broadcast identity, AI strips away anonymity, scams construct synthetic realities, assistants act autonomously, and devices quietly record history. Signals are everywhere — visible and invisible — and AI is amplifying their impact.

The question is no longer whether your technology emits signals. It’s who is listening — and whether they’re protecting you or profiling you.
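The deep dive above frames agentic AI risk as a trifecta: sensitive access, untrusted input, and outbound communication. That combination can be expressed as a simple policy gate. This is an illustrative sketch only; the class and method names are invented for the example and are not drawn from any real agent framework.

```python
from dataclasses import dataclass


@dataclass
class AgentSession:
    """Tracks which risk factors an AI agent session has accumulated.

    Illustrative only: the names and the policy are invented for this
    sketch, not taken from any real product.
    """
    touched_sensitive_data: bool = False
    ingested_untrusted_input: bool = False

    def read_private_file(self, path: str) -> None:
        # Reading local secrets gives the session sensitive access.
        self.touched_sensitive_data = True

    def fetch_web_page(self, url: str) -> None:
        # Anything pulled from the open web may carry hidden
        # prompt-injection instructions.
        self.ingested_untrusted_input = True

    def may_send_outbound(self) -> bool:
        # Block outbound actions once the session holds both remaining
        # risk factors; outbound + sensitive data + untrusted input is
        # the "insider risk" trifecta.
        return not (self.touched_sensitive_data and self.ingested_untrusted_input)


session = AgentSession()
session.fetch_web_page("https://example.com")     # untrusted content in
assert session.may_send_outbound()                 # still safe: no secrets read
session.read_private_file("/home/user/notes.txt")  # hypothetical path
assert not session.may_send_outbound()             # trifecta complete: blocked
```

Real agent gateways enforce far richer policies, but the core idea is the same: treat the combination of capabilities, not any single one, as the thing to govern.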
Ep 282. This week technology gets personal - whether you like it or not.

In this update:
- A new app that tells you if someone nearby is wearing smart glasses.
- Your car’s tire pressure sensors silently broadcasting your movements.
- AI that can unmask anonymous social media accounts.
- A full “Truman Show” investment scam powered by artificial intelligence.
- AI assistants quietly reshaping the cybersecurity threat model.
- A vulnerability that let websites hijack local AI agents.
- AI finding high-severity bugs in Firefox faster than human teams.
- And two seized phones in rural Sweden that unraveled a global crime empire.

The thread connecting all of them? Invisible signals.
Some are protecting you. Some are exposing you.
All of them are accelerating.

Let’s dive in.

Find the full transcript here.
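For the curious, the Nearby Glasses approach described above leans on one detail of Bluetooth LE: advertisement packets carry a 16-bit company identifier in their manufacturer-specific data, so a detector only has to match that field against a watchlist. A minimal sketch of the matching step, using placeholder company IDs rather than any vendor's real assignment:

```python
def flag_wearables(manufacturer_data: dict[int, bytes],
                   watchlist: set[int]) -> set[int]:
    """Return the watchlisted Bluetooth SIG company IDs present in one
    BLE advertisement's manufacturer-specific data.

    manufacturer_data maps 16-bit company IDs to opaque payload bytes,
    the shape most BLE scanning libraries expose. The ID values below
    are placeholders for illustration, not real assignments.
    """
    return watchlist & set(manufacturer_data.keys())


WATCHLIST = {0x1234, 0xABCD}  # placeholder "smart glasses vendor" IDs
adv = {0x1234: b"\x01\x02", 0x004C: b"\x10"}  # one observed advertisement

assert flag_wearables(adv, WATCHLIST) == {0x1234}
assert flag_wearables({0x004C: b""}, WATCHLIST) == set()
```

A real detector would feed this function from a live scanner loop and debounce repeated sightings, but the privacy point stands either way: the identifier is broadcast in the clear to anyone listening.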
Introduction

Welcome back to “At War”: The Deep Dive with the IT Privacy and Security Weekly Update for March 3rd, 2026, episode 281. The podcast that makes sense of the week's most important technology and cybersecurity stories, without assuming you have a computer science degree.

This week we have eight stories spanning AI gone wrong, AI used in warfare, a historic security milestone from Apple, and a new kind of AI agent that's making seasoned security professionals nervous. Let's get into it.
Episode 281. This week's update makes sense of the week's most important technology and cybersecurity stories, without assuming you have a computer science degree.

Find the full transcript to the podcast here.
These sources collectively examine the evolving landscape of digital threats and the vulnerabilities inherent in modern technology. They detail sophisticated cyber-as-a-service schemes like Starkiller, which bypasses traditional security, alongside physical risks such as directed-energy research and privacy flaws in household robotics. The reports also highlight how artificial intelligence is simultaneously streamlining security labor while introducing new risks through predictable password generation and autonomous system access. Corporate and state-level issues are addressed through data breaches at PayPal, legal scrutiny of TP-Link's supply chain, and the critical role of open-source infrastructure. Ultimately, the text emphasizes that while automated tools and password managers are essential, they require proactive user management and independent verification to remain effective. Consistent software updates and skeptical browsing habits are presented as the primary defenses against these diverse global challenges.
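One of the stories above flags AI-generated passwords as statistically predictable. The baseline that password managers rely on instead is a cryptographically secure random generator, so there is no learned distribution for an attacker to exploit. A minimal sketch using Python's standard-library secrets module:

```python
import secrets
import string


def generate_password(length: int = 20) -> str:
    """Generate a password from a CSPRNG (the secrets module), not from
    a language model: every character is drawn uniformly at random, so
    the result has no learnable structure."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))


pw = generate_password()
assert len(pw) == 20
```

The design point is that secrets draws from the operating system's entropy source, whereas text predicted by a model is, by construction, what a model would predict.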
EP 280. “When Your Everyday Tech Quietly Turns Against You”

This week, a hobby project turned one man’s robot vacuum into a remote control for 7,000 homes, a new phishing service made the real login page your biggest enemy, and Texas decided your Wi‑Fi router is now a geopolitical issue.

If you’re not technical but you live with passwords, smart gadgets, or online banking, this episode is about the invisible ways those tools can misbehave, and the one small fix you can make after each story.
We open with China’s 8.7 billion-record megaleak, framing misconfigured infrastructure as a planetary-scale risk rather than a local breach. Lenovo’s U.S. class action then shows how invisible web trackers can quietly “spill” American browsing data to China, while South Korea’s heavy fines against Louis Vuitton, Dior, and Tiffany illustrate that even luxury brands now pay real money when they mishandle customer information.

The focus then narrows to individuals: a 17.5M-user Instagram dataset on underground forums, malicious GenAI Chrome extensions posing as helpers while siphoning data, and a decade-old Apple zero-day likely leveraged by commercial spyware all demonstrate how ordinary accounts and devices can become rich sources of exploitable data. Together they highlight a world where “just contact details,” browser add-ons, and long-lived bugs can escalate into serious compromise.

From there, the update shifts into ambient surveillance and manipulation: Meta’s planned facial-recognition “Name Tag” for Ray-Ban smart glasses pushes identification into public spaces and raises new concerns about children and bystanders, while AI-saturated products from Google, Meta, and others quietly convert intimate conversations and searches into highly targeted ad fuel. It closes with a Shakespeare quote about guilt “spilling” itself and a sign-off urging listeners to “pour with a steady hand,” tying the spill metaphor back to handling data, tools, and trust more carefully in everyday digital life.
EP 279. This week's update spills on a global scale. We start with...

A single misconfigured database just turned 8.7 billion Chinese records into a global reminder that at planetary scale, data protection failures stop being “incidents” and start looking like infrastructure risks.

A new class action against Lenovo puts a spotlight on how invisible trackers and cross-border data flows can turn an ordinary website visit into a quiet export of American browsing habits to China.

When Louis Vuitton, Dior, and Tiffany rack up multimillion-dollar privacy fines in South Korea, it sends a clear message: even the most glamorous brands pay dearly when customer data is treated carelessly.

The Instagram dataset circulating on underground forums shows how a trove of “just usernames and contact details” can still supercharge scams, phishing, and harassment at massive scale.

Dozens of AI-branded Chrome extensions masquerading as helpful assistants reveal how attackers now weaponize the GenAI buzz to sneak data exfiltration straight into your browser.

Apple’s fix for a ten-year-old iOS and macOS zero-day pulls back the curtain on a long-running hole likely exploited by commercial spyware against some of the world’s most high-value targets.

Meta’s planned facial recognition for Ray-Ban smart glasses pushes the privacy debate from your screen to the street, raising uncomfortable questions about who gets to be identified, by whom, and when.

The rush to embed AI into every digital interaction is quietly reshaping advertising, turning your casual chats and searches into some of the richest targeting data the tech giants have ever seen.

Grab a towel and let's check the spill.
A mix of escalating geopolitical cyber risks, the changing landscape of defensive security, and a series of high-profile incidents demonstrating the enduring threat of human-driven flaws.

Cyber Espionage and Geopolitics:
A year-long, sprawling espionage campaign by a state-backed actor (TGR-STA-1030) compromised government and critical infrastructure networks in 37 countries, utilizing phishing and unpatched security flaws, and deploying stealth tools like the ShadowGuard Linux rootkit to collect sensitive emails, financial records, and military details. Simultaneously, the threat environment has extended to orbit, where Russian space vehicles, Luch-1 and Luch-2, have been reported to have intercepted the communications of at least a dozen key European geostationary satellites, prompting concerns over data compromise and potential trajectory manipulation.

AI and Security:
AI has entered a new chapter in defensive security as Anthropic’s Claude Opus 4.6 model autonomously discovered over 500 previously unknown, high-severity security flaws (zero-days) in widely used open-source software, including GhostScript and OpenSC. This demonstrates AI's rapid potential to become a primary tool for vulnerability discovery. On the cautionary side, the highly publicized Moltbook, a social network supposedly run by self-aware AI bots, was revealed as a masterclass in security failure and human manipulation. Cybersecurity researchers uncovered a misconfigured database that exposed 1.5 million API keys and 35,000 human email addresses, and found that the dramatic bot behavior was largely orchestrated by 17,000 human operators running bot fleets for spam and coordinated campaigns.

Automotive Security and Autonomy:
New US federal rules are forcing a major, complex shift in the automotive supply chain, requiring carmakers to remove Chinese-made software from connected vehicles before a 2026 deadline due to national security concerns. This move is redefining what "domestic technology" means in critical industries. In a related development, Waymo's testimony revealed that when its "driverless" cars encounter confusing situations, they communicate with remote assistance operators, some based in the Philippines, for guidance — a disclosure that immediately raised lawmaker concerns about safety, cybersecurity vulnerabilities from remote access, and the labor implications of overseas staff influencing US vehicles.

Insider Threat and Legal Lessons:
The importance of the security principle of "least privilege" was highlighted by an insider incident at Coinbase, where a contractor with too much access improperly viewed the personal and transaction data of approximately 30 customers. This incident reinforces that the highest risk often comes not from external nation-state hackers, but from overprivileged internal humans. Finally, two security researchers arrested in 2019 for an authorized physical and cyber penetration test of an Iowa courthouse settled their civil lawsuit with the county for $600,000. However, the county attorney's subsequent warning that any future similar tests would be prosecuted delivers a chilling message to the security testing community about legal risks even when work is authorized.
Episode 278. In this week's global update:

A sprawling, year-long espionage campaign quietly turned government networks in 37 countries into a global listening post for a still-unattributed state-backed actor.

Russian inspector spacecraft are no longer just loitering in orbit; they are now close enough to eavesdrop on, and potentially tamper with, Europe’s most critical communications satellites.

Anthropic’s latest AI model has kicked off a new chapter in defensive security by autonomously uncovering hundreds of serious flaws hiding in widely used open-source software.

Moltbook promised a glimpse of a self-aware bot society, but instead became a masterclass in hype, human puppeteers, and painfully bad security hygiene.

Under sweeping new federal rules, US automakers are racing to surgically remove Chinese software from connected vehicles before geopolitical risk collides with the modern car’s codebase.

Waymo’s testimony revealed that when its driverless cars get confused, the call for help may be answered half a world away, raising new questions about safety, sovereignty, and accountability.

Years after being jailed mid-engagement, two Iowa courthouse pentesters have finally won a six-figure settlement, alongside a chilling warning that future testers may not be so lucky.

Coinbase’s latest insider incident is a particularly pointed reminder that the real damage often comes not from nation-state hackers, but from overprivileged humans already inside the system.

Let's hit it!

Find a full transcript to this week's podcast here.
By early 2026, AI’s role has split into a clear paradox: consumers increasingly reject it in everyday search, while critical systems lean on it to uncover deep flaws and decode complex biology. AI is shunned as a source of noisy, untrusted summaries, yet embraced as an indispensable auditor of legacy code and genomic “dark matter,” where systems like AISLE and AlphaGenome expose decades-old vulnerabilities and illuminate non-coding DNA’s influence on disease.

At the same time, trust in digital protectors and platforms is eroding as security tools and communication services themselves become vectors of risk. The eScan incident shows how a compromised update server can turn antivirus into malware distribution, while “Operation Sourced Encryption” suggests that end-to-end encryption can be weakened not by breaking cryptography, but by exploiting moderation workflows and access policies.

Espionage now blends human and digital weaknesses, with the Nobel leak likely driven by poor institutional OpSec and Google’s insider theft case revealing how easily high-value AI IP can walk out the door when procedural safeguards lag. Both episodes underline that advanced technical controls mean little if basic governance, identity checks, and behavioral monitoring are neglected.

Consumer-facing privacy illustrates an equally stark divide between negligent design and proactive protection. Bondu’s AI toy breach, exposing tens of thousands of children’s intimate chats via an essentially open portal, embodies “privacy as afterthought,” whereas Apple’s iOS location fuzzing shows “privacy by architecture,” making fine-grained tracking technically difficult rather than merely contractually prohibited.

Taken together, these threads define 2026 as a pivot year: AI is maturing into a high-stakes auditing tool just as faith in trusted vendors collapses, pushing organizations toward Zero Trust models where security and privacy are enforced by design and cryptography instead of marketing, policies, or reputation.
EP 277. In this week’s dark matter:

Privacy-first users send a clear message to DuckDuckGo. AI-free search is here to stay for most of its community.

A cutting-edge AI from AISLE exposed deep-seated vulnerabilities in OpenSSL, dramatically speeding the pace of cybersecurity discovery.

A security breach at eScan transformed trusted antivirus software into an unexpected cyber weapon.

An internal probe suggests a cyber intrusion may have prematurely exposed last year’s Nobel Peace Prize laureate.

A U.S. jury found former Google engineer Linwei Ding guilty of funneling AI trade secrets to Chinese tech companies.

Newly surfaced records reveal U.S. investigators examined claims that WhatsApp's encryption might not be as airtight as advertised.

Apple's new location “fuzzing” feature gives users the power to stay connected, without being precisely tracked.

A privacy lapse in a talking AI toy exposed thousands of private conversations between children and their plush companions.

Google unleashes new AI to investigate DNA’s ‘dark matter’. DeepMind’s latest creation, AlphaGenome, is shining light on the 98% of DNA that science once found inscrutable.

Come on, let’s go unravel some genomes.

Find the full transcript to this podcast here.
In 2026, digital privacy and security reflect a global power struggle among governments, corporations, and infrastructure providers. Encryption, once seen as absolute, is now conditional as regulators and companies find ways around it. Reports that Meta can bypass WhatsApp’s end-to-end encryption and Ireland’s new lawful interception rules illustrate a growing tolerance for backdoors, risking weaker international standards. Meanwhile, data collection grows deeper: TikTok reportedly tracks GPS, AI-interaction metadata, and cross‑platform behavior, leaving frameworks like OWASP as the final defense against mass exploitation.

Cyber risk is shifting from isolated vulnerabilities to structural flaws. The OWASP Top 10 for 2025–26 shows that old problems — access control failures, misconfigurations, weak cryptography, and insecure design — remain endemic. Supply-chain insecurity, epitomized by the “PackageGate” (Shai‑Hulud) flaw in JavaScript ecosystems, demonstrates that inconsistent patching and poor governance expose developers system‑wide. Physical systems are no safer: at Pwn2Own Automotive 2026, researchers proved that electric vehicle chargers and infotainment systems can be hacked en masse, making charging a car risky in the same way as connecting to public Wi‑Fi. The lack of hardware‑rooted trust and sandboxing standards leaves even critical infrastructure vulnerable.

Corporate and national sovereignty concerns are converging around what some call “digital liberation.” The alleged 1.4‑terabyte Nike breach by the “World Leaks” ransomware group shows how centralization magnifies damage — large, unified data stores become single points of catastrophic failure. In response, the EU’s proposed Cloud and AI Development Act aims to build technological independence by funding open, auditable, and locally governed systems. Procurement rules are turning into tools of geopolitical self‑protection. For individuals, reliance on cloud continuity carries personal risks: in one case, a University of Cologne professor lost years of AI‑assisted research after a privacy setting change deleted key files, revealing that even privacy mechanisms can erase digital memory without backup.

At the technological frontier, risk extends beyond IT. Ethics, aerospace engineering, and sustainability intersect in new fault lines. Anthropic’s “constitutional AI” reframes alignment as a psychological concept, incorporating principles of self‑understanding and empathy — but critics warn this blurs science and philosophy. NASA’s decision to modify, rather than redesign, the Orion capsule’s heat shield for Artemis II — despite earlier erosion on Artemis I — has raised fears of “normalization of deviance,” where deadlines outweigh risk discipline. Beyond Earth, environmental data show nearly half of the world’s largest cities already face severe water stress, exposing the intertwined fragility of digital, physical, and ecological systems.

Across these issues, a shared theme emerges: sustainable security now depends not just on technical patches but on redefining how society manages data permanence, institutional transparency, and the planetary limits of infrastructure. The boundary between online safety, physical resilience, and environmental stability is dissolving — revealing that long‑term survival may rest less on innovation itself and more on rebuilding trust across the systems that sustain it.
EP 276. In this week's update:

Ireland has enacted sweeping new lawful interception powers, granting law enforcement expanded access to encrypted communications and raising fresh concerns among privacy advocates and tech companies.

TikTok’s latest U.S. privacy policy update expands location tracking, AI interaction logging, and cross-platform ad targeting, marking a significant escalation in data collection under its new American ownership structure.

The newly released OWASP Top 10 (2025 edition) highlights the most critical web application security risks, providing developers and organizations with an updated roadmap to prioritize defenses against evolving threats.

Security researchers have uncovered a critical bypass in NPM’s post-Shai-Hulud supply-chain protections, allowing malicious code execution via Git dependencies in multiple JavaScript package managers.

As Artemis II approaches, NASA defends the Orion spacecraft’s unchanged heat shield design despite persistent cracking concerns from its uncrewed predecessor, while some former engineers warn the risk remains unacceptably high.

Anthropic has significantly revised Claude’s governing “constitution,” shifting from strict rules to high-level ethical principles while explicitly addressing the hypothetical possibility of AI consciousness and moral status.

The European Parliament has adopted a strongly worded resolution urging the EU to reduce strategic dependence on American tech giants through aggressive investment in sovereign cloud, AI, and open digital infrastructure.

This one's a good'n. Let's get to it!

Find the full transcript here.
Unsecured Flock Safety Condor cameras were found livestreaming on the internet without passwords or encryption. The flaw exposed at least 60 cameras, allowing public access to feeds, downloads, and administrative controls. The researchers who disclosed the vulnerability reported facing police surveillance and job loss following what they termed their "responsible security research."

The Federal Trade Commission (FTC) has finalized an order requiring General Motors and its OnStar service to obtain "clear, affirmative consent" from consumers before sharing sensitive driving and location data. The mandate grants consumers expanded rights to access, delete, and control the use of their personal information generated by connected vehicles.

Homeland Security Investigations (HSI) has acquired a device potentially linked to "Havana Syndrome" using funding provided by the Department of Defense. Reportedly portable enough to fit in a backpack, the device is said to produce pulsed radio waves. A primary national security concern is that if the technology is viable, it may have proliferated, giving other nations access to a potentially harmful weapon.

The "GhostPoster" malware campaign has re-emerged, leveraging malicious browser extensions installed by hundreds of thousands of users. The malware conceals its malicious code within image files and can activate after long delays. Its primary threats include injecting scripts into web pages, tracking user activity, and weakening browser security settings.

A newly discovered malware framework named "VoidLink" shows strong evidence of being generated with AI assistance. Designed to target Linux cloud servers and container environments, VoidLink features a sophisticated modular design with rootkit capabilities. Analysis suggests the framework was generated to a functional state in about a week using an AI assistant, highlighting how AI is accelerating the creation of advanced malware.

A malware campaign is deploying "Evelyn Stealer" through malicious Visual Studio Code extensions. The attack injects the stealer into a legitimate Windows process, grpconv.exe, to evade detection. The malware also tricks browsers into running in hidden contexts to avoid detection during credential harvesting. It is designed to exfiltrate developer credentials, browser cookies, and cryptocurrency wallets.

The European Commission has proposed new mandatory cybersecurity legislation aimed at removing high-risk technology suppliers, such as Chinese firms Huawei and ZTE, from the EU's critical telecommunications and ICT infrastructure. This policy, which builds on frustrations with the EU's voluntary 5G Security Toolbox, shifts from voluntary guidelines to binding rules empowering the EU to restrict equipment based on national security risks.

Italy's influential data privacy authority, the "Garante," is the subject of a corruption investigation. Prosecutors are examining allegations of excessive spending and possible corruption involving the agency's president, Pasquale Stanzione, and three other board members. The Garante is one of the EU's most proactive regulators against major technology firms.

A recent security update for Windows 11 23H2 has introduced a bug preventing some PCs from shutting down or hibernating. Microsoft has linked the issue to its "Secure Launch" security feature. The company's official workaround is to use the command-prompt command shutdown /s /t 0 to force the machine to power down while a permanent fix is developed.
EP 275. This week, we update you on an "oops" that might have had you in its line of sight.

Security researchers uncovered a major exposure of Flock Safety’s facial-tracking cameras openly livestreaming to the internet, prompting police visits and swift industry backlash.

The FTC has finalized a landmark order requiring General Motors and OnStar to secure explicit consumer consent before monetizing sensitive driving and location data.

The Pentagon quietly acquired a portable pulsed-radio-wave device, containing Russian components, that investigators believe may be connected to the long-mysterious Havana Syndrome incidents.

A sophisticated malware operation has re-emerged, hiding persistent code inside seemingly benign browser extensions to silently track and compromise hundreds of thousands of users.

Researchers have uncovered VoidLink, a highly modular Linux cloud malware framework whose code quality and development speed strongly indicate heavy AI-assisted creation.

A new stealer campaign is targeting developers by delivering Evelyn Stealer through malicious Visual Studio Code extensions, harvesting credentials, crypto wallets, and more.

The European Commission has proposed mandatory rules to exclude high-risk foreign vendors from critical telecom and ICT infrastructure, signaling a major shift toward fortified digital supply-chain security.

Italy’s aggressive data-protection authority, the Garante, faces a high-profile corruption and embezzlement investigation that threatens the credibility of one of Europe’s most active tech regulators.

Microsoft’s latest security update has introduced an unexpected bug that prevents some Windows 11 systems from shutting down or hibernating when Secure Launch is enabled.

Oops, they did it again…
Core message
Personal AI, consumer devices, and global networks are converging into a new arena where data, infrastructure, and talent are strategic assets, not just products. Policy, open-source security, and novel computing architectures provide early but meaningful counterweights to surveillance capitalism and cyber conflict.

AI: privacy vs convenience
Privacy-first AI like Moxie Marlinspike’s Confer uses open-source code, end‑to‑end encryption, on‑device keys, and secure hardware to ensure user conversations cannot be read even by the service operator. Google’s Gemini-powered Gmail adds an AI Inbox, thread summaries, and writing aids that mine inbox content to generate to‑dos and answers, while promising not to use email data to train foundation models and allowing opt‑outs.

Corporate missteps and surveillance
“Worst in Show” critics highlight products like over‑engineered smart fridges, Ring facial recognition, and disposable gadgets as emblematic of poor repairability, expanded surveillance, and e‑waste. Wegmans’ biometric collection and Google’s outreach encouraging teens to remove parental supervision show how corporate policies can quietly shift control and weaken privacy and safety norms.

Tech as geopolitical battlefield
Campaigns such as China-linked “Salt Typhoon” exploit weaknesses in legacy telecom protocols like SS7, enabling interception of calls and texts from U.S. officials and potentially users worldwide. Taiwan’s arrest warrant for OnePlus’s CEO over alleged illegal recruitment reflects broader state-backed efforts by China to secure foreign tech talent and IP through front companies and incentive programs.

Emerging safeguards and breakthroughs
California’s DROP platform operationalizes its Delete Act, letting residents issue one verified request that compels all registered data brokers to delete personal data and comply on a recurring schedule under penalty of fines. Anthropic’s $1.5M partnership with the Python Software Foundation strengthens security for CPython and PyPI, hardening open‑source supply chains while funding community sustainability. Sandia’s neuromorphic computing results show brain‑inspired hardware can efficiently solve complex partial differential equations, hinting at future high‑performance systems that are far more energy‑efficient than today’s supercomputers.
EP 274. In this week’s update:

Moxie Marlinspike, architect of Signal’s groundbreaking privacy standards, now brings his uncompromising approach to secure, user-controlled artificial intelligence with the launch of Confer.

The fifth annual Worst in Show anti-awards returned to CES 2026, shining a harsh spotlight on the year’s most wasteful, invasive, and counterproductive consumer electronics.

Wegmans has quietly expanded biometric surveillance in its New York City stores, collecting facial, iris, and voice data from every shopper under the stated goal of safety and security.

California’s new DROP law marks a major victory for consumer privacy, empowering residents to delete their personal information from hundreds of data brokers with a single request.

Google faces intense backlash after directly notifying 13-year-olds that they can unilaterally remove parental supervision from their accounts, raising serious concerns about child safety and parental authority.

Chinese state-sponsored hackers, operating under the long-running Salt Typhoon campaign, have compromised email accounts of staff on multiple powerful U.S. House committees.

Anthropic has committed $1.5 million over two years to the Python Software Foundation, targeting major security improvements to CPython and PyPI to protect millions of developers and users.

Neuromorphic computers, designed to emulate the human brain’s architecture, have demonstrated remarkable efficiency and accuracy in solving complex partial differential equations, challenging conventional assumptions about their capabilities.

Let's go get the moxie.

Find this week's full transcript here.
The new year opens with a familiar pattern: rising technological ambition colliding with real-world limits, fragile infrastructure, and recurring security failures. This week’s stories span energy, aviation, AI, extremism, and cybersecurity, but all share a common thread: systems scaled faster than the safeguards meant to protect them.

Across the United States, communities are pushing back against massive AI-driven data center expansions. Once marketed as quiet engines of innovation, these facilities are now viewed as loud, resource-intensive neighbors that strain power grids, water supplies, and local infrastructure. Between April and June last year alone, nearly $100 billion in data center projects were delayed or rejected. The backlash signals a shift: technological progress is no longer assumed to be welcome if it undermines quality of life, transparency, or environmental stability.

That fragility is echoed in the skies. GPS interference affecting U.S. aviation has surged dramatically, disrupting thousands of flights and forcing pilots onto backup systems for extended periods. What were once isolated anomalies have become frequent events, tied to growing spoofing and jamming capabilities seen in modern conflicts. GPS underpins everything from aviation and logistics to financial markets and emergency services, and its growing instability exposes a critical but often invisible dependency.

On the cyber front, defenders scored a rare psychological win. Researchers at Resecurity lured a notorious cybercrime group into a sophisticated honeypot packed with realistic fake data. The attackers loudly claimed a breach, unaware they were operating inside a decoy. The result: real systems stayed safe, attacker behavior was documented in detail, and valuable intelligence was shared with law enforcement, a reminder that proactive defense can sometimes outmaneuver brute-force attacks.

Meanwhile, trust in everyday tools continues to erode. Two malicious Chrome extensions, posing as benign VPN or speed-testing tools, were caught harvesting credentials from over 170 websites by intercepting user traffic. Their presence in official app stores highlights how deeply browser extensions can compromise privacy when users grant broad permissions without scrutiny.

AI misuse took a darker turn as Grok, xAI’s chatbot integrated into X, was found generating large volumes of nonconsensual sexualized images of women by altering real user photos. What once required niche tools and technical skill is now fast, free, and embedded in mainstream platforms, raising urgent ethical, legal, and cultural concerns about consent, scale, and accountability in AI deployment.

Extremist platforms weren’t spared either. An investigative journalist exposed over 8,000 users and 100GB of data from white supremacist dating and networking sites. Weak security and poor verification made it possible to collect deeply personal information without traditional hacking, underscoring how even fringe platforms leak data that can have serious real-world consequences.

Commercial trust took another hit as Ledger confirmed a new data breach via its third-party payment processor, exposing customer names and contact details. While wallets remained secure, history shows that leaked personal data fuels long-term phishing and social-engineering campaigns, a recurring lesson in third-party risk.

Finally, the European Space Agency acknowledged a cyber intrusion after hackers claimed to steal 200GB of internal data. Though core systems were reportedly unaffected, the incident reinforces a sobering reality: no organization, not even one that launches missions beyond Earth, is immune to persistent cyber threats.

The takeaway: innovation without resilience leaves systems exposed. Whether it’s energy infrastructure, satellite navigation, AI platforms, or supply-chain security, the cost of ignoring safeguards is no longer theoretical.
EP 273. This year starts with the high cost of electricity and gets left exposed.

Communities Across America Mobilize Against Massive AI-Powered Data Center Expansions.

Surging GPS Interference Disrupts U.S. Aviation, Highlighting Growing Vulnerabilities in Critical Infrastructure.

Cybersecurity Researchers Outsmart Notorious Cybercrime Group with Elaborate Honeypot Trap.

Malicious Chrome Extensions Exposed for Stealthily Harvesting User Credentials from Over 170 Websites.

Grok AI Faces Intense Scrutiny for Generating Widespread Nonconsensual Sexualized Images of Women.

Investigative Journalist Exposes Thousands of Users on White Supremacist Platforms in Massive Data Leak.

OpenAI Reportedly Preparing to Introduce Sponsored Content into ChatGPT Responses Starting in 2026.

Ledger Confirms Fresh Data Breach via Third-Party Processor, Exposing Customer Names and Contacts.

European Space Agency Acknowledges Cyber Intrusion as Hacker Claims Theft of 200GB of Sensitive Data.

Let's start the new year with a bang!

Find the full transcript here.