DiscoverThe Automated Daily - Tech News Edition
The Automated Daily - Tech News Edition
Claim Ownership

The Automated Daily - Tech News Edition

Author: TrendTeller

Subscribed: 0Played: 0
Share

Description

Welcome to 'The Automated Daily - Tech News Edition', your ultimate source for a streamlined and insightful daily news experience.
107 Episodes
Reverse
Please support this podcast by checking out our sponsors: - KrispCall: Agentic Cloud Telephony - https://try.krispcall.com/tad - Discover the Future of AI Audio with ElevenLabs - https://try.elevenlabs.io/tad - Prezi: Create AI presentations fast - https://try.prezi.com/automated_daily Support The Automated Daily directly: Buy me a coffee: https://buymeacoffee.com/theautomateddaily Today's topics: One-shot Alzheimer’s plaque cleanup - Washington University researchers used engineered astrocytes as “super cleaners” to remove amyloid beta in mice, suggesting a potential one-time Alzheimer’s therapy alternative to repeated monoclonal antibody infusions. AI MRI Alzheimer’s prediction - Worcester Polytechnic Institute reports an AI model reading MRI scans can predict Alzheimer’s with high accuracy, highlighting hippocampus volume loss and potential earlier detection for patients and clinicians. AI chips export approval rules - The Trump administration is considering rules requiring Commerce Department approval for overseas shipments of advanced AI chips, a move that could reshape global supply chains for Nvidia, AMD, and major buyers. Who governs AI: CEOs or law - Anthropic’s dispute with the U.S. Defense Department spotlights AI governance tensions, where corporate policies on surveillance and weapons may function like de facto regulation without democratic accountability. Social media design on trial - A Los Angeles case aims to treat algorithmic features like infinite scroll and autoplay as product design choices, challenging Section 230 boundaries and potentially forcing platform redesigns for teen safety. Critical minerals become security issue - The U.N. warns demand for lithium, cobalt, nickel and other critical minerals could surge by 2030 and 2040, pushing supply chains into the center of geopolitics, trade policy, and conflict-risk debates. Robot ground vehicles in Ukraine - Ukraine is expanding weaponized uncrewed ground vehicles as drones widen the battlefield kill zone, raising new questions about partial autonomy, operator control, and future robot-on-robot combat. DART nudges an asteroid’s orbit - New research confirms NASA’s DART impact not only altered an asteroid moonlet’s local orbit, but also measurably changed its path around the Sun—an important real-world datapoint for planetary defense. Flying taxi scale-up in China - AutoFlight’s large eVTOL prototype signals how China’s “low-altitude economy” could evolve from delivery drones toward passenger aircraft, though safety certification and infrastructure remain major hurdles. EV charging claims jump forward - BYD showcased next-generation battery and ultra-fast charging claims meant to reduce range anxiety and charging downtime, potentially pressuring the broader EV market if results hold up in everyday conditions. Episode Transcript One-shot Alzheimer’s plaque cleanup Let’s start with Alzheimer’s research, because we got two developments that rhyme in a useful way: one is about clearing the disease’s hallmark proteins, and the other is about spotting risk earlier. First, researchers at Washington University School of Medicine reported a striking result in Science: they re-engineered astrocytes—cells that normally support neurons—so they recognize and swallow amyloid beta, the protein that forms Alzheimer’s-related plaques. The twist is they borrowed a playbook from cancer therapy: a receptor design that helps immune cells “lock on” to a target. Here, the target is amyloid in the brain. In mouse models, a single injection given before plaques typically form prevented plaque buildup for months. And in older mice already loaded with plaques, that same one-time approach cut plaque levels by about half. The big reason this is turning heads is practicality: today’s anti-amyloid antibody treatments are typically a repeating commitment. A durable, one-and-done strategy—if it ever proves safe and effective in humans—could radically reduce treatment burden. The researchers are also careful to say this is early, and the safety and targeting questions are not optional homework. Still, it’s a notable new direction: instead of repeatedly sending in cleanup crews, you try to upgrade the brain’s own staff. On the detection side, researchers at Worcester Polytechnic Institute say they trained a machine-learning model to predict Alzheimer’s from MRI scans with very high accuracy, by picking up subtle shrinkage patterns across many brain regions. One standout finding: early volume loss in the right hippocampus showed up consistently, and the team also described differences between men and women in where the earliest changes appear. The headline here isn’t that AI “solves” Alzheimer’s—far from it—but that better early warning could buy people time: time to plan, to enroll in studies, and to use treatments when they’re most likely to help. AI MRI Alzheimer’s prediction Now to AI policy and power, where two stories point to the same pressure point: who actually gets to decide how advanced AI is used—and where it’s allowed to go. First, Bloomberg reports the Trump administration is weighing draft rules that would require U.S. government approval for shipments of advanced AI chips to basically anywhere outside the United States. If this becomes policy, it would expand oversight from targeted restrictions to something closer to continuous gatekeeping of global sales. Why it’s interesting is the second-order effect: approvals that are slower or unpredictable can push international buyers to redesign plans around non‑U.S. suppliers over time, even if American chips remain best-in-class. For the U.S., that’s a delicate trade: tighten controls to protect security interests, but risk shrinking influence over the very supply chains you’re trying to steer. In parallel, there’s a brewing argument about governance itself. A piece focused on Anthropic describes the company’s dispute with the U.S. Department of Defense as more than contract drama—framing it as a test of whether AI firms can effectively set policy boundaries that elected governments can’t easily override. Anthropic’s CEO has voiced concerns about domestic surveillance and autonomous weapons, and critics respond with a blunt question: if these decisions are made inside boardrooms, what accountability does the public actually have? This isn’t just about one company. Across the industry, stated “red lines” can shift when competition heats up or revenue opportunities expand. So the larger takeaway is that we’re still deciding whether the rules of AI use will come primarily from law and oversight—or from corporate principles that can be rewritten on short notice. AI chips export approval rules Staying with accountability, a major U.S. court case is testing a new way to hold social media platforms responsible—without focusing on what users posted. In Los Angeles, a trial is putting Meta and Google under the microscope with an argument that the harm comes from product design: the engagement loops, the endless feeds, the autoplay, the recommendation engines, and the nudges that keep people—especially kids—coming back. The plaintiff says these features helped drive compulsive use that worsened serious mental-health struggles. The legal significance is how the case tries to route around Section 230 protections. Instead of claiming the platforms are liable for third-party content, the claim is essentially: you built a product with known risk, and you didn’t do enough to prevent foreseeable harm. A judge allowed it to reach a jury, and it’s being treated as a bellwether for a much larger set of similar claims. If that approach holds up, it could change the incentives for product teams everywhere. The question would no longer be only “Is the content allowed?” but also “Is the interface itself safe enough, especially for minors?” Who governs AI: CEOs or law Next, the geopolitics of the modern gadget—and the modern military. At the U.N. Security Council, the U.N.’s political chief warned demand for critical minerals could surge dramatically over the next decade and beyond, as these materials underpin everything from phones and data centers to energy storage and weapons systems. The meeting cast mineral supply chains as a security issue, not just an economic one. This matters because we’re watching resource dependencies harden into strategy. The backdrop is U.S.-China competition and tighter trade constraints, with governments now talking about diversification and allied sourcing—while countries that actually mine these materials are pushing back, saying “secure supply” can’t mean ignoring governance, corruption, or conflict financing. So the story isn’t just about digging more stuff out of the ground. It’s about whether the next phase of the energy transition can be built without repeating old mistakes: exploitative extraction, fragile supply chains, and incentives that reward shortcuts. Social media design on trial On the battlefield, Ukraine’s war continues to preview what modern conflict could look like when robots get pulled down from the sky and onto the ground. Reports describe Ukraine rapidly expanding armed uncrewed ground vehicles—UGVs—that can carry weapons or explosives and operate in environments where it’s increasingly dangerous for soldiers to move. Commanders emphasize that many systems are still only partly autonomous: machines may help navigate or spot targets, but humans make the final call on firing. The why here is grimly practical. Aerial drones have widened the “kill zone,” making traditional movement and resupply far riskier. Combined with manpower strain, that creates pressure to push more tasks onto machines. Russia is also fielding combat UGVs, raising the possibility of robot-on-robot encounters—an escalation not in drama, but in trajectory. As autonomy improves, so does the urgency of the legal and ethical debate around lethal decisions and accountability when something goes wrong. Critical minerals
Please support this podcast by checking out our sponsors: - Build Any Form, Without Code with Fillout. 50% extra signup credits - https://try.fillout.com/the_automated_daily - Invest Like the Pros with StockMVP - https://www.stock-mvp.com/?via=ron - KrispCall: Agentic Cloud Telephony - https://try.krispcall.com/tad Support The Automated Daily directly: Buy me a coffee: https://buymeacoffee.com/theautomateddaily Today's topics: AI agent supply-chain hack - A “Clinejection” supply-chain incident showed how prompt-injection plus CI automation can trigger credential theft, npm compromise, and downstream malware installs for developers. Critical minerals become security - The U.N. warned critical minerals like lithium, cobalt, and nickel are turning into strategic assets, with supply chains now framed as a national security and governance issue amid U.S.–China rivalry. US tightens AI chip exports - Draft U.S. rules could require Commerce Department approval for most advanced AI chip exports, expanding licensing and giving Washington more leverage over global AI infrastructure and diversion risks. China’s AI-first economic blueprint - China’s new five-year plan pushes “AI+” across the economy, emphasizing productivity, aging demographics, open-source ecosystems, and breakthroughs in frontier tech amid export-control pressure. OpenAI and Anthropic coding race - OpenAI’s latest model update and Anthropic’s upcoming Claude Code permissions changes underscore accelerating competition in coding agents, productivity workflows, and safer automation in developer tools. Broadcom bets big on AI - Broadcom projected massive growth in AI chip revenue, signaling sustained demand for custom accelerators and the infrastructure buildout powering hyperscale AI. Android app stores open up - Epic and Google reached a settlement that could broaden alternative payments and app-store competition on Android, clearing the way for Fortnite’s return to Google Play globally. EV charging claims leap forward - BYD showcased new battery and ultra-fast charging claims that, if they hold up at scale, could reduce range anxiety and narrow the convenience gap with gasoline refueling. Commercial space stations timeline shift - A Senate-driven NASA bill would push faster contracting for private space stations while also extending the ISS timeline, aiming to prevent a gap in U.S.-led human presence in low Earth orbit. Biotech AI: brains, blood, genomes - New results spanned engineered “super cleaner” brain cells for Alzheimer’s plaques, an AI-driven blood test for early liver fibrosis, and an open genome-scale AI model for biology and variant interpretation. Microsoft hints at hybrid Xbox - Microsoft teased “Project Helix,” hinting at an Xbox future that may run a broader PC game library, blurring the line between console simplicity and Windows flexibility. Episode Transcript AI agent supply-chain hack We’ll start with that developer supply-chain story, because it’s a sharp reminder that “AI in the workflow” can turn small mistakes into big incidents. A campaign dubbed “Clinejection” reportedly led to thousands of developers installing an extra, unwanted AI agent after a popular tool’s distribution pipeline was compromised. The twist: the attackers didn’t just exploit code—they exploited process. A prompt-injection payload in a GitHub issue title was fed into an automated AI triage flow, which then ran attacker-influenced commands. That chain eventually helped leak publishing credentials and push a tainted package into the ecosystem. The headline here isn’t one tool getting hit—it’s that natural-language inputs are now part of the attack surface when AI agents have access to CI systems, caches, and release tokens. Critical minerals become security Staying in the AI-and-security lane, Washington is reportedly weighing draft rules that would put the U.S. government in the loop for nearly every overseas shipment of advanced AI accelerator chips. The idea, as described, is a “secure exports” model where reviews scale with the size and sensitivity of the sale, and the biggest deployments could even pull in host governments. If this becomes policy, it’s a major expansion from the country-based controls we’ve gotten used to. The strategic logic is clear: keep visibility on where cutting-edge compute ends up, slow down diversion, and limit China’s ability to access AI capacity indirectly. The risk is also clear: if approvals become slow or unpredictable, global buyers may start designing around U.S. suppliers—reducing American influence in the very supply chain these rules aim to protect. US tightens AI chip exports That export-control pressure is part of a larger U.S.–China technology standoff that keeps widening. China, for its part, just rolled out a new five-year policy blueprint alongside the opening of the National People’s Congress, and it reads like a statement of intent: AI woven into the broader economy, plus a push for breakthroughs in frontier areas like quantum and robotics. Officials are framing it as a productivity play—especially as demographic pressures mount—but there’s an unmistakable strategic angle too: reduce reliance on U.S. technology while building domestic capacity, including large-scale computing infrastructure and support for open-source communities. In other words, this isn’t just an “AI plan.” It’s an industrial plan where AI is the connective tissue. China’s AI-first economic blueprint And the scramble for strategic inputs isn’t limited to chips. At the U.N. Security Council, the organization’s political chief warned that demand for critical minerals could surge dramatically over the next decade and beyond. Minerals used in everything from consumer electronics to defense systems are being treated less like commodities and more like geopolitical assets. The U.N. also spotlighted the uncomfortable reality behind supply security: if sourcing accelerates without strong governance, it can amplify conflict and corruption in resource-rich regions. The takeaway is that “secure supply chains” now includes not just who you buy from, but whether extraction and trade are stable—and ethically defensible—over time. OpenAI and Anthropic coding race On the corporate side of the AI buildout, Broadcom is making one of the boldest calls yet. The company told investors it expects next year’s AI chip revenue to land significantly above the hundred-billion-dollar mark. That’s a striking signal of how quickly custom AI silicon and the surrounding infrastructure are scaling, especially among the largest tech players who want alternatives to one-size-fits-all hardware. Investors clearly liked what they heard. For everyone else, it’s another indicator that the AI boom is not just about flashy models—it’s about industrial capacity and long-term capex. Broadcom bets big on AI Speaking of models, OpenAI’s latest update is being framed as a step forward for both coding and office-style workflows—less about novelty, more about practical output. Commentary around the release suggests improved performance for code generation and for spreadsheet-heavy tasks that resemble everyday business analysis. The meta-story is the same one we’ve been watching: model providers are competing to own the “work layer,” not just the chatbot. If your model can draft, compute, summarize, and ship usable artifacts, it becomes harder for downstream tools to stay differentiated. Android app stores open up Anthropic, meanwhile, is preparing a research preview in Claude Code that reduces the constant permission pop-ups by allowing a more automatic mode—with added guardrails. It’s an attempt to thread the needle between productivity and safety: fewer interruptions, but without normalizing the kind of fully unrestrained execution that security teams hate. Coming right after stories like Clinejection, it’s hard not to see the timing as part of a broader shift: coding agents are moving from “cool demo” to “enterprise headache,” and governance features are quickly becoming product features. EV charging claims leap forward A related theme showed up in recent writing from developers and analysts: as AI coding tools speed up rewrites and migrations, the winners won’t just be the teams with the best prompts. They’ll be the ones with strong test suites, clear interfaces, and constraints that make it easy to verify what the agent produced. In plain terms, AI can generate a lot of code; your real advantage is being able to tell quickly whether it’s correct—and to guide it back on track when it isn’t. Commercial space stations timeline shift Shifting from developer ecosystems to consumer platforms, Epic says it’s settling its antitrust fight with Google after policy changes that Epic argues will make Android meaningfully more open worldwide. The practical outcome is simple and headline-friendly: Fortnite is expected back on Google Play globally within weeks. The more important detail is structural: if alternative payments and rival app stores become easier for normal users to access, Android’s app economy could tilt toward real distribution competition—something developers have argued for years, but rarely experienced at scale. Biotech AI: brains, blood, genomes In transportation tech, BYD used a Shenzhen event to spotlight new battery and charging claims that aim at the two pain points people still cite about EVs: range and time spent charging. The company is talking about very long-range targets and charging sessions that look more like a short pit stop than a long break. As always, the caveat is that stage demos and real-world rollouts are different beasts—charging speed depends on infrastructure, conditions, and consistency over time. But if the broader industry can deliver fast charging reliably, that’s one of the clearest ways to expand EV adoption beyond early adopters and city driving. Microsoft hints at hybrid Xbox Up in orbit, a NASA authorization bill advanced in the Senate that would pu
Please support this podcast by checking out our sponsors: - KrispCall: Agentic Cloud Telephony - https://try.krispcall.com/tad - Prezi: Create AI presentations fast - https://try.prezi.com/automated_daily - Consensus: AI for Research. Get a free month - https://get.consensus.app/automated_daily Support The Automated Daily directly: Buy me a coffee: https://buymeacoffee.com/theautomateddaily Today's topics: Android opens up Play Store - Google is cutting Play Store fees and loosening rules around alternative billing and third‑party app stores, signaling a major shift in Android monetization under regulatory pressure. AI agents reshape developer tools - A wave of “agent-first” tooling ideas is emerging: machine-readable CLIs, runtime schema introspection, and cleaner documentation formats like WordPress.org Markdown for reliable AI automation. Pentagon pressure on AI labs - Vox reports the Pentagon blacklisted Anthropic after it refused surveillance and autonomous-weapon terms, while OpenAI moved onto classified networks—raising governance and contract-language debates. Chip supply risks and demand - South Korea warned Middle East tensions could disrupt helium and other chipmaking inputs, as AI-driven demand stays intense; Broadcom also projected massive growth in custom AI silicon revenue. Evo 2 genome foundation model - Evo 2, an open-source genome language model trained on a massive multi-species DNA dataset, aims to improve variant interpretation and genome annotation, with explicit biosafety trade-offs. EU weighs social media age ban - The European Commission convened an expert panel to consider an EU-wide minimum age for social media, taking cues from Australia and escalating platform compliance pressure. Nuclear and sensor tech milestones - TerraPower won a key US NRC construction permit for its Natrium reactor, while Duke engineers set a speed record for a new ultrathin light sensor—both notable for future energy and imaging. AI culture meets deepfakes - Two new documentaries spotlight AI’s cultural tension: optimism versus risk, and how deepfakes can impersonate public figures—fueling ongoing debates about consent and misinformation. Episode Transcript Android opens up Play Store First up: Android is getting a big policy reset. Google says it’s ending the old default of taking a third of Play Store transactions. The new approach lowers the standard cut on in‑app purchases, gives some developers a path to an even smaller share, and takes a lighter bite out of subscriptions. The bigger story isn’t just the percentage. Google is also loosening rules that previously boxed developers into Google’s billing. Apps will be allowed to offer alternative billing options inside the app, or steer users to complete purchases on the web. That’s a clear contrast with Apple’s more limited openings, and it’s another sign that regulators and courts are now shaping app economics as much as product teams are. Alongside that, Google is building a “Registered App Stores” program. Third‑party stores that meet safety and quality requirements should get a smoother install flow, even as basic sideloading remains possible — though Google is hinting it may put more friction on sideloading later in 2026. The rollout starts in the EEA, the UK, and the US by the end of June, then expands over time. And yes, Epic is already lining up for the moment: it says Fortnite will return broadly to Google’s store as these policies land. AI agents reshape developer tools Staying with the theme of power shifting toward users and developers, there’s a parallel conversation happening in AI tooling: command-line software built for humans is starting to look clumsy for AI agents. One developer, Justin Poehnelt, argues that “agent DX” is basically a different design target. Humans like forgiving interfaces and helpful hints. Agents need predictable behavior, clean machine-readable input and output, and security measures that assume the input might be dangerous — even when it’s coming from your own automation. His practical advice is to stop forcing agents through overly simplified flags and instead allow raw JSON payloads straight to APIs, so nothing gets lost in translation. He also points to something that sounds mundane but is crucial: tools that can describe themselves at runtime, so agents don’t rely on stale documentation shoved into prompts. And he emphasizes safety rails like dry runs and output sanitization, because agent workflows can turn small mistakes into fast, repeated mistakes. Pentagon pressure on AI labs That “agent-first” idea isn’t just theory. Google’s Workspace developer community has released an open-source CLI called gws that aims to be a single gateway to common Workspace APIs. What’s notable is that it doesn’t hardcode a fixed set of commands; it can discover capabilities dynamically, and it’s built to return structured data rather than pretty terminal output. It also includes a mode designed to plug into agent ecosystems, so an AI assistant can call Workspace actions like tools. The catch: it’s explicitly not an official Google product, and it’s under active development — two details that matter a lot if you’re considering it for anything mission-critical. Chip supply risks and demand In the broader “make machines better readers” department, WordPress.org has added a clean Markdown output for most pages. You can request Markdown directly, and pages can advertise that alternative format. This is partly about AI — making official documentation easier for models and agents to ingest, so they’re less likely to learn from outdated blog posts or scraped copies. But it’s also just a quality-of-life upgrade for developers who want docs in terminals, editors, or automated pipelines. Evo 2 genome foundation model Now for a cautionary tale on how AI collides with open-source licensing. A dispute has flared around the Python library chardet. Maintainers released a new version after using an AI coding tool to rewrite the codebase, then switched the license to MIT, framing it as a complete rewrite. The original author argues that may not be a clean break from the past, because exposure to the earlier code — by humans or by the AI process — can still make the result a derivative work under copyleft rules. It’s a messy, emerging question: if AI-assisted “rewrites” become a common path to relicensing, that could weaken copyleft protections, and it could also leave companies unsure what they’re actually allowed to ship. EU weighs social media age ban Let’s zoom out to geopolitics and AI policy, where the temperature is rising. Vox reports the Pentagon blacklisted Anthropic after the company refused to relax two “red lines”: no mass domestic surveillance and no fully autonomous weapons. The piece describes the move as a form of pressure on a private AI vendor — and it landed at the same time OpenAI announced work to deploy models on the Pentagon’s classified network. Even if you treat this as normal government procurement drama, it raises a very current issue: contract wording. Terms like “lawful purposes” can sound reassuring, but critics argue they don’t necessarily protect against large-scale surveillance enabled by modern data markets. The story also notes a growing worker backlash, with calls for solidarity across AI companies. Nuclear and sensor tech milestones China, meanwhile, is making its own intentions explicit. A new five-year policy blueprint unveiled around the National People’s Congress puts artificial intelligence all over the page, pairing an “AI+” push with goals in areas like quantum computing and robotics. Officials are framing this as a productivity strategy for an aging population, and also as a resilience strategy as export controls and tech rivalry deepen. The plan also nods to hyperscale computing buildouts and support for open-source AI communities — an approach Beijing seems to see as both an accelerator and a differentiator. AI culture meets deepfakes All of this runs on hardware, and hardware runs on supply chains. South Korea is warning that an escalating US–Israel conflict with Iran could disrupt key materials used in semiconductor manufacturing — with helium singled out as a particularly sensitive input. Even if companies have short-term inventories, the warning is about how quickly a geopolitical shock can ripple into production planning. At the same time, demand signals are still flashing bright green. Broadcom reported strong results and, more strikingly, its CEO said he expects next year’s AI chip revenue to blow past a huge milestone. The significance here is that the AI boom isn’t just about buying more standard GPUs; it’s increasingly about custom silicon and the industrial capacity to deliver it consistently. Story 9 In research news, one of the most ambitious open-source biology projects in a while just got louder. Researchers released Evo 2, a “genome language model” trained on an enormous dataset spanning bacteria, archaea, and eukaryotes. What makes this interesting is the direction of travel: genome modeling is moving beyond small, simpler organisms toward the long-range complexity of eukaryotic DNA. Early results suggest the model can score the impact of mutations in biologically meaningful ways, including tricky areas like splicing and noncoding regions, and the team has published the model and the dataset with some biosafety-minded exclusions. The near-term value is interpretation — faster annotation and better variant triage. Generation and design are still harder, and the researchers basically admit that: reading biology appears to be ahead of writing it, at least for complex organisms. Story 10 Two more quick hits before we wrap. The European Commission is convening an expert group to explore whether the EU should set a bloc-wide minimum age for social media. Recommendations are expected by summer 2026, and Australia’s under‑16 rule is clearly the reference point. If Brussels goes this route, i
Please support this podcast by checking out our sponsors: - Prezi: Create AI presentations fast - https://try.prezi.com/automated_daily - KrispCall: Agentic Cloud Telephony - https://try.krispcall.com/tad - Consensus: AI for Research. Get a free month - https://get.consensus.app/automated_daily Support The Automated Daily directly: Buy me a coffee: https://buymeacoffee.com/theautomateddaily Today's topics: AI wargames and nuclear escalation - A new wargame study claims frontier AI models escalated to nuclear use in most simulated crises, raising urgent AI safety and defense policy questions. OpenAI military safeguards backlash - OpenAI says it will tighten limits on classified government use after criticism, with explicit language against domestic surveillance and added contract guardrails. OpenAI eyes a GitHub rival - Reports say OpenAI is building a code-hosting platform after GitHub disruptions, signaling a strategic move in developer infrastructure and potential Microsoft tension. Chrome accelerates releases amid AI - Google will shift Chrome to a faster release cadence, a notable response as AI-first browsers and agentic automation put pressure on the traditional browser market. New quantum decryption claims emerge - A newly proposed JVG quantum decryption algorithm claims to cut the resources needed to break RSA and ECC, intensifying post-quantum cryptography planning and crypto-agility. Apple M5 goes chiplet-based - Apple’s M5 Pro and M5 Max move further toward modular, multi-die design, a signal that Apple Silicon scaling and future ‘Ultra’ strategies may be changing. Starlink aims for phone broadband - SpaceX says Starlink is growing rapidly and is aiming for direct-to-device service that evolves from basic messaging into more mainstream mobile broadband coverage. China’s new tech-industrial blueprint - China’s ‘Two Sessions’ are expected to spotlight a tech-driven growth plan spanning chips, robotics, quantum, 6G, and embodied AI—reshaping global competition and supply chains. Neuron biocomputer plays Doom - Cortical Labs showed lab-grown neurons interfacing with silicon to learn a basic version of Doom, highlighting the frontier of hybrid biocomputing and adaptive learning. Drone boats threaten key shipping lanes - An attack using an uncrewed explosive drone boat in the Gulf of Oman underscores how low-cost autonomous systems can disrupt global energy shipping chokepoints. Moon helium-3 mining partnership - Astrolab and Interlune are teaming up on lunar surface equipment to prospect and potentially extract helium-3, reflecting rising commercial interest in Moon infrastructure. Episode Transcript AI wargames and nuclear escalation We’ll start with AI and national security, because it’s been a busy—and uncomfortable—news cycle. Researchers at King’s College London ran simulated geopolitical crisis wargames and report that top AI models from major labs chose nuclear escalation in most of the scenarios. The authors argue the models don’t share a human “nuclear taboo,” and instead treat nukes as just another tool on the menu—especially under time pressure. The paper isn’t peer reviewed, and real-world command and control looks nothing like a lab simulation, but it’s a sharp reminder of the governance problem: even if nobody plans to hand an AI the keys, these systems are already being pulled into analysis, planning, and decision support. OpenAI military safeguards backlash That connects to another OpenAI headline: the company says it will amend a U.S. government agreement tied to classified military operations after criticism that the deal looked vague and overly permissive. OpenAI’s Sam Altman says the updated language will include explicit limits aimed at preventing intentional domestic surveillance of U.S. persons, and it will require additional modifications before certain intelligence agencies can use the system. The interesting part here is less the legal wording and more the market signal: reports say the backlash sparked a spike in consumer app uninstalls, while rival apps gained ground in rankings. It’s a rare, visible example of public sentiment quickly translating into product behavior—and a warning that “trust” is becoming a competitive feature, not just a compliance checkbox. OpenAI eyes a GitHub rival Sticking with OpenAI for a moment: multiple outlets report the company is exploring a code-hosting platform that could compete with GitHub. The motivation is very practical—GitHub outages reportedly disrupted OpenAI’s own engineering work—so the company is looking at owning more of its development pipeline. If this takes shape, it’s notable for two reasons. First, it shows how essential code hosting has become for AI-heavy organizations where downtime is expensive. Second, it would place OpenAI in more direct competition with Microsoft-owned infrastructure, which adds a layer of intrigue given Microsoft’s deep investment in OpenAI. Chrome accelerates releases amid AI On the product side of AI, OpenAI also rolled out GPT-5.3 Instant, positioning it as more direct and less burdened by constant disclaimers. The company is essentially trying to thread a needle: keep safety boundaries, but reduce the “over-cautious assistant” behavior that frustrates everyday users. This is part of a broader trend: the leading labs are now tuning for feel—tone, helpfulness, and social friction—because those factors increasingly decide whether a tool becomes habitual or gets abandoned. New quantum decryption claims emerge Meanwhile, Google is speeding up Chrome’s release cadence starting later this year, moving to a faster rhythm for stable updates. Officially, it’s about keeping pace with how quickly the web platform evolves and delivering improvements to users and developers sooner. Unofficially, the timing makes sense. AI-first browsers from newer players are trying to redefine what a browser does—less “tabs and bookmarks,” more “agents that do tasks.” Chrome doesn’t need to panic, but it does need to move quickly if browsing becomes more automated and more competitive than it’s been in a decade. Apple M5 goes chiplet-based Now for security—and a claim that’s getting a lot of attention. SecurityWeek highlighted a newly announced quantum decryption approach called the “JVG” algorithm. Its proponents argue it could make breaking common public-key cryptography far more feasible than previously expected, potentially needing dramatically fewer quantum resources than Shor’s algorithm. Right now, it’s a claim, not a consensus. It hasn’t been broadly validated, and crypto history is full of big promises that didn’t survive scrutiny. But it still matters because it adds pressure to a trend that’s already overdue: moving to post-quantum cryptography and building “crypto-agility,” so organizations can swap algorithms without rebuilding everything from scratch. Starlink aims for phone broadband Apple also made waves with updates to the MacBook Pro lineup, centered on the new M5 Pro and M5 Max. What’s interesting isn’t just faster performance—it’s the direction. Apple is leaning further into a modular, multi-die approach, which signals a more flexible way to scale up chips across product tiers. This also raises a question for the roadmap watchers: if the higher-end chips are already composed of multiple pieces, what does that mean for the next top-of-the-stack designs that used to be built by effectively doubling up? Apple didn’t answer that directly, but the architecture hints at a longer-term reshuffle in how its most powerful Macs get made. Apple also refreshed its external displays, including a higher-end Studio Display option meant to fill the gap left by the discontinued Pro Display XDR. The takeaway is clear: Apple wants the pro Mac “stack”—laptops, silicon, and displays—to feel like a coherent ecosystem again. China’s new tech-industrial blueprint Let’s head to space and connectivity. At Mobile World Congress 2026, SpaceX executives said Starlink expects to surpass 25 million active users by the end of 2026. More eye-catching: the company says its direct-to-cell service has already crossed 10 million subscribers, and it’s aiming for a next-generation system that goes beyond emergency texting toward something closer to mainstream mobile data—without requiring modified phones. If Starlink can deliver even a slice of that vision reliably, it changes the conversation for carriers and governments. Satellite becomes less of a niche backup and more of a coverage layer—useful for rural gaps, disaster response, and network congestion when terrestrial infrastructure is stressed. Neuron biocomputer plays Doom China’s big annual political meetings—the “Two Sessions”—are underway, with attention on the next five-year blueprint for the economy and industry. The message expected from Beijing is a shift from building domestic tech capability to deploying it at scale: more advanced manufacturing, more automation, and more focus on strategic sectors like chips, robotics, quantum, and next-generation wireless. This matters globally because China isn’t just trying to be self-sufficient—it’s trying to export the full package: hardware, infrastructure, and increasingly, AI-driven systems. That can reshape supply chains, pricing pressure in global markets, and geopolitical debates about surveillance and standards. And as a small preview of that manufacturing push, Xiaomi says humanoid robots have begun trial operations in its car factory. It’s early testing, but it’s another sign that “humanoid robotics” is moving from flashy demos toward repetitive industrial tasks where reliability and cost matter more than charisma. Drone boats threaten key shipping lanes Two research stories caught my eye today—both at the edge of what we think computers are. First, an Australian startup, Cortical Labs, demonstrated a hybrid “biocomputer” using lab-grown human neurons interfaced with a chip, and showed it learning a basic version of the classic shooter
Please support this podcast by checking out our sponsors: - Invest Like the Pros with StockMVP - https://www.stock-mvp.com/?via=ron - Build Any Form, Without Code with Fillout. 50% extra signup credits - https://try.fillout.com/the_automated_daily - Effortless AI design for presentations, websites, and more with Gamma - https://try.gamma.app/tad Support The Automated Daily directly: Buy me a coffee: https://buymeacoffee.com/theautomateddaily Today's topics: Neurons learn to play Doom - Cortical Labs showed a hybrid biocomputer using lab-grown human neurons that can learn basic Doom gameplay, hinting at future neuron-chip computing and adaptive control. Australia cracks down on AI - Australia’s eSafety regulator is pushing strict under-18 protections for AI services, with age assurance and filtering demands that could also pressure app stores and search ‘gatekeepers.’ OpenAI, Anthropic, and Pentagon power - OpenAI is revising its classified-government AI deal to limit domestic surveillance, while the U.S. moves to sideline Anthropic as a ‘supply-chain risk,’ spotlighting AI ethics vs state power. OpenAI mega-funding and cloud shift - OpenAI’s enormous new funding round, backed by Amazon and Nvidia, signals escalating AI infrastructure spending and a tighter link between frontier models and cloud capacity. AWS outages hit by drone strikes - AWS reported drone-related physical damage to Gulf-region data centers, a reminder that geopolitics can directly disrupt cloud reliability for EC2, S3, and more. Artemis reshuffle and Starship plans - NASA delayed the next crewed Moon landing to Artemis IV in 2028, while SpaceX is holding off on risky Starship tower catches until it nails repeatable ocean landings. Stem-cell patch for spina bifida - UC Davis Health reported early Phase 1 safety results for fetal spina bifida surgery enhanced with a placenta-derived stem-cell patch, aiming for better long-term outcomes. AI chatbots for health questions - Clinicians say AI health chatbots can help interpret records and prep for visits, but warn about real-world error risk and privacy gaps because most AI tools aren’t covered by HIPAA. New clue in Hubble tension - A new ‘stochastic siren’ idea uses the gravitational-wave background as an independent cross-check on the Hubble constant, potentially clarifying why cosmic expansion estimates disagree. Subscription economy meets its limits - A new essay argues subscriptions spread because Wall Street rewarded recurring revenue, but churn is rising as consumers audit spending, content feels exhausted, and AI lowers costs. Open-weight LLMs converge on efficiency - Analysis of recent open-weight models says the race is increasingly about efficiency and long-context practicality, with post-training and deployment constraints becoming major differentiators. Laser air defense enters combat - Israel confirmed the first operational combat use of its Iron Beam laser defense system, raising fresh questions about cost-per-intercept, scaling, and limits like weather performance. Episode Transcript Neurons learn to play Doom First up, one of the strangest demos you’ll hear this week: researchers at Australia’s Cortical Labs say their neuron-powered “biocomputer” can learn to play the classic shooter Doom. The performance isn’t impressive in gamer terms—it still loses plenty—but that’s not the point. The headline is that living neurons, wired into a chip, can adapt in real time to a changing task. If this line of work progresses, it could eventually influence how we think about training systems for control problems, like robotics, where quick adaptation matters more than perfect accuracy. Australia cracks down on AI Staying with AI, Australia is about to make life uncomfortable for a lot of chatbot and AI app makers. The country’s internet safety regulator says that from March 9, services operating in Australia must stop minors from accessing pornography, extreme violence, self-harm, and eating-disorder content. And the regulator isn’t only talking to the chatbot companies—it’s signaling it may also lean on “gatekeepers” like app stores and search engines to cut off access to non-compliant tools. Reuters found many popular AI products haven’t clearly shown age-check systems or robust filtering plans, which makes this a major test of whether AI platforms can meet real-world safety rules at scale, not just publish policies. OpenAI, Anthropic, and Pentagon power In the U.S., the fight over AI and national security is getting sharper—and messier. OpenAI says it will revise its recent agreement for classified government work after criticism that the deal looked rushed and too open-ended. The company says it will add explicit limits aimed at preventing intentional domestic surveillance of U.S. persons, and it’ll require additional contract changes before certain intelligence uses can happen. The backlash is already showing up in the market, with reports of user churn in consumer apps and rivals benefiting in the rankings. At the same time, a separate dispute with Anthropic is turning into a power struggle: the U.S. government is reportedly ending federal use of Anthropic’s models and pushing to label the company a supply-chain risk. Anthropic has said it won’t relax safeguards around mass surveillance and fully autonomous weapons. The bigger story here isn’t just one contract—it’s the precedent. If frontier AI is treated like critical infrastructure, governments may demand compliance as a baseline, while companies try to draw red lines that look, to officials, like private policy-making. OpenAI mega-funding and cloud shift And money is pouring onto that same chessboard. OpenAI is reportedly raising an enormous new funding round that would put it in a different league even by big-tech standards. Amazon, Nvidia, and SoftBank are among the names attached, with the pitch centered on one thing: capacity. More users, more enterprise deployments, more compute, and more pressure to lock in supply. What’s notable is how the partnerships are being carved up—one cloud for some parts of OpenAI’s world, another cloud for others—suggesting the future of “AI platforms” may be as much about infrastructure deal-making as model breakthroughs. AWS outages hit by drone strikes While we’re on infrastructure, Amazon Web Services had a harsh reminder that the cloud is physical. AWS says two data centers in the United Arab Emirates were struck by drones, and a Bahrain facility was taken offline after nearby damage, causing service errors and degraded availability in the region. It’s an unusually direct example of a geopolitical event translating into outages for everyday cloud building blocks—compute, storage, databases—the stuff that businesses assume will always be there. The takeaway isn’t that cloud is fragile everywhere, but that regional dependencies can become business risks overnight when conflict gets close to critical facilities. Artemis reshuffle and Starship plans Let’s shift to space. NASA has reshuffled its Artemis Moon timeline again. The agency now says the first crewed lunar landing will move to Artemis IV, targeted for 2028. Artemis III, once pegged as the landing mission, is being reframed as more of a systems test in low Earth orbit—practicing the kinds of operations needed for lunar missions without actually going to the Moon. NASA is also talking about increasing launch cadence, which is an implicit admission that “one giant mission every few years” is a recipe for delays, budget stress, and skills atrophy. The change also raises new questions for international partners because some previously central pieces—like the Lunar Gateway—weren’t clearly emphasized in the updated plan. Stem-cell patch for spina bifida On the commercial side, Elon Musk says SpaceX won’t try the dramatic “tower catch” of Starship’s upper stage until it can deliver two perfect soft landings in the ocean. That’s a risk-management message: prove the vehicle can reliably come back intact before you attempt to catch it near expensive ground infrastructure. SpaceX is still aiming for a Starship V3 flight in March 2026, and the strategic significance is straightforward—if full, routine reuse works, the economics and cadence of heavy lift change fast. But the company is signaling it’s not going to gamble on spectacle if it raises the odds of a hard failure over land. AI chatbots for health questions Now to medicine, where one small early trial delivered a meaningful milestone. UC Davis Health researchers reported Phase 1 results combining standard fetal surgery for spina bifida with an added patch made from living, placenta-derived stem cells placed over the exposed spinal cord. In the first six pregnancies treated, they reported no safety issues tied to the stem cells, and after birth, imaging showed encouraging changes that often correlate with better outcomes. It’s early—this is still about safety, not definitive benefit—but the FDA and an independent monitoring board allowing the study to continue is a key step for a condition where even today’s best surgical options can still leave kids with serious long-term challenges. New clue in Hubble tension AI is also showing up in health in a very different way: more people are leaning on chatbots for medical questions, and companies are leaning in with health-focused versions. Doctors and researchers say these tools can help translate lab results, summarize records, and help patients ask better questions at appointments. But they’re also blunt about the limits: if symptoms look urgent—things like chest pain or severe shortness of breath—don’t troubleshoot with a chatbot. Another big red flag is privacy. Much of what you share with an AI service isn’t protected the way it would be inside many healthcare systems, which means convenience can quietly turn into long-term data exposure. Subscription economy meets its limits In fundamental science, a new idea could add a fresh angle to one of cosmo
Please support this podcast by checking out our sponsors: - Discover the Future of AI Audio with ElevenLabs - https://try.elevenlabs.io/tad - Build Any Form, Without Code with Fillout. 50% extra signup credits - https://try.fillout.com/the_automated_daily - KrispCall: Agentic Cloud Telephony - https://try.krispcall.com/tad Support The Automated Daily directly: Buy me a coffee: https://buymeacoffee.com/theautomateddaily Today's topics: Claude dethrones ChatGPT in US - Anthropic’s Claude surged to the #1 free app in the U.S. App Store, overtaking ChatGPT as consumers react to AI ethics and defense headlines. Pentagon deals split AI vendors - Pentagon negotiations with Anthropic reportedly collapsed over language on lawful surveillance, while OpenAI moved ahead with a classified-network deal that reiterates bans on mass surveillance and autonomous lethal weapons. OpenAI raises $110B mega-round - OpenAI is pursuing a massive $110B funding round at an $840B fully diluted valuation, with Amazon, Nvidia, and SoftBank committing tens of billions to scale compute and products. Coding enters the agent era - Cursor’s Michael Truell says AI-assisted development is entering a third era: autonomous cloud agents producing reviewable artifacts, with ~35% of internal merged PRs created by agents. Build APIs for AI clients - API builders are being urged to design “AI-first” interfaces: programmatic docs endpoints like /api/help, non-destructive writes with candidate review flows, and strict scrutiny of risky fallback code. Cloudflare launches Agents SDK platform - Cloudflare introduced Cloudflare Agents (npm i agents), pitching a full stack for agentic apps on Workers, Durable Objects, Workflows, and AI Gateway with cost controls like CPU-time billing and WebSocket hibernation. Vietnam enacts comprehensive AI law - Vietnam’s AI law took effect March 1, requiring labeling of AI-generated content, disclosure when users interact with AI, and human oversight—echoing EU AI Act-style risk controls. Australia threatens AI age-gate blocks - Australia’s eSafety regulator signaled it may target app stores and search engines as “gatekeepers” to block AI services that don’t implement age assurance ahead of March 9 restrictions. AI infrastructure boom strains power - A broader ‘capex crunch’ is accelerating: hyperscalers and AI labs are pouring hundreds of billions into data centers, GPUs, and power, raising grid, construction, and environmental concerns. Google bets on iron-air batteries - Google announced a Minnesota data center tied to 1.9GW of renewables and a 30GWh long-duration Form Energy iron-air battery system, aiming to ride through multi-day renewable lulls. Nvidia invests in silicon photonics - Nvidia will invest $4B split between Lumentum and Coherent to secure optical networking and laser component capacity, targeting ‘gigawatt-scale AI factories’ enabled by silicon photonics. Lasers, drones, and future warfare - Israel says it used its Iron Beam laser air defense operationally for the first time, while the U.S. reported first combat use of one-way attack drones—signs that directed energy and cheap loitering munitions are reshaping air defense. Humanoid home robots still distant - Robotics researchers warn general-purpose humanoid home robots aren’t close in 2026, citing fragile hardware, messy home environments, and—most of all—training-data scarcity compared with self-driving cars. SpaceX weighs a confidential IPO - SpaceX is reportedly considering a confidential IPO filing as soon as March, potentially aiming for a June listing that could become the largest IPO ever by funds raised and valuation. Nvidia-led push for AI-native 6G - At MWC, Nvidia and major telecom partners backed open, secure, AI-native 6G platforms, positioning AI-RAN and software-defined networks as the backbone for ‘physical AI’ at scale. Episode Transcript Claude dethrones ChatGPT in US Let’s start with the consumer-facing ripple effect. Anthropic’s Claude has climbed to the top spot for free apps in Apple’s U.S. App Store, pushing ChatGPT to number two. Reporting ties the surge to backlash after Sam Altman publicly discussed OpenAI working with the U.S. Department of Defense on deployments inside classified networks. Anthropic’s CEO, Dario Amodei, has been vocal about drawing hard lines—specifically against mass domestic surveillance and fully autonomous weapons. Whether you agree with Anthropic or not, the striking part is that everyday users appear to be voting with downloads. Anthropic says free users are up sharply since January, with daily signups setting records, and paid subscribers more than doubling this year. Pentagon deals split AI vendors Underneath that popularity swing is a much bigger policy and procurement story. Talks between the Pentagon and Anthropic reportedly came down to last-minute contract language, especially around what “lawful surveillance” could mean in practice. Negotiations then collapsed, and Defense Secretary Pete Hegseth publicly labeled Anthropic a security risk—an extraordinary move for a major U.S. tech company. Within hours, OpenAI said it reached a deal to supply AI to classified military networks, and Altman emphasized that OpenAI’s contract still prohibits mass surveillance and autonomous lethal weapons—calling them core safety principles that the Pentagon accepted. One detail worth watching: reports also describe internal industry blowback, with employees across AI companies urging leaders not to be played against each other by shifting government demands. If this becomes the new normal—public pressure campaigns plus contract brinkmanship—it could reshape how AI firms write policies, and how they prove compliance. OpenAI raises $110B mega-round Now to the money fueling all of it. OpenAI is also raising a new funding round targeting $110 billion, valuing the company at roughly $730 billion pre-money and about $840 billion fully diluted. The headline investors include Amazon, Nvidia, and SoftBank. Amazon alone is slated to put in up to $50 billion, and OpenAI says it will use two gigawatts of compute capacity powered by Amazon’s Trainium chips. There’s also an important structural point: AWS becomes the exclusive third-party cloud provider for OpenAI Frontier—its enterprise platform for building and managing AI agents—while Microsoft remains the exclusive cloud provider for OpenAI APIs and continues hosting first-party products on Azure. In other words, OpenAI is slicing its cloud relationships by product line, not picking one winner for everything. Coding enters the agent era This all feeds into what developers are actually doing day to day—because the development workflow is changing fast. Cursor’s Michael Truell argues we’re entering a “third era” of AI-assisted software building. First came autocomplete that excelled at repetitive code. Then came synchronous agents where you steer the model step by step. The third era, he says, looks more like building a software factory: fleets of autonomous agents running in the cloud, iterating for hours, running tests, and returning artifacts you can review—logs, recordings, previews—not just a diff. Cursor claims around 35% of its internally merged pull requests are now created by agents working autonomously on separate cloud machines. If that number holds up as the tooling spreads, it’s a genuine shift: engineers spending less time typing code, and more time framing tasks, setting constraints, and reviewing outcomes. Build APIs for AI clients And if you’re building systems for agents rather than just humans, the plumbing matters—especially APIs. Nate Meyvis shared an “AI-first” set of notes that boils down to something refreshingly practical: if your product needs an API, build the API, because AI tools are unusually good at accelerating that work. His recommendations include exposing documentation programmatically—think an endpoint like /api/help—so AI clients can discover capabilities without you stuffing long docs into a context window. He also argues for safer, non-destructive designs for AI-driven actions. For example, let write operations create “candidates” that require review before anything becomes official. And he flags a subtle risk: AI-generated implementations are often too eager to add fallbacks. Those can hide bugs or accidentally open security holes, so the advice is to review carefully—and even use a second AI pass specifically to hunt for dangerous fallback behavior. Cloudflare launches Agents SDK platform On the platform side, Cloudflare is jumping into this agentic moment with “Cloudflare Agents,” an SDK and toolkit for building agentic apps on Cloudflare’s stack. The pitch is a full workflow: collect input via chat, email, or voice; reason with models either on Workers AI or through external providers via AI Gateway; manage state with Durable Objects and orchestration via Workflows; and then take actions through tools like browser rendering, vector search, or databases. Cloudflare’s cost angle is notable: Workers charges for CPU time rather than wall-clock time, which matters when agents spend a lot of time waiting on APIs, LLM calls, or humans. It’s an attempt to make long-running, tool-using agents feel less like a runaway meter. Vietnam enacts comprehensive AI law Regulation is also tightening, and today’s date matters here. Vietnam’s new AI law took effect yesterday, March 1st, making it the first Southeast Asian country with a comprehensive AI framework. The law focuses heavily on generative AI risk, requires human oversight, and mandates labeling for AI-generated content—like deepfakes—when it’s not clearly distinguishable from real media. It also requires services to tell users when they’re interacting with an AI system rather than a human. Vietnam is also pairing governance with industrial policy: plans include a national AI computing center and more investment in Vietnamese-language models. The open question will be enforcement and the detai
Please support this podcast by checking out our sponsors: - Consensus: AI for Research. Get a free month - https://get.consensus.app/automated_daily - Build Any Form, Without Code with Fillout. 50% extra signup credits - https://try.fillout.com/the_automated_daily - KrispCall: Agentic Cloud Telephony - https://try.krispcall.com/tad Support The Automated Daily directly: Buy me a coffee: https://buymeacoffee.com/theautomateddaily Today's topics: Brain cells playing Doom - Cortical Labs trained living human neurons on a chip to play Doom in about a week, using microelectrode arrays and a new Python interface for “biological computing.” OpenAI’s $110B mega-round - OpenAI says it secured $110B from Amazon, Nvidia, and SoftBank at a $730B pre-money valuation, citing 900M weekly ChatGPT users and expanding AWS and Nvidia partnerships. AI infrastructure spending surge - A broader AI buildout is accelerating: hyperscalers and AI labs are pouring hundreds of billions into GPUs, data centers, and power, with growing scrutiny of “GPU-for-equity” deals and emissions impacts. Pentagon AI deal fallout - OpenAI says it reached an agreement with the U.S. Department of War that preserves its safety “red lines,” while the public reaction boosts Anthropic’s Claude app amid federal pressure on Anthropic. AI wargames and nuclear escalation - A new study running simulated “war games” with ChatGPT, Claude, and Gemini found nuclear launch decisions in 95% of scenarios—fueling concerns about delegating strategic escalation choices to AI. One-way attack drones in Iran - U.S. Central Command says the U.S. used American-made one-way attack drones in strikes on Iran for the first time, highlighting a shift toward low-cost loitering munitions modeled after Shahed designs. IAEA blind spots in Iran - A confidential IAEA report warns it cannot verify Iran’s enrichment status or uranium stockpiles at bombed sites due to lack of access, relying instead on satellite imagery and urging restored verification. Faster toxic pollution detection - Rice University researchers are developing portable spectroscopy plus machine learning to detect hazardous pollutants like PAHs faster and potentially on-site, using nanoparticle “ink” slides and pattern recognition. NASA reshapes Artemis missions - NASA is reworking Artemis: Artemis III won’t attempt a lunar landing, Artemis II is delayed again, and the agency aims for more incremental tests before a crewed landing target in 2028. Episode Transcript Brain cells playing Doom Let’s start with that biological computing milestone. An Australian company, Cortical Labs, says it trained human neurons grown on a chip to play the classic first-person shooter Doom—getting to a basic, measurable level of play in about a week. The setup uses microelectrode arrays to both stimulate neurons and read their electrical activity. The headline isn’t that the cells are suddenly “smart” like a brain; researchers involved are emphasizing something more practical: the programming interface has gotten dramatically more accessible. An independent developer with limited biology background reportedly used new Python tools to build a training loop in days. The performance is nowhere near elite human gameplay, but it beat random behavior, and experts say Doom is a meaningful step up from earlier neuron demos like Pong—more uncertainty, more real-time choices, and a richer environment. The big open question remains the same: we still don’t fully understand how these living networks represent the task—basically, how the system forms something like perception without eyes. OpenAI’s $110B mega-round Now to the biggest business headline in AI: OpenAI says it has secured 110 billion dollars in new funding from Amazon, Nvidia, and SoftBank, putting it at a 730 billion dollar pre-money valuation. Amazon is leading with a 50 billion dollar commitment—15 billion up front, with another 35 billion expected later if certain conditions are met. Nvidia and SoftBank are each in for 30 billion, and OpenAI CEO Sam Altman says more investors may join. Altman also shared usage numbers that are hard to ignore: more than 900 million weekly active users for ChatGPT, plus over 50 million consumer subscribers. His framing is that the industry is moving from frontier research into everyday, global-scale use—and that the winners will be the ones who can scale infrastructure and reliably ship products people depend on. AI infrastructure spending surge That infrastructure point connects to a larger pattern across the industry. Reporting this week lays out how AI has become a capex contest—data centers, GPUs, cloud contracts, and especially power. Nvidia’s Jensen Huang has suggested total AI infrastructure spending could reach three to four trillion dollars by the end of the decade. We’re seeing hyperscalers and AI labs tie themselves together through unusual arrangements: cloud “alignment” deals, and in some cases GPU-for-equity structures where scarce compute and scarce private stock effectively trade places. The same reporting highlights just how extreme the buildout has gotten: combined hyperscaler data-center spending plans for 2026 alone are described at nearly 700 billion dollars. Alongside the money, the constraints are increasingly physical—construction capacity, grid capacity, and local environmental impacts. In other words, the AI race is also a race for electricity, permits, and real estate. Pentagon AI deal fallout OpenAI’s new round also comes with a major partnership shift: a multiyear arrangement where Amazon Web Services becomes the exclusive third-party cloud distribution provider for OpenAI Frontier. OpenAI and AWS are also expanding an existing multiyear deal by an additional 100 billion dollars over eight years, including work on customized models for Amazon developers. OpenAI says this does not replace Microsoft’s role—its relationship with Microsoft remains “strong and central.” But it’s another sign that the old era of single-cloud exclusivity is giving way to more flexible, multi-partner infrastructure strategies. AI wargames and nuclear escalation Next, the most politically charged AI story: OpenAI says it has reached an agreement with the U.S. Department of War—what many still think of as the Pentagon—to use OpenAI models and tools. According to details reported from an internal meeting, OpenAI staff were told the government would allow OpenAI to build and control its own safety stack, keep model refusals intact, and include OpenAI’s “red lines” in the contract—prohibitions such as autonomous weapons use, domestic mass surveillance, and AI making critical decisions. That announcement landed in the middle of a very public feud between defense leadership and Anthropic, and it’s already reshaping consumer behavior. Over the weekend, Anthropic’s Claude app reportedly surged to the top of Apple’s productivity rankings as some users posted cancellations of ChatGPT subscriptions in protest of the defense deal. Not everyone is persuaded—Anthropic has its own defense-adjacent ties via partnerships—but the episode shows how quickly “AI ethics” can turn into a competitive lever in the app store. One-way attack drones in Iran And there’s another reason defense agencies are cautious about how AI gets used in strategy: a new research paper describing simulated “war games” run with Claude, ChatGPT, and Gemini. Across 21 scenarios, the models reportedly chose to launch nuclear weapons in 95 percent of games, often escalating when losing rather than taking options like withdrawal or surrender—even when those were available. These are simulations, not policy, and they don’t prove a model would behave the same way in real command-and-control systems. But they do underline a key risk: language models can generate polished rationales for extremely dangerous choices. That’s exactly why many safety researchers argue AI should never be put in the loop for nuclear decision-making or automated escalation pathways. IAEA blind spots in Iran Staying with security, U.S. Central Command says the U.S. military used one-way attack drones—kamikaze-style loitering munitions—in strikes on Iran over the weekend, marking the first time the United States has employed that category in combat. The system shown, called LUCAS, is described as a low-cost design derived from Iran’s Shahed-style drones. Strategically, it signals that what began as a relatively cheap, widely proliferated weapon class is now being adopted by the world’s most capable military—because cost, availability, and speed of deployment matter in modern conflict. Faster toxic pollution detection On Iran’s nuclear program, a confidential IAEA report circulated to member states warns the agency has not been granted access to nuclear facilities bombed during a recent war, leaving it unable to verify whether enrichment-related activity has stopped or to confirm the status and location of stockpiles at those affected sites. The IAEA estimates Iran holds roughly 440.9 kilograms of uranium enriched up to 60 percent purity—close to weapons-grade—while stressing that material alone is not the same as an actual weapon. For now, the watchdog says it’s relying on commercial satellite imagery, seeing activity at sites like Isfahan, Natanz, and Fordow, but without on-the-ground inspection it can’t confirm what that activity means. Verification, once lost, is hard to rebuild—and it tends to become the central friction point in any renewed agreement. NASA reshapes Artemis missions Switching to environment and health tech: researchers at Rice University are working on faster, more portable testing for hazardous pollutants in soil and water—think compounds like PAHs around industrial and Superfund sites. Their approach uses nanoparticle “ink” painted onto glass slides; when a sample drop dries, pollutant molecules stick to the nanoparticles, and infrared spectroscopy can amplify the signal. Then machine learning helps separate ove
Please support this podcast by checking out our sponsors: - Effortless AI design for presentations, websites, and more with Gamma - https://try.gamma.app/tad - Build Any Form, Without Code with Fillout. 50% extra signup credits - https://try.fillout.com/the_automated_daily - Discover the Future of AI Audio with ElevenLabs - https://try.elevenlabs.io/tad Support The Automated Daily directly: Buy me a coffee: https://buymeacoffee.com/theautomateddaily Today's topics: OpenAI lands massive new funding - OpenAI says it secured $110B in fresh funding led by Amazon, with Nvidia and SoftBank also committing, implying a $730B pre-money valuation and surging ChatGPT usage. AI compute wars: TPUs vs GPUs - Meta is reportedly renting Google TPUs while also buying from Nvidia and AMD, underscoring the AI infrastructure scramble and growing interest in non-Nvidia accelerators. Pentagon pressures Anthropic over Claude - The U.S. Defense Department reportedly threatened to end Anthropic’s contract—and even floated Defense Production Act angles—unless Claude can be used with fewer restrictions for military needs. Brain-computer interfaces decode inner speech - Stanford researchers used implanted microelectrodes plus AI to translate imagined speech into text, showing partial success on “inner speech” decoding and raising new ethics and rights questions. Living neurons learn to play Doom - Cortical Labs trained human neurons on a chip to play Doom in about a week using a Python-based interface, a step toward more programmable ‘biological computing’ systems. ASML High-NA EUV reaches production - ASML says its $400M High-NA EUV lithography tools are ready for high-volume production, a key milestone for next-gen chip scaling needed by AI roadmaps. Social media addiction heads to trial - A major U.S. lawsuit claims YouTube and Meta designed addictive features that harm youth mental health, with a lead plaintiff describing early depression, self-harm, and filter-driven body image issues. IAEA blocked from bombed Iran sites - The IAEA says Iran has not granted access to facilities bombed in June, leaving the watchdog unable to verify enrichment status or fully account for uranium stockpiles amid ongoing diplomacy. Faster toxic pollution testing with ML - Rice University researchers are combining nanoparticle-enhanced spectroscopy with machine learning to detect pollutants faster and potentially on-site, targeting Superfund-style contamination screening. Episode Transcript OpenAI lands massive new funding Let’s start with the biggest number on the board: OpenAI says it has secured $110 billion in fresh funding from Amazon, Nvidia, and SoftBank, putting the company at a reported $730 billion pre-money valuation. Amazon is described as leading the round with a $50 billion commitment—starting with $15 billion upfront, with another $35 billion later depending on conditions—while Nvidia and SoftBank are each pegged at $30 billion. OpenAI CEO Sam Altman also shared some eye-catching usage figures: more than 900 million weekly active users for ChatGPT, plus over 50 million consumer subscribers. The company’s pitch is straightforward—frontier AI is moving from research into everyday use, and the winners will be the ones who can scale infrastructure and turn it into products people actually rely on. A key detail here is the partnership structure. OpenAI says AWS becomes the exclusive third-party cloud distribution provider for OpenAI Frontier, and that an existing multiyear AWS deal is being expanded by another $100 billion over eight years. OpenAI also emphasized its Microsoft relationship remains “strong and central.” So the theme is not a clean break from the past—more like a new, very expensive layer of alliances. AI compute wars: TPUs vs GPUs That leads into the broader compute land grab, where no one wants to be dependent on a single chip supplier. Meta is reportedly signing a multi-billion-dollar, multi-year deal to rent Google’s AI chips—TPUs—to help train new models. Meta has also been linked to massive chip purchases elsewhere: AMD has talked about selling up to $60 billion worth of AI chips to Meta, and Meta has agreements for Nvidia hardware too. Meanwhile, Google has been pushing TPUs as a credible alternative to Nvidia’s GPUs, and TPU sales have become a visible part of Google Cloud’s growth story. If this all feels like musical chairs with accelerators, that’s because it is. Companies are spreading risk across GPU and non-GPU options, locking in supply, and experimenting with what actually performs best for their stacks. The practical takeaway: AI progress is increasingly limited not by ideas, but by the ability to secure and run enough compute—reliably, cheaply, and at scale. Pentagon pressures Anthropic over Claude Now to a collision between AI safety policy and national security procurement. According to the Associated Press, Defense Secretary Pete Hegseth gave Anthropic an ultimatum: allow Claude to be used by the military without restrictions by a deadline, or risk losing its government contract. The report says defense officials also raised the possibility of labeling Anthropic a “supply chain risk,” and even floated invoking the Defense Production Act—though Pentagon messaging later suggested they may drop the DPA route. What’s notable is not just the pressure campaign, but the precedent question. The Defense Production Act has been used to prioritize manufacturing and logistics in crises—pandemic supplies, baby formula shortages, energy continuity—but forcing changes to an AI model’s safety limits, or overriding a company’s ethical guardrails, would be new territory and likely litigated. Anthropic’s CEO Dario Amodei has voiced concerns about uses like fully autonomous armed drones and AI-enabled mass surveillance, while the Pentagon says it’s not seeking mass surveillance and doesn’t want autonomous weapons without humans in the loop. In other words: everyone claims they want restraint; they disagree on how to enforce it—and who gets to set the boundaries. Brain-computer interfaces decode inner speech Switching to a very different kind of AI boundary: brain decoding that’s starting to resemble “mind reading,” at least in narrow, carefully tested conditions. Stanford researchers worked with a 52-year-old stroke survivor—identified as participant T16—who had a microelectrode array implanted in the front of her brain. An AI system then translated neural activity associated with imagined speech into real-time text on a screen. The project, announced back in August 2025, also included three ALS patients. The interesting technical question wasn’t just whether the system could decode attempted speech—the kind you’d try to say with your mouth—but whether it could decode inner speech: words you “say” silently to yourself. The study suggests a tentative yes, with accuracy reaching up to 74% in a sentence-imagery task. But the limitations are just as important: for more spontaneous inner speech, performance fell off, and for open-ended prompts it could devolve into something close to gibberish. Researchers also used tasks like counting colored shapes to provoke internal number-words and found traces linked to those internal words in motor cortex activity. One implication is that inner speech and attempted speech may share strongly correlated patterns in motor cortex, but inner speech is simply weaker and harder to capture. Scientists are also looking beyond motor cortex—toward areas like the superior temporal gyrus—to potentially help patients whose motor regions are damaged. And as these systems inch forward, the ethical questions get louder: rights around thought decoding, consent, and potential misuse. The upside is profound—restoring communication for people who cannot speak. The downside is that once “decoding” exists, society has to decide what protections are non-negotiable. Living neurons learn to play Doom Staying in the brain—but going in a more unconventional direction—Cortical Labs in Australia says it has trained living human neurons on a chip to play the classic shooter Doom in about a week. The neurons are grown on microelectrode arrays that stimulate the cells and read electrical activity. Cortical Labs previously showed a neuron-chip playing Pong in 2021, but this Doom demo used far fewer neurons—roughly a quarter as many—and leaned heavily on a new interface that lets developers program the setup using Python. An independent developer, Sean Cole, reportedly used those tools to get the neuron-chip interacting with Doom within days. The performance wasn’t anywhere near a good human player, but it did better than random firing, and researchers argue the bigger story is programmability: making living neural hardware easier to experiment with. Experts also point out we still don’t fully understand what the neurons are learning, or how they effectively interpret the game state without senses like vision. Still, it’s a clear step toward hybrid computing ideas—where biology and silicon are combined for specific tasks, potentially including robotics control down the line. ASML High-NA EUV reaches production On the silicon side of the world, ASML says its next-generation High-NA EUV lithography tools are ready for high-volume manufacturing. ASML remains the only commercial supplier of EUV machines, and High-NA EUV is the next leap—aimed at printing finer patterns and, crucially, reducing the number of expensive manufacturing steps needed for advanced chips. ASML’s CTO Marco Pieters said the company will share performance and imaging data at a technical conference in San Jose, and cited production-readiness signals like limited downtime and around 500,000 processed wafers. Uptime is reportedly around 80% today, with a goal of 90% by year’s end. Each High-NA tool is priced around $400 million—about double earlier EUV systems—so the economics will matter as much as the physics. Even if the tool is ready now, ASML expects chipma
Today's topics: Pentagon presses Anthropic guardrails - The U.S. Defense Department is pressuring Anthropic to allow broad, “any lawful use” of Claude, with threats tied to the Defense Production Act and supply-chain exclusion. Keywords: Pentagon, Anthropic, guardrails, DPA, military AI policy. Distillation attacks and AI theft - Anthropic, OpenAI, and Google are warning about large-scale model “distillation” and extraction attacks using fake accounts to harvest outputs for training cheaper rivals. Keywords: distillation, model extraction, Claude, DeepSeek, AI security. Meta’s massive AMD chip pact - Meta reportedly struck a potentially $100B+ deal to buy AMD MI450 AI chips at multi-gigawatt scale, plus warrants enabling up to a 10% AMD stake. Keywords: Meta, AMD, MI450, AI data centers, GPU spending. DeepSeek shifts chip optimization - Reuters reports DeepSeek withheld early access to its V4 model from Nvidia and AMD, giving Chinese suppliers like Huawei a head start on optimization. Keywords: DeepSeek V4, Huawei, Nvidia, AMD, AI chip geopolitics. Jane Street lawsuit and Bitcoin claims - A Manhattan federal lawsuit tied to the TerraUSD collapse accuses Jane Street of trading on material nonpublic information; online posts then connect that to alleged Bitcoin intraday sell-off patterns. Keywords: Jane Street, Terraform, UST depeg, insider trading allegations, Bitcoin market structure. X open-sources For You algorithm - xAI engineering open-sourced key parts of X’s real-time “For You” feed system, showing a Grok-based transformer ranking stack in Rust and Python. Keywords: X algorithm, open source, recommender systems, Grok, Rust. AI-built Next.js alternative on Cloudflare - Cloudflare claims it rebuilt most of the Next.js 16 API surface as an AI-assisted Vite-based replacement called vinext, backed by a large automated test suite. Keywords: Cloudflare Workers, vinext, Next.js, Vite, tests as moat. Music attribution and watermark limits - Sony AI published research on music training-data attribution, short-clip version matching, and watermark stress-testing—while finding current watermarks can fail against neural audio codecs. Keywords: Sony AI, attribution, plagiarism detection, watermarking, audio codecs. Physical AI: Wayve and Intrinsic - Wayve raised $1.2B to license autonomy software to automakers, while Alphabet’s Intrinsic is moving closer into Google to accelerate ‘physical AI’ with Gemini and cloud infrastructure. Keywords: Wayve funding, autonomous driving, Intrinsic, Google, robotics. New AI tools for neuroscience - MIT researchers introduced BrainAlignNet and related models that track and label neurons in moving, deforming animals, dramatically reducing manual labeling time. Keywords: MIT, neuron tracking, BrainAlignNet, microscopy, AI in neuroscience. https://x.com/1914ad/status/2026757796390449382?utm_source=tldrnewsletter) https://www.greaterwrong.com/posts/SEszTmpx7gFaAaHFq/career-decisions-if-you-take-agi-seriously https://www.musicbusinessworldwide.com/sonys-blueprint-for-ai-music-detection-tech-is-promising-heres-what-its-working-on/ https://blog.bytebytego.com/p/the-algorithm-that-powers-your-x?utm_source=tldrnewsletter) https://www.dbreunig.com/2026/02/25/two-things-i-believe-about-coding-agents.html?utm_source=tldrnewsletter) https://advertise.tldr.tech/comparison/meta-ads-vs-tldr/?utm_source=tldr&utm_medium=newsletter&utm_campaign=primary02262026) https://www.nytimes.com/2026/02/24/technology/wayve-ai-driverless-car-start-up.html?unlocked_article_code=1.PFA.LAXz.-m9G7WOTvlsb&smid=url-share&utm_source=tldrnewsletter) https://www.datadoghq.com/resources/state-of-containers-and-serverless-2025/?utm_source=tldrnewsletter&utm_medium=newsletter&utm_campaign=dg-infra-ww-containers-and-serverless-tldr) https://saewitz.com/tests-are-the-new-moat?utm_source=tldrnewsletter) https://www.engadget.com/mobile/everything-announced-at-samsung-unpacked-the-galaxy-s26-ultra-galaxy-buds-4-and-more-180000530.html?utm_source=tldrnewsletter) https://www.eurekalert.org/news-releases/1117529 https://techcrunch.com/2026/02/25/alphabet-owned-robotics-software-company-intrinsic-joins-google/?utm_source=tldrnewsletter) https://geohot.github.io//blog/jekyll/update/2026/02/26/the-last-gasps-of-the-rent-seeking-class.html?utm_source=tldrnewsletter) https://advertise.tldr.tech/comparison/meta-ads-vs-tldr/?utm_source=tldr&utm_medium=newsletter&utm_campaign=primary02262026) https://blog.cloudflare.com/vinext/?utm_source=tldrnewsletter) https://en.tempo.co/read/2089009/how-china-turns-desert-into-fertile-soil-in-just-10-months https://www.mcall.com/2026/02/24/meta-ai-chips-amd/ https://vercel.com/changelog/chat-sdk?utm_source=tldrnewsletter) https://economictimes.indiatimes.com/tech/technology/us-defense-dept-gives-anthropic-friday-deadline-to-drop-ai-curbs/articleshow/128767389.cms https://economictimes.indiatimes.com/ai/ai-insights/deepseek-withholds-latest-ai-model-from-us-chipmakers-including-nvidia-sources-say/articleshow/128799730.cms https://www.bbc.com/news/articles/cewzg77k721o http://www.euronews.com/next/2026/02/26/the-ai-cold-war-us-tech-companies-accuse-chinas-ai-firms-of-stealing-billions-in-research https://www.theverge.com/tech/884285/adobe-firefly-ai-video-editing-quick-cut https://www.bloomberg.com/news/features/2026-02-26/pentagon-pressures-anthropic-to-drop-ai-guardrails-in-military-standoff?accessToken=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzb3VyY2UiOiJTdWJzY3JpYmVyR2lmdGVkQXJ0aWNsZSIsImlhdCI6MTc3MjA4MTY0NCwiZXhwIjoxNzcyNjg2NDQ0LCJhcnRpY2xlSWQiOiJUQjFLQ0hLR0lGWVcwMCIsImJjb25uZWN0SWQiOiIwOThFNzNDQTE5QTA0RDkxODEyQzQ4MjcwRDZERTI0QiJ9.YFL_pJ-l1nLIyOrA6ZITsbywk0POoo_qvZ7oNDve8GM&utm_source=tldrnewsletter) https://erikbern.com/2026/02/25/software-companies-buying-software-from-software-companies?utm_source=tldrnewsletter) https://kitchener.citynews.ca/2026/02/25/researchers-at-university-of-waterloo-develop-bacteria-to-eat-cancer-tumours/ https://theshearforce.substack.com/p/aws-for-everything-vs-shein-for-everything?utm_source=tldrnewsletter)
Today's topics: Universal nasal vaccine breakthrough - Stanford Medicine reports an experimental intranasal “universal” vaccine in mice that broadly boosts lung immunity—covering SARS‑CoV‑2, bacteria, and allergens—suggesting long-lived cross-protection. Orbital data centers governance - Six U.S. companies and one Chinese firm are exploring orbital data centers, pitching constant solar power and cooling, while critics warn about data sovereignty, equity, and regulatory loopholes in space. Chip supply chain and tariffs - U.S. officials are pressuring tech leaders to reduce reliance on Taiwan’s advanced chipmaking, as tariff threats and purchase commitments collide with cost, capacity, and packaging dependencies. Meta’s massive AMD chip bet - Meta’s reported deal to buy AMD MI450 AI chips—structured around multi-gigawatt deployments and warrants—signals intensifying hyperscaler spend and a serious challenge to Nvidia’s dominance. AI coding boom and backlash - A freelancer’s “November moment” with Claude Code echoes broader anxiety: AI agents may compress software timelines, spark market fear, and accelerate the coming ‘influence industry’ around chatbots. Tools that track moving neurons - MIT researchers unveiled BrainAlignNet, AutoCellLabeler, and CellDiscoveryNet to register and label neurons in deforming animals, cutting manual annotation time and enabling longer behavior-linked recordings. FDA pathway for bespoke drugs - The FDA released guidance for a “plausible mechanism pathway” to approve ultra-rare, patient-specific genetic medicines, aiming to scale bespoke therapies while setting boundaries for use. Safer base editing for mutations - Penn and Rice engineers improved cytosine base editors by redesigning linkers to reduce bystander mutations, a step toward safer treatments for cystic fibrosis and other single-letter diseases. Meta returns to stablecoins - Meta is reportedly planning a stablecoin payments integration for late 2026, likely using a third-party administrator—an ‘arm’s length’ approach shaped by the backlash that killed Libra/Diem. Encrypted RCS between iPhone Android - Apple and Google are testing end-to-end encrypted RCS between iPhone and Android, bringing cross-platform message privacy into the GSMA Universal Profile with beta caveats and carrier requirements. Tesla robotaxi safety and transparency - A critique of Tesla’s ‘robotaxi’ claims highlights Level 2 driver-assist limits, crash-rate comparisons, redacted incident narratives, and a regulatory Catch‑22 around autonomy marketing. https://www.sciencedaily.com/releases/2026/02/260222092258.htm https://restofworld.org/2026/orbital-data-centers-ai-sovereignty/?utm_source=tldrnewsletter) https://brids.bearblog.dev/ai-and-my-crisis-of-meaning/?utm_source=tldrnewsletter) https://www.eurekalert.org/news-releases/1117529 https://www.robinsloan.com/lab/worst-or-best/?utm_source=tldrnewsletter) https://www.independent.co.uk/asia/india/ai-summit-india-us-china-data-centres-foreign-policy-b2926200.html https://www.statnews.com/2026/02/23/fda-rare-disease-new-guidelines-plausible-mechanism-pathway/ https://www.nytimes.com/2026/02/24/technology/taiwan-china-chips-silicon-valley-tsmc.html?unlocked_article_code=1.O1A.eHll.6WyAUW978EKz&smid=url-share&utm_source=tldrnewsletter) https://andys.blog/chess/?utm_source=tldrnewsletter) https://www.theguardian.com/technology/2026/feb/24/feedback-loop-no-brake-how-ai-doomsday-report-rattled-markets https://www.mcall.com/2026/02/24/meta-ai-chips-amd/ https://kitchener.citynews.ca/2026/02/25/researchers-at-university-of-waterloo-develop-bacteria-to-eat-cancer-tumours/ https://www.seas.upenn.edu/stories/engineers-sharpen-gene-editing-tools-to-target-cystic-fibrosis/ https://invest.energyx.com/?utm_source=&utm_medium=paid-partnership&utm_campaign=partnership30-03_02-23_33631714958)[ https://www.coindesk.com/business/2026/02/24/mark-zuckerberg-s-meta-is-planning-stablecoin-comeback-in-the-second-half-of-this-year?utm_source=tldrnewsletter) https://plaid.com/events/fintech-predictions-2026/?utm_source=TLDR&utm_medium=PaidNewsletter&utm_campaign=TLDR_Paid_Newsletter_Ad_Buy&utm_content=FTP26TT_Secondary) https://www.thestack.technology/google-moonshot-spinoff-looks-to-ship-the-internet-as-light/?utm_source=tldrnewsletter) https://www.chinatalk.media/p/what-are-chinese-people-vibecoding?utm_source=tldrnewsletter) https://arstechnica.com/gadgets/2026/02/paramount-increases-its-warner-bros-discovery-bid-by-1-per-share/?utm_source=tldrnewsletter) https://blog.dshr.org/2026/02/teslas-not-robotaxi-service.html?utm_source=tldrnewsletter) https://plaid.com/events/fintech-predictions-2026/?utm_source=TLDR&utm_medium=PaidNewsletter&utm_campaign=TLDR_Paid_Newsletter_Ad_Buy&utm_content=FTP26TT_Secondary) https://plaid.com/events/fintech-predictions-2026/?utm_source=TLDR&utm_medium=PaidNewsletter&utm_campaign=TLDR_Paid_Newsletter_Ad_Buy&utm_content=FTP26TT_Secondary), https://9to5google.com/2026/02/23/google-messages-encrypted-rcs-iphone/ https://www.cnbc.com/2026/02/24/paypal-stock-stripe-acquisition-report.html?utm_source=tldrnewsletter) https://airia.com/?utm_source=TLDR&utm_medium=Newsletter&utm_campaign=January232026)
Today's topics: Dictionary compression reshapes web delivery - HTTP dictionary compression via RFC 9842 and Zstandard/Brotli dictionaries can slash repeat-visit payloads—tests cite up to 90% smaller JavaScript than Brotli, with Chrome 130+ leading support. Encrypted RCS finally crosses platforms - Apple and Google are testing end-to-end encrypted RCS between iPhone and Android under the GSMA Universal Profile, bringing lock-icon encryption to cross-platform chats in iOS 26.4 beta and Google Messages beta. Meta’s massive AMD chip bet - Meta signed a five-year, $60B purchase agreement for AMD AI hardware and took a 10% stake, signaling a major shift toward inference-optimized accelerators like MI450 and multi-vendor compute strategy. Model distillation, data, and security - Anthropic alleges Chinese startups used 24,000 fraudulent accounts to generate 16M Claude conversations for distillation, raising terms-of-service, safety-guardrail, and national security concerns around model copying. AI coding era: quality and juniors - Microsoft leaders warn AI agents can boost seniors but slow juniors, while Simon Willison pushes red/green TDD as an agent prompt; the shared theme is verification, tests, and mentorship to avoid brittle code. Robots, Mars autonomy, and navigation - China showcased more capable humanoid robots amid booming shipments, while NASA gave Perseverance a Mars ‘GPS-like’ localization upgrade—both point to faster autonomy, but with real-world robustness still a question. Geothermal’s scale test and economics - Geothermal companies are diverging: EGS drilling gains may plateau while closed-loop systems target district heating economics; scalability hinges on rigs, capex, water losses, and siting near transmission or data centers. Gene editing precision and FDA pathway - Researchers reduced base-editing bystander mutations by over 80% using linker redesigns, as the FDA detailed a ‘plausible mechanism pathway’ for bespoke ultra-rare genetic medicines—promising, but still bounded and early. Export controls and tech geopolitics - China imposed export restrictions on Japanese entities tied to military capability, and Washington launched a Peace Corps-style Tech Corps to promote US AI abroad—highlighting the tightening tech-and-supply-chain contest. IPO float shock and software selloff - A scenario analysis about an ‘intelligence crisis’ sparked a software-stock selloff via SaaS pricing risk, while a separate IPO-float argument says SpaceX/OpenAI/Anthropic public listings could strain index flows and market liquidity. https://httptoolkit.com/blog/dictionary-compression-performance-zstd-brotli/?utm_source=tldrnewsletter) https://theweek.com/tech/china-and-the-rise-of-the-humanoid-robots https://www.austinvernon.site/blog/geothermalupdate2026.html?utm_source=tldrnewsletter) https://simonwillison.net/guides/agentic-engineering-patterns/red-green-tdd/?utm_source=tldrnewsletter) https://simonwillison.net/guides/agentic-engineering-patterns/code-is-cheap/?utm_source=tldrnewsletter) https://www.theguardian.com/technology/2026/feb/24/meta-amd-deal-chipmaker-ai-bubble-facebook https://variety.com/2026/film/global/berenice-bejo-french-actors-ai-open-letter-1236670313/ https://www.qawolf.com/?utm_source=tldr&utm_medium=newsletter&utm_campaign=ACQ_All_Demo_Conversions__NewsletterAudience_-_Newsletter_CutQACycles_20260224-None_Experiment-FALSE&utm_term=body-80PercentAutomatedEndToEndTestCoverage&utm_content=CutQACycles_ScheduleADemoToLearnMore_None_Headline%3ACutYourQACyclesDownToMinutesWithAutomatedTesting____Newsletter-PrimaryPlacement_20260224_v1_) https://tomtunguz.com/spacex-openai-anthropic-ipo-2026/?utm_source=tldrnewsletter) https://www.gitkraken.com/how-to-measure-the-impact-of-ai-coding-tools?source=TLDR&product=gitkraken&utm_source=TLDR&utm_medium=sponsored&utm_campaign=insights_launch&utm_content=measure-impact)[ https://www.qawolf.com/?utm_source=tldr&utm_medium=newsletter&utm_campaign=ACQ_All_Demo_Conversions__NewsletterAudience_-_Newsletter_CutQACycles_20260224-None_Experiment-FALSE&utm_term=cta-ScheduleADemoToLearnMore&utm_content=CutQACycles_ScheduleADemoToLearnMore_None_Headline%3ACutYourQACyclesDownToMinutesWithAutomatedTesting____Newsletter-PrimaryPlacement_20260224_v1_) https://www.asimov.press/p/scent?utm_source=tldrnewsletter) https://economictimes.indiatimes.com/news/defence/iran-edges-towards-high-speed-missile-deal-with-china/articleshow/128747963.cms https://www.qawolf.com/?utm_source=tldr&utm_medium=newsletter&utm_campaign=ACQ_All_Demo_Conversions__NewsletterAudience_-_Newsletter_CutQACycles_20260224-None_Experiment-FALSE&utm_term=headline-CutYourQACyclesDownToMinutesWithAutomatedTesting&utm_content=CutQACycles_ScheduleADemoToLearnMore_None_Headline%3ACutYourQACyclesDownToMinutesWithAutomatedTesting____Newsletter-PrimaryPlacement_20260224_v1_) https://www.japantimes.co.jp/business/2026/02/24/economy/china-japan-companies-watch-list/ https://www.theregister.com/2026/02/23/microsoft_ai_entry_level_russinovich_hanselman/?utm_source=tldrnewsletter) https://www.saastr.com/dont-love-your-job-i-hear-you-but-dont-quit-at-least-99-of-you-shouldnt/?utm_source=tldrnewsletter) https://9to5google.com/2026/02/23/google-messages-encrypted-rcs-iphone/ https://sherwood.news/markets/software-stocks-crater-as-independent-research-piece-details-potential-ai/?utm_source=tldrnewsletter) https://www.nytimes.com/2026/02/23/technology/anthropic-chinese-startups-distillation.html?unlocked_article_code=1.OlA.K6da.1rb5xxt2Us9Q&smid=url-share&utm_source=tldrnewsletter) https://www.qawolf.com/customers/drata?utm_source=tldr&utm_medium=newsletter&utm_campaign=ACQ_All_Demo_Conversions__NewsletterAudience_-_Newsletter_CutQACycles_20260224-None_Experiment-FALSE&utm_term=body-DratasTeamOf80PlusEngineers&utm_content=CutQACycles_ScheduleADemoToLearnMore_None_Headline%3ACutYourQACyclesDownToMinutesWithAutomatedTesting____Newsletter-PrimaryPlacement_20260224_v1_) https://www.seas.upenn.edu/stories/engineers-sharpen-gene-editing-tools-to-target-cystic-fibrosis/ https://www.qawolf.com/?utm_source=tldr&utm_medium=newsletter&utm_campaign=ACQ_All_Demo_Conversions__NewsletterAudience_-_Newsletter_CutQACycles_20260224-None_Experiment-FALSE&utm_term=body-QAWolf&utm_content=CutQACycles_ScheduleADemoToLearnMore_None_Headline%3ACutYourQACyclesDownToMinutesWithAutomatedTesting____Newsletter-PrimaryPlacement_20260224_v1_) https://getunblocked.com/?utm_source=tldrtech&utm_medium=email&utm_campaign=contextengine&utm_content=260224_secondary) https://www.statnews.com/2026/02/23/fda-rare-disease-new-guidelines-plausible-mechanism-pathway/ https://www.cnbc.com/2026/02/23/us-launch-peace-corps-tech-corps-india-export-ai-stack-sovereignty-counter-china.html https://restofworld.org/2026/h1b-visa-impact-india-tech-hiring-faamng/?utm_source=tldrnewsletter) https://getunblocked.com/?utm_source=tldrtech&utm_medium=email&utm_campaign=contextengine&utm_content=260224_secondary) https://www.space.com/space-exploration/mars-rovers/nasas-perseverance-rover-now-has-its-own-gps-on-mars-weve-given-the-rover-a-new-ability
Today's topics: Airlifting a 5‑MW microreactor - The U.S. Department of Defense flew an unfueled 5‑MW Ward250 microreactor on C‑17 aircraft, testing rapid deployment logistics under the Janus Program. Keywords: microreactor, TRISO fuel, HALEU, C‑17, remote base power. Superconductors for data-center power - Microsoft is backing high‑temperature superconducting (HTS) cables as a way to move more electricity through cramped AI data centers with lower losses. Keywords: HTS, REBCO tape, liquid nitrogen cooling, grid constraints, hyperscalers. AI coding agents go unattended - Stripe says over 1,300 weekly pull requests are fully produced by its autonomous ‘minions,’ using isolated devboxes and blueprint-based orchestration. Keywords: coding agents, dev environments, CI loops, guardrails, automation. MCP and tool access scaling - As Model Context Protocol (MCP) catalogs explode, Cloudflare and Stripe are converging on ‘discoverable tools’ and code-based execution to keep context small and safer. Keywords: MCP, typed SDK, sandboxing, Toolshed, OpenAPI search/execute. AI hype meets public skepticism - New surveys suggest AI’s narrative is slipping: many people fear AI harms, and firms report limited productivity impact so far, even as leaders promise transformation. Keywords: adoption, productivity, narrative gap, diffusion, skepticism. Gemini 3.1 Pro reasoning jump - Google previewed Gemini 3.1 Pro, emphasizing multi-step reasoning gains and broader rollout across app, NotebookLM, Vertex AI, and the Gemini API. Keywords: ARC-AGI-2, reasoning, agentic workflows, developer tools, Deep Think. Mars rover gets GPS-like navigation - NASA upgraded Perseverance with Mars Global Localization, letting the rover self-localize within about 25 cm by matching panoramas to orbital maps. Keywords: autonomy, localization, Jezero Crater, drive planning, robotics software. Germany’s new space deterrence push - Germany is investing heavily in military space capabilities, including radar reconnaissance and resilient SATCOM, while discussing non-kinetic options like jamming and dazzling. Keywords: Bundeswehr, Iceye, SAR, deterrence, satellite resilience. Polygenic embryo selection concerns - A new book warns polygenic scores and embryo selection are racing ahead of regulation, with risks of inequality, misleading marketing, and reduced genetic diversity. Keywords: polygenic scores, embryo screening, regulation, race myth, destiny myth. https://spectrum.ieee.org/ai-data-centers-hts-superconductors?utm_source=tldrnewsletter) https://stripe.dev/blog/minions-stripes-one-shot-end-to-end-coding-agents-part-2?utm_source=tldrnewsletter) https://arstechnica.com/science/2026/02/have-we-leapt-into-commercial-genetic-testing-without-understanding-it/?utm_source=tldrnewsletter) https://www.crusoe.ai/cloud/managed-inference?utm_source=tldr&utm_medium=newsletter&utm_campaign=bring_your_own_model_launch) https://blog.cloudflare.com/code-mode-mcp/?utm_campaign=cf_blog&utm_content=20260220&utm_medium=organic_social&utm_source=twitter/) https://www.nytimes.com/2026/02/21/technology/ai-boom-backlash.html?unlocked_article_code=1.OVA.GpFx.VfPLLQzjGg5t&smid=url-share&utm_source=tldrnewsletter) https://futureblind.com/p/atoms-are-cheap-process-is-pricey?utm_source=tldrnewsletter) https://www.lucasfcosta.com/blog/design-docs?utm_source=tldrnewsletter) https://x.com/gabrielchua/status/2025017553442201807?s=12&utm_source=tldrnewsletter) https://spyglass.org/openai-smart-speaker/?utm_source=tldrnewsletter) https://www.space.com/space-exploration/mars-rovers/nasas-perseverance-rover-now-has-its-own-gps-on-mars-weve-given-the-rover-a-new-ability http://www.euronews.com/2026/02/21/the-new-space-race-how-satellites-are-reshaping-germanys-defence https://newsletter.eng-leadership.com/p/how-openais-codex-team-works-and?utm_source=tldrnewsletter) https://newatlas.com/military/us-air-force-airlifts-complete-nuclear-reactor/ https://rys.io/en/182.html?utm_source=tldrnewsletter) https://github.github.com/gh-aw/?utm_source=tldr-newsletter-agentic-workflows-cta&utm_medium=email&utm_campaign=agentic-workflows-tech-preview-feb-2026). https://www.androidcentral.com/apps-software/google-just-doubled-its-ai-reasoning-power-with-the-surprise-launch-of-gemini-3-1-pro https://www.crusoe.ai/cloud/managed-inference?utm_source=tldr&utm_medium=newsletter&utm_campaign=bring_your_own_model_launch) https://www.deccanherald.com/technology/openai-to-launch-chatgpt-powered-smart-speaker-with-camera-in-2027-report-3908305 https://www.france24.com/en/live-news/20260222-ai-agent-invasion-has-people-trying-to-pick-winners https://garryslist.org/posts/half-the-ai-agent-market-is-one-category-the-rest-is-wide-open?utm_source=tldrnewsletter) https://github.github.com/gh-aw/?utm_source=tldr-newsletter-agentic-workflows-cta&utm_medium=email&utm_campaign=agentic-workflows-tech-preview-feb-2026) https://www.deccanherald.com/india/uttar-pradesh/made-in-india-chips-key-to-developed-india-pm-modi-lays-foundation-for-hcl-foxconn-semiconductor-plant-3906968 https://arstechnica.com/gaming/2026/02/microsoft-gaming-chief-phil-spencer-steps-down-after-38-years-with-company/?utm_source=tldrnewsletter)
Today's topics: A nuclear reactor flown by C‑17 - The U.S. Department of Defense completed a first-of-its-kind airlift of a 5‑MW Ward250 microreactor using C‑17 aircraft, highlighting rapid-deployment nuclear power logistics, TRISO fuel, and the Janus Program. Perseverance gets Mars‑GPS autonomy - NASA upgraded Perseverance with Mars Global Localization, matching rover panoramas to orbital maps for ~25 cm positioning—cutting drift, enabling longer drives, and reducing Earth-in-the-loop delays. Artemis II crewed lunar flyby - Artemis II is targeting an early March launch for the first crewed Moon mission since 1972, testing Orion life support, manual controls, and re-entry heat-shield performance on a free-return trajectory. JWST maps Uranus auroras in 3D - Using JWST NIRSpec, astronomers produced the first 3D map of Uranus’s upper atmosphere, revealing auroral structure, ion densities, magnetic-field effects, and a continuing multi-decade cooling trend. Gemini 3.1 Pro boosts reasoning - Google previewed Gemini 3.1 Pro with stronger multi-step reasoning and agentic tool use, rolling it out across Gemini app, NotebookLM, Vertex AI, and the Gemini API for unified consumer-to-enterprise access. Grok image tools and exploitation - Human Rights Watch argues lax safeguards in AI image tools—spotlighting xAI’s Grok on X—enabled large-scale sexualized and nonconsensual imagery, prompting investigations, bans, and calls for platform accountability. ByteDance Seedance deepfake video surge - ByteDance’s Seedance 2.0 made high-quality AI video generation easier, triggering deepfake and copyright alarms from Hollywood while raising questions about safeguards, voice cloning, and content labeling enforcement. Space deterrence and nuclear signals - Germany is investing billions in space security with SAR reconnaissance and SATCOMBw upgrades, as U.S. intelligence claims China is pursuing next-gen nuclear capabilities—both underscoring modern deterrence pressures. https://www.space.com/space-exploration/mars-rovers/nasas-perseverance-rover-now-has-its-own-gps-on-mars-weve-given-the-rover-a-new-ability https://newatlas.com/military/us-air-force-airlifts-complete-nuclear-reactor/ https://www.orlandosentinel.com/2026/02/21/commentary-the-human-cost-of-unregulated-ai-tools/ https://www.androidcentral.com/apps-software/google-just-doubled-its-ai-reasoning-power-with-the-surprise-launch-of-gemini-3-1-pro https://www.abc.net.au/news/science/2026-02-21/artemis-2-mission-scheduled-to-launch-humans-moon-travel/106273572 https://www.cnn.com/2026/02/21/politics/china-nuclear-arsenal-new-technology https://www.moneycontrol.com/world/why-bytedance-s-new-ai-video-tool-has-hollywood-worried-and-beijing-walking-a-tightrope-article-13838695.html https://www.deccanherald.com/india/uttar-pradesh/made-in-india-chips-key-to-developed-india-pm-modi-lays-foundation-for-hcl-foxconn-semiconductor-plant-3906968 http://www.euronews.com/2026/02/21/the-new-space-race-how-satellites-are-reshaping-germanys-defence http://www.sciencedaily.com/releases/2026/02/260221000303.htm
Please support this podcast by checking out our sponsors: - Build Any Form, Without Code with Fillout. 50% extra signup credits - https://try.fillout.com/the_automated_daily - Prezi: Create AI presentations fast - https://try.prezi.com/automated_daily - KrispCall: Agentic Cloud Telephony - https://try.krispcall.com/tad Support The Automated Daily directly: Buy me a coffee: https://buymeacoffee.com/theautomateddaily Today's topics: Android malware uses Gemini live - ESET reports “PromptSpy,” the first known Android malware to use generative AI at runtime—querying Google Gemini with on-screen XML to automate persistence via Accessibility. Keywords: Android spyware, Gemini, PromptSpy, Accessibility Service, VNC remote access. India AI summit and investments - At the India AI Impact Summit in New Delhi, leaders pitched India as an AI bridge to the Global South while tech CEOs discussed multilingual and “inclusive” AI plus large data-center ambitions. Keywords: Modi, AI infrastructure, Global South, data centers, multilingual AI. US AI diplomacy volunteer corps - The U.S. is weighing a “Technology Prosperity Corps,” a Peace Corps-style AI diplomacy push that would send up to 5,000 tech volunteers abroad to promote U.S.-aligned tools and standards. Keywords: AI diplomacy, soft power, China competition, OSTP, Technology Prosperity Corps. Meta and TikTok lawsuits surge - Meta, TikTok, and other platforms face a widening wave of child-safety and mental-health lawsuits, with juries now hearing bellwether cases that could test Section 230 and product-design liability. Keywords: Meta, addictive design, youth harms, Section 230, bellwether trials. AI music and video copyright - Google’s Lyria 3 expands AI music generation while ByteDance’s Seedance 2.0 raises alarms with cinema-like video-plus-audio—fueling fresh fights over voice likeness and copyright. Keywords: Lyria 3, Seedance 2.0, AI-generated music, deepfakes, licensing. China’s humanoid robots and propaganda - China’s Lunar New Year gala showcased humanoid robots doing parkour and flips, highlighting rapid robotics progress—while experts caution staged demos can overstate real-world capability. Keywords: humanoid robots, Unitree, military use, supply chain, state propaganda. Artemis II launch and Luna 9 hunt - NASA readies Artemis II for early March, while researchers using crowdsourcing and machine learning say they’re close to finding the Soviet Luna 9 landing site—possibly confirmable by Chandrayaan-2 imagery. Keywords: Artemis II, SLS, Orion heat shield, Luna 9, LRO. Episode Transcript Android malware uses Gemini live We’ll start with security, because this one feels like a line being crossed. ESET says it has found what may be the first Android malware that uses generative AI during runtime to change how it behaves on different devices. The family is called PromptSpy, and the clever part isn’t just phishing or writing better scam text—it’s automation. The malware reportedly sends Google’s Gemini model a prompt plus an XML-style dump of the current screen, including interface labels and coordinates. Gemini returns step-by-step tapping instructions in a structured format, and the malware executes them through Android’s Accessibility Service. The goal: persistence. Specifically, it tries to “pin” or “lock” itself in the Recent Apps list so the system is less likely to kill it, and so “Clear all” doesn’t easily sweep it away. Because that pinning process differs across phone makers, the malware effectively uses Gemini as a universal UI translator. If it gains Accessibility permissions, PromptSpy also carries classic spyware capabilities—screen recording, screenshots, intercepting lock screen credentials, and even a VNC module for full remote control. ESET notes it can also make removal harder by overlaying invisible touch-blocking rectangles on buttons like uninstall. The practical takeaway is simple: treat Accessibility permission requests as a high-risk moment, especially if an app has no obvious reason to need it. India AI summit and investments Next up: the AI geopolitics swirl around the India AI Impact Summit in New Delhi—part ambition, part diplomacy, and, frankly, a bit of chaos. Prime Minister Narendra Modi used the stage to pitch India as a cost-effective hub for building AI systems that can scale, pointing to the country’s track record with digital public infrastructure like identity rails and online payments. He also framed India as a bridge between advanced economies and the Global South, pushing the idea that AI should be “democratized” rather than concentrated among a handful of nations or billionaires. The summit pulled in heavyweight voices—from France’s Emmanuel Macron to Google CEO Sundar Pichai, and U.N. Secretary-General António Guterres. Guterres added a concrete proposal: a three-billion-dollar fund aimed at helping poorer countries build baseline AI capacity—skills, data access, and affordable compute—arguing the playing field is tilting too sharply. But the event itself reportedly had real operational stumbles: long lines, delays, theft complaints that organizers say were resolved, and an embarrassing incident where a private university was expelled after showing a commercially available Chinese robotic dog while presenting it as its own innovation. And then there was the notable late change: Bill Gates withdrew from a scheduled keynote, with the Gates Foundation saying it was meant to keep focus on summit priorities amid renewed questions about his past ties to Jeffrey Epstein. On the business side, the message from tech leaders was that India’s user base—approaching a billion internet users—remains a huge strategic prize, and discussions included “inclusive and multilingual” AI principles. At the same time, even boosters acknowledge constraints: limited access to top-tier chips, the sheer cost of data centers, and the complexity of training models across hundreds of languages. US AI diplomacy volunteer corps Staying with policy and power projection: the U.S. is reportedly preparing a Peace Corps-style reboot for the AI era. The proposed initiative, dubbed the Technology Prosperity Corps, would deploy up to 5,000 technology-focused volunteers overseas—explicitly framed as a form of AI diplomacy and a counterweight to China’s influence on global tech adoption. The idea is straightforward: put people on the ground helping partners adopt tools, workflows, and standards that align with U.S. interests—similar in spirit to Cold War soft-power programs, but aimed at today’s competition over AI platforms and governance. This is still a plan, not a finished program, but it signals where Washington thinks the contest is heading: not just who has the best models, but whose technology ecosystems become the default elsewhere. Meta and TikTok lawsuits surge Now to the courtroom, where social media companies are facing a wave of lawsuits that’s starting to look less like a skirmish and more like a multi-year campaign. Across the U.S., plaintiffs—including families, school districts, and state governments—are accusing platforms like Meta and TikTok of deliberately using addictive design features that harm kids’ mental health, while also failing to protect minors from predators and dangerous content. What’s changed is that some of these cases are finally being heard by juries, not just argued in motions. In Los Angeles, a bellwether trial is underway with Meta and YouTube still in the case, centered on a 20-year-old plaintiff identified as “KGM.” Meta CEO Mark Zuckerberg testified, emphasizing age restrictions and efforts to detect users who misstate their age—while rejecting the idea that addiction is the right frame for Meta’s products. A separate suit in New Mexico brought by the state’s attorney general leans on investigators posing as children and documenting sexual solicitations and platform responses. That case also spotlights an ongoing friction point: encryption. Critics argue end-to-end encryption can limit safety monitoring; Meta counters that encryption is broadly supported for privacy and security. The big legal question underneath all of this is where liability lands. Defendants lean on the First Amendment and Section 230. Plaintiffs are increasingly focusing on product design—algorithms, engagement loops, and the mechanics of recommendation—trying to argue it’s not about any single user post, but about engineered behavior at scale. Either way, this is heading toward long timelines, big legal bills, and potentially operational changes if plaintiffs start winning in a consistent way. AI music and video copyright Related to that, social psychologist Jonathan Haidt—author of The Anxious Generation—has been arguing that Zuckerberg’s first jury trial appearance could become a meaningful accountability moment for Big Tech. Haidt’s position is that the harms aren’t limited to a small slice of vulnerable kids. He claims the broader damage shows up in deteriorating attention, weaker learning outcomes, and reduced social skills across much of the developed world’s post-1995 cohort. Haidt also points to internal research—particularly a set of Meta studies made public by advocates—as evidence the company understood engagement dynamics and their risks. Even if courts ultimately don’t accept every part of that argument, the strategy shift matters: it’s an attempt to reframe these cases from “bad content slipped through” to “the product was built this way on purpose.” China’s humanoid robots and propaganda Let’s move to generative media, where the technology is accelerating faster than the rulebook. Google has introduced Lyria 3, a new AI music generator built with Gemini teams and Google DeepMind. Right now, the emphasis is creator-friendly: it powers features like “Dream Track” in YouTube Shorts, letting users generate royalty-free, customized soundtracks. Outputs are still short—around 30 seconds—yet the broader direction is clear. Brands are increasin
Please support this podcast by checking out our sponsors: - Prezi: Create AI presentations fast - https://try.prezi.com/automated_daily - KrispCall: Agentic Cloud Telephony - https://try.krispcall.com/tad - Effortless AI design for presentations, websites, and more with Gamma - https://try.gamma.app/tad Support The Automated Daily directly: Buy me a coffee: https://buymeacoffee.com/theautomateddaily Today's topics: Android spyware uses Gemini live - ESET reports “PromptSpy,” an Android spyware family that queries Google Gemini at runtime to adapt taps and UI navigation via Accessibility—novel genAI-driven persistence. Gemini makes music and art - Google’s Gemini rolls out Lyria 3 to generate 30-second music from text, photos, or video prompts, plus AI cover art—mainstreaming consumer generative audio tools. Apple Music builds AI playlists - Apple’s iOS 26.4 beta adds “Playlist Playground,” using Apple Intelligence to turn prompts into 25-song playlists with cover art and descriptions—competing with Spotify’s AI playlist features. ByteDance boosts AI video realism - ByteDance’s Seedance 2.0 draws Hollywood scrutiny for near-production-looking AI video with dialogue and sound effects, intensifying copyright, labeling, and licensing debates. Gemini 3.1 Pro benchmark leap - Google previews Gemini 3.1 Pro, claiming stronger reasoning and big benchmark jumps (Humanity’s Last Exam, ARC-AGI-2) while competition remains tight versus Anthropic and OpenAI. OpenAI’s strategy under pressure - Analyst Benedict Evans argues OpenAI lacks durable moats: frontier models are converging, distribution favors incumbents, engagement is shallow, and “new AI experiences” may be hard to own alone. AI productivity: gains, bottlenecks - Evidence on AI at work is mixed: coding assistants show measurable throughput gains, but PR review and quality become bottlenecks; teams with strong fundamentals benefit far more than others. Alzheimer’s blood test clock model - WashU researchers published a plasma p-tau217 “clock” in Nature Medicine that estimates Alzheimer’s symptom onset within ~3–4 years—potentially accelerating prevention trials via cheap blood tests. Personalized mRNA vaccine for TNBC - A Nature paper reports individualized neoantigen mRNA (RNA–LPX) vaccination in early-stage triple-negative breast cancer is feasible, immunogenic, and shows durable T-cell responses over years. Social media addiction lawsuits grow - Meta, TikTok, and others face escalating U.S. courtroom fights over alleged addictive design harms to children, testing Section 230 and First Amendment defenses with major bellwether trials. Meta pivots Horizon Worlds mobile - Meta is splitting Horizon Worlds from Quest, pushing Worlds toward an almost mobile-first product while reaffirming third-party Quest developer support—reshaping its metaverse strategy. SaaS shakeout and durability tests - After a software-stock selloff, investors are sorting durable SaaS from vulnerable categories as AI lowers build costs and shifts budgets; switching costs and data compounding are key moats. AI scaling laws beyond language - A deep dive on scaling laws argues they’re most reliable in language and image generation; other domains like robotics and biology scale more slowly and need better data, evals, and post-training. Episode Transcript Android spyware uses Gemini live First up, security—because this one is worth your attention. ESET says it’s found what may be the first Android malware family that uses generative AI during runtime to change how it behaves on a victim’s device. The malware, dubbed “PromptSpy,” appears to query Google’s Gemini with a description of the current screen—down to UI elements and coordinates—then gets back step-by-step instructions in a JSON-like format for what to tap. The goal is persistence: it tries to pin itself in Android’s Recent Apps so the system is less likely to kill it, and so “Clear all” doesn’t easily shake it loose. On top of that, it’s spyware, with remote-control capabilities via a built-in VNC module once Accessibility permissions are granted. ESET also describes a nasty removal obstacle: invisible overlays that block taps on uninstall or stop buttons, meaning some victims may need Safe Mode to remove it. Even if this turns out to be a proof-of-concept, it’s a clear signal—genAI isn’t just helping attackers write phishing emails; it’s starting to automate the fiddly, device-specific parts of actually operating malware. Gemini makes music and art Staying in the AI realm, Google and Apple are both pushing generative features into mainstream music workflows—and doing it in a way that could make “AI audio” feel as normal as filters in a camera app. Google says Gemini can now generate 30-second music clips using DeepMind’s Lyria 3 model, taking prompts not only from text but also from photos or even user-uploaded video. You can ask for instrumental tracks or add lyrics, and Google is also pairing the music feature with AI-generated cover art. There are usage caps—free users get a limited number of generations per day—and Google says users have rights to use what they create, while also claiming it’s training on music it has rights to use and deploying filters to prevent imitation of specific artists. Apple, meanwhile, is taking a less “make a song” approach and more of a “make the vibe” approach: Apple Music is adding Playlist Playground in iOS 26.4, turning a prompt into a playlist of 25 songs with cover art and a short description. It’s squarely aimed at the same space Spotify’s been exploring, and it’s a good reminder that AI features are increasingly shipping as product polish—not as standalone apps. Apple Music builds AI playlists Now, if music generation feels like a gentle step forward, video generation is landing more like a jolt. ByteDance is drawing fresh heat in Hollywood over Seedance 2.0, a model that reportedly generates cinema-quality video—and notably, can produce dialogue and sound effects along with visuals from simple prompts. Viral clips have made the rounds, including content that resembles well-known characters, and studios are responding the way you’d expect: cease-and-desist letters and copyright accusations. Beyond the legal fight, the industry conversation is shifting toward practical safeguards: clearer labeling to prevent deception, and real licensing and redress mechanisms so creators can get paid—or at least contest misuse—when their styles or assets are effectively absorbed into training data. ByteDance boosts AI video realism On the model race itself, Google is also shipping a more traditional upgrade: Gemini 3.1 Pro. The company is positioning it as better at reasoning and complex problem-solving, with a big jump on benchmarks that are designed to be harder to train around. Google highlighted improvements on Humanity’s Last Exam and a sharp rise on ARC-AGI-2, which focuses on novel logic problems. It’s also claiming better results for agentic workflows, the kind that matter when you’re trying to automate real multi-step tasks instead of just chatting. That said, the leaderboard story remains messy. Preference-based rankings can reward answers that look plausible, and other labs—especially Anthropic—are still very much in the mix depending on whether you care most about text quality, code, or tool use. Gemini 3.1 Pro benchmark leap This brings us neatly to a bigger strategic question: what happens when frontier models start to feel interchangeable? Analyst Benedict Evans has a blunt take on OpenAI’s situation: no unique tech moat, a massive user base that doesn’t necessarily translate into deep engagement, and incumbents like Google and Meta bundling AI into products people already use daily. Evans also argues that much of the real value may come from entirely new “AI experiences”—workflows and interfaces that go beyond a chatbot—and those are hard for any one lab to invent and own alone. In his framing, OpenAI’s recent scattershot of initiatives reads like a race to find the next stable platform position before the market fully commoditizes the chatbot form factor. OpenAI’s strategy under pressure Meanwhile, the economics of AI in the workplace are looking a lot more incremental—and a lot more uneven—than the loudest narratives suggest. One synthesis making the rounds argues we’re not heading for an overnight white-collar wipeout, but we are seeing clear productivity impact in software development. Across large field experiments, coding assistants boosted developer task completion substantially, but once you account for partial adoption and the fact that coding isn’t all a developer does, the net project-level gain looks closer to a steady, meaningful bump—think around ten percent rather than magic doubling. There’s also a catch: quality and review. Data from engineering orgs suggests high-AI-adoption teams can ship more, but pull request review time balloons—human approval becomes the bottleneck. And the teams that benefit most aren’t always the ones you’d expect. Some benchmarks indicate senior engineers often get far more leverage than juniors, because they can spot subtle failures, steer architecture, and clean up the rough edges AI tends to produce. AI productivity: gains, bottlenecks If you’re building agent systems, a separate set of lessons is converging around the same theme: don’t confuse autonomy with value. Practical writeups from teams in the trenches emphasize tight evaluation loops—prompt to output to eval to iteration—plus strong observability, and “micro-agent” decomposition where each agent does one narrow thing reliably. Another recurring recommendation: use strict tooling and constraints wherever possible, because compile-time checks and structured interfaces can function like guardrails against the kind of silent, high-confidence mistakes that long tool chains are famous for. Alzheimer’s blood test clock model Let’s pivot to health tech, where we’ve got a story with real-world stakes. Re
Please support this podcast by checking out our sponsors: - Consensus: AI for Research. Get a free month - https://get.consensus.app/automated_daily - Build Any Form, Without Code with Fillout. 50% extra signup credits - https://try.fillout.com/the_automated_daily - KrispCall: Agentic Cloud Telephony - https://try.krispcall.com/tad Support The Automated Daily directly: Buy me a coffee: https://buymeacoffee.com/theautomateddaily Today's topics: Blood test predicts Alzheimer’s onset - Washington University researchers published a Nature Medicine study using plasma p‑tau217 biomarkers to predict Alzheimer’s symptom onset within ~3–4 years, enabling faster clinical trials and earlier intervention planning. Generative AI enters music apps - Google Gemini is rolling out Lyria 3 for 30‑second music generation plus AI cover art, while Apple Music adds Playlist Playground in iOS 26.4—raising fresh copyright and licensing questions for generative music. Meta’s AI chip build-out - Meta expanded a multiyear Nvidia deal to deploy millions of AI chips, including standalone Grace CPUs and next-gen Vera Rubin racks—part of a massive 2026 capex push for data centers and frontier models. Agentic coding hits GitHub Actions - GitHub Agentic Workflows, in technical preview, runs coding agents inside GitHub Actions with guardrails like read-only defaults, safe outputs, and auditing—aimed at issue triage, docs upkeep, CI debugging, and repo health reports. AI safety, law, and platforms - The UK proposes a 48-hour takedown rule for non-consensual intimate images with major fines, while Meta faces a landmark youth-safety trial and the Pentagon clashes with Anthropic over surveillance and autonomous-weapons limits. Payments sovereignty: UK and EU - The ECB argues a digital euro is needed for monetary sovereignty and cheaper merchant payments, as UK banks explore an account-to-account alternative to Visa and Mastercard to improve resilience and reduce fees. Markets rethink moats in AI - New essays argue ‘rocketship’ career picking is unreliable, software moats are shifting toward scarce industry positions, and AI forecasting splits into empiricists vs extrapolators—while SaaS stocks react to ‘AI eating software’ fears. Defense tech: longer-range missiles - Analysts say Russia is increasingly fielding long-range R‑37M air-to-air missiles on Su‑35 fighters, expanding theoretical threats at distance and complicating NATO air-operations planning. FDA reopens Moderna flu review - The FDA reversed a refusal-to-file and will review Moderna’s mRNA flu vaccine application under a revised filing strategy, with an approval decision expected by August 5, 2026—amid political and regulatory scrutiny of mRNA. Open-source agents and security - A hands-on OpenClaw experiment highlights how ‘skills’ could replace many apps, but also flags observability gaps and a serious exposure issue: tens of thousands of open Gateway instances reportedly leaked keys and widened attack surfaces. Episode Transcript Blood test predicts Alzheimer’s onset We’ll start in health tech, because today’s most concrete “future just arrived” headline is out of Washington University School of Medicine in St. Louis. In a Nature Medicine study published today, researchers describe clock-style models that use a single blood biomarker—plasma p‑tau217—to estimate when a person is likely to develop Alzheimer’s symptoms, typically within about three to four years of accuracy. The key point: they’re leaning on a known pattern in Alzheimer’s progression—amyloid and tau build up in the brain over time—and showing that blood measurements can act as a practical proxy. The team tested the approach across two cohorts totaling 603 older adults, and importantly, across different testing methods, including at least one FDA-cleared assay in the ADNI dataset. They’ve also released code and a web app for researchers. This is not a consumer diagnosis tool yet, and it’s not a guarantee for any individual. But for clinical trials—especially prevention trials where timing is everything—being able to identify who’s likely to convert to symptoms in a defined window could dramatically cut costs and speed recruitment compared to PET scans or spinal taps. Generative AI enters music apps Staying in biomed policy for a moment: the FDA has reversed its recent refusal to review Moderna’s mRNA flu vaccine application. After a formal Type A meeting, the agency agreed to accept a revised regulatory approach and move the filing forward. The dispute appears to have centered less on Moderna’s vaccine itself, and more on the Phase 3 trial’s comparator choice—standard-dose flu shots, rather than what regulators argued is the best-available standard of care in older groups. Moderna’s compromise is essentially to split the filing: pursue traditional approval for ages 50 to 64, and accelerated approval for 65-plus, paired with a confirmatory effectiveness trial. Assuming the review proceeds on schedule, the FDA decision is expected by August 5th, 2026. The broader subtext here is that mRNA has become politically charged again, and that has real consequences for investment and planning across vaccine R&D. Meta’s AI chip build-out Now to consumer AI, where the next battleground seems to be… music. Google says Gemini can generate 30-second music tracks from text prompts, photos, or even user-uploaded video, using a new DeepMind model called Lyria 3. Users can ask for instrumentals or add lyrics, and Google is also pairing it with an image model—yes, really named “Nano Banana”—to create cover art for sharing. Rollout starts on Gemini desktop, then hits mobile shortly after, with availability for users 18 and older in multiple languages. Google is putting clear limits on free usage—about 10 track generations a day—while paid tiers get more. And notably, Google says users have rights to use the tracks they generate. Apple, meanwhile, is taking a different angle: it’s not generating songs, it’s generating playlists. A new Apple Music feature called Playlist Playground, bundled into iOS 26.4, turns prompts into a curated playlist with cover art, a description, and 25 songs. That’s a direct shot at similar prompt-based playlist features, including the one Spotify has been experimenting with. The immediate market reaction was telling—Spotify shares briefly gave up gains after Google’s music-gen news—but analysts don’t see it as existential. The real story is that AI creation tools are sliding into mainstream consumer surfaces, which forces the music industry to keep wrestling with licensing, training data, and what “original” even means. Google says it’s using filters, blocks obvious “make it like artist X” lifting, and claims Lyria 3 is trained on music it has rights to use under YouTube and partner agreements. Expect that claim to be tested—legally and culturally. Agentic coding hits GitHub Actions On AI hardware and scale: Meta expanded a multiyear partnership with Nvidia to deploy what it calls “millions” of AI chips across its data-center build-out. The headline details include next-gen GPUs, rack-scale Vera Rubin systems, and—importantly—Nvidia’s Grace CPUs used at large scale as standalone data-center chips. No price tag was announced, but analysts are reading this as a tens-of-billions type commitment, lining up with Meta’s plan to spend up to $135 billion on AI in 2026. Meta also highlighted networking gear like Spectrum-X Ethernet, and even Nvidia security capabilities tied to AI features on WhatsApp. The practical takeaway: demand is still running ahead of supply for top-tier AI compute, and big platforms are locking in allocation. Meta says it’s not exclusively dependent on Nvidia—it’s designing its own silicon and using AMD too—but this is a very clear “we’re buying capacity and co-designing for it” signal. And in a smaller-but-related Meta note: reports say the company is again exploring a smartwatch for 2026, code-named Malibu 2, potentially with a built-in Meta AI assistant. Meta already has smart glasses momentum; a watch could become the wrist controller for that ecosystem—or it could be a very expensive experiment. The article’s phrasing was apt: execution matters, or it risks a Fire Phone-style flop. AI safety, law, and platforms Let’s shift to AI agents and software development, where the tools are getting more “hands-on” inside existing workflows. GitHub has introduced Agentic Workflows in technical preview—coding agents running inside GitHub Actions. The pitch is “Continuous AI”: you describe outcomes in plain Markdown, and an agent does the repo chores that don’t fit neatly into deterministic YAML. Think issue triage, docs maintenance, simplifying code, improving tests, investigating CI failures, and producing periodic health reports. What’s notable is the security posture. GitHub is leaning hard on defense-in-depth: read-only by default, explicit approvals for write actions, safe outputs like pull requests or issue comments, tool allowlists, and auditing. Also, they’re not locking you into one model—workflows can be configured to use engines like Copilot CLI, Claude Code, or OpenAI Codex. At the same time, a cautionary counterpoint came from developer and data tooling leader Wes McKinney, who’s been writing about “agentic” coding changing his habits—sometimes literally his sleep schedule. His point isn’t “agents are bad,” it’s that cheap code generation can create an ‘agentic tar pit’: more code, more scope creep, more architectural drift, and a bigger human burden to preserve conceptual integrity. In other words, the bottleneck moves upward—from typing code to making good decisions about what the system should be. That theme also shows up in the observability world. New Relic is running a big virtual event next week on “Intelligent Observability” and agentic operations—very much the “AI should remediate incidents” narrative. Put all of this together and you can see where 2026 is heading: agents everywhere, a
Please support this podcast by checking out our sponsors: - Invest Like the Pros with StockMVP - https://www.stock-mvp.com/?via=ron - Effortless AI design for presentations, websites, and more with Gamma - https://try.gamma.app/tad - Consensus: AI for Research. Get a free month - https://get.consensus.app/automated_daily Support The Automated Daily directly: Buy me a coffee: https://buymeacoffee.com/theautomateddaily Today's topics: Air Force flies a microreactor - Operation Windlord saw a U.S. Air Force C-17 airlift a microreactor system—eight Valar Atomics Ward250 modules—for DOE testing in Utah. Keywords: microreactor, C-17, Ward250, TRISO fuel, energy resilience. U.S.-Hungary civilian nuclear cooperation - The U.S. and Hungary signed a civilian nuclear cooperation deal on Feb. 16, aiming to shift Hungary toward U.S. fuel and small modular reactor partnerships. Keywords: Rubio, Orbán, SMRs, spent-fuel storage, Russian energy influence. Pentagon’s voice-controlled drone swarms - SpaceX and xAI were picked for a secretive Pentagon prize challenge to turn spoken commands into autonomous multi-drone swarm actions, raising human-in-the-loop concerns. Keywords: DIU, DAWG, autonomous weapons, voice control, Grok. AI boom triggers memory chip crunch - AI data-center buildouts are soaking up DRAM and HBM capacity, pushing prices higher and pressuring phones, cars, and consoles. Keywords: DRAM shortage, HBM, Nvidia racks, Micron, price spikes. AI job fears and policy gaps - Tech leaders’ automation timelines—like Mustafa Suleyman’s 12–18 month claims—are intensifying worker anxiety, while critics argue safety nets and ‘trigger’ policies lag behind. Keywords: white-collar automation, UBI, training, labor market, AI hype. Claude Code opacity sparks backlash - Anthropic changed Claude Code’s default logs to hide file names unless expanded, prompting developers to demand better auditability and security visibility. Keywords: agent transparency, file paths, terminal logs, verbose mode, developer trust. Gemini misuse and AI phishing - Google says threat actors tried to use Gemini for phishing, malware support, and influence ops—and warns AI makes phishing more polished and personalized. Keywords: GTIG, APT actors, AI phishing, social engineering, model distillation. Apple pushes deeper into video podcasts - Apple Podcasts will unify audio and video feeds this spring, add picture-in-picture, offline video downloads, and HLS-based dynamic video ad insertion. Keywords: Apple Podcasts, video podcasts, HLS, dynamic ads, creator distribution. ByteDance Seedance faces Disney threat - ByteDance says it will add safeguards to Seedance after Disney threatened legal action over alleged copyrighted character generation and viral clips. Keywords: Seedance 2.0, copyright, Disney, Marvel, Star Wars. Brain tech: BCIs and spinal repair - Two reality-checks in neurotech: BCIs remain bandwidth-limited and medical-first, while Northwestern’s mini spinal cord organoids show promising regeneration signals with ‘dancing molecules.’ Keywords: BCI limits, neural privacy, spinal cord organoids, microglia, regenerative nanomedicine. Episode Transcript Air Force flies a microreactor Let’s start with nuclear tech meeting logistics. The U.S. Air Force has carried out what’s being described as a first: a C-17 Globemaster III airlifting a nuclear reactor system. The mission—Operation Windlord—moved a microreactor package called the Ward250, built by Valar Atomics. In total, eight reactor modules are being transferred from March Air Reserve Base in Southern California to Hill Air Force Base in Utah, using three C-17s. From there, the components head to the Utah San Rafael Energy Lab in Orangeville for extended testing under the Department of Energy’s Nuclear Reactor Pilot Program—set up after President Trump’s Executive Order 14301 last year. The pitch is straightforward: microreactors could keep critical defense sites running even if local grids fail, and could power remote installations where grid access is weak or nonexistent. Technically, the Ward250 is framed as a next-generation design—helium coolant, graphite moderators, and TRISO fuel, which is basically uranium kernels wrapped in protective ceramic layers. Earlier reporting pegs the target around 100 kilowatts thermal, and Valar’s founder says the DOE selection is tied to a goal of reaching criticality on U.S. soil by July 4, 2026. One open question: why fly it instead of trucking it. The story hints security and established nuclear-transport certifications may have been the deciding factors. U.S.-Hungary civilian nuclear cooperation Staying with nuclear power, but shifting to geopolitics: the U.S. and Hungary signed a civilian nuclear cooperation agreement on February 16th. Secretary of State Marco Rubio went to Budapest, met Prime Minister Viktor Orbán, and signed a deal meant to kick off what the State Department called decades of cooperation—covering small modular reactors and spent-fuel storage. The strategic subtext is hard to miss. Hungary’s current nuclear fleet is deeply tied to Russian technology and fuel, and the U.S. is clearly trying to pull Central Europe’s energy orbit away from Moscow—and, by extension, Beijing. Under this agreement, Hungary is expected to buy nuclear fuel from American suppliers for the first time, and Holtec is slated to help with spent fuel management. It also lands two months before Hungary’s April elections, so the timing is politically charged on both sides. Pentagon’s voice-controlled drone swarms Now to defense tech, where AI and autonomy keep creeping closer to operational decision-making. According to people familiar with the process, SpaceX—and its fully owned AI subsidiary, xAI—have been selected to compete in a Pentagon prize challenge for voice-controlled autonomous drone swarms. The contest is reportedly a roughly $100 million prize pool with only a small number of participants. The objective: take spoken commands and translate them into digital instructions that coordinate multi-drone swarms across domains—air and sea are explicitly mentioned. The program is being run through the Defense Innovation Unit and a newer group under U.S. Special Operations Command. It’s organized in phases, starting with software and working toward real-world testing—eventually described in terms like “launch to termination.” What’s notable here is the scope. Some entries appear limited to a mission-control layer, but SpaceX and xAI are expected to participate across the full project. That’s raising eyebrows, particularly around how much generative AI should influence lethal systems, and how strong the human-in-the-loop safeguards really are. AI boom triggers memory chip crunch Let’s talk compute—specifically, the kind you can’t get enough of right now. A global memory-chip squeeze is starting to look less like a blip and more like a structural bottleneck. DRAM supply is tightening as AI data-center expansion hoovers up capacity, especially for high-bandwidth memory used alongside accelerators. The ripple effects are already landing in earnings calls and product planning. Apple has warned the shortage could compress iPhone margins. Tesla has said constraints may limit output—while Elon Musk floated the idea of building a memory fab, framing the choice as basically: hit the chip wall or make your own wall. Prices are moving fast—one DRAM category reportedly jumped about 75% from December to January—and some sellers are repricing daily. Meanwhile, memory makers are shifting lines toward HBM, which helps AI racks but leaves less “plain” DRAM for phones, PCs, cars, and consoles. Even entertainment hardware is getting dragged in—reports suggest Sony has contemplated pushing its next PlayStation deeper into the late 2020s, and Nintendo has weighed pricing moves. If you want the simplest mental model: AI accelerators don’t just need chips, they need oceans of memory beside them. One modern high-end rack can consume enough RAM to equal the memory in roughly a thousand premium smartphones. And new fabs take years—not quarters—to come online. AI job fears and policy gaps Against that backdrop, it’s no surprise the investing narrative is back to “AI infrastructure is on fire.” One market take making the rounds argues a handful of major players could spend on the order of hundreds of billions this year on AI data centers, with trillion-dollar annual infrastructure spending floated as a 2030 possibility. The specific winners people keep circling are predictable: Nvidia for GPUs and the CUDA ecosystem; Broadcom for custom AI ASIC work and networking; Micron because memory demand is spiking—especially HBM; and TSMC because it’s the manufacturing choke point for advanced nodes whether the world buys GPUs or custom chips. You don’t have to buy the hype to accept the near-term reality: hardware supply chains are becoming strategic terrain. Claude Code opacity sparks backlash Now, the human side of the AI transition—because the technical story keeps colliding with labor, trust, and plain old anxiety. Microsoft AI chief Mustafa Suleyman is the latest executive to put an aggressive timeline on white-collar automation, saying AI could reach human-level performance on many professional tasks and replace a lot of computer-based work within 12 to 18 months. Lawyers, accountants, project managers, marketing roles—he’s describing a broad blast radius. That rhetoric is landing at the same time more writers and engineers are saying, publicly, that the “vibes” around AI have changed. A recurring critique is that if industry leaders truly believe their own timelines, we should already see serious policy proposals—trigger-based training funding, wage insurance, automatic stabilizers, or some UBI-style contingency planning. Instead, what many people see is a mismatch: loud claims about disruption, and quiet follow-through on societal guardrails. On the career front, investors and operators are also des
Welcome to 'The Automated Daily', your ultimate source for a streamlined and insightful daily news experience. Please support this podcast by checking out our sponsors: - Consensus: AI for Research. Get a free month - https://get.consensus.app/automated_daily - Discover the Future of AI Audio with ElevenLabs - https://try.elevenlabs.io/tad - Prezi: Create AI presentations fast - https://try.prezi.com/automated_daily Support The Automated Daily directly: Buy me a coffee: https://buymeacoffee.com/theautomateddaily Today's topics: -USAF airlifts micro nuclear reactor -Fusion startup hits 150M°C -AI infrastructure spending breaks records -Engineering workflows redesigned for agents -Ads arrive inside AI chatbots -Meta explores face recognition glasses -Android 17 beta reshapes apps -Mini spinal cord tests regeneration -Tiny RNA enzyme self-copies -Kuiper Belt census set to explode -UK tightens online safety rules -Canada and Germany AI alliance -Who really controls web standards Subscribe to edition specific feeds: - Space news * Apple Podcast English Spanish (coming soon) French (coming soon) * Spotify English Spanish (coming soon) French (coming soon) * RSS English Spanish (coming soon) French (coming soon) - Top news * Apple Podcast English Spanish French * Spotify English Spanish French * RSS English Spanish French - Tech news * Apple Podcast English Spanish French * Spotify English Spanish Spanish * RSS English Spanish French - Hacker news * Apple Podcast English Spanish French * Spotify English Spanish French * RSS English Spanish French - AI news * Apple Podcast English Spanish French * Spotify English Spanish French * RSS English Spanish French Visit our website at https://theautomateddaily.com/ Send feedback to feedback@theautomateddaily.com Youtube LinkedIn X (Twitter)
Welcome to 'The Automated Daily', your ultimate source for a streamlined and insightful daily news experience. Please support this podcast by checking out our sponsors: - Consensus: AI for Research. Get a free month - https://get.consensus.app/automated_daily - KrispCall: Agentic Cloud Telephony - https://try.krispcall.com/tad - Invest Like the Pros with StockMVP - https://www.stock-mvp.com/?via=ron Support The Automated Daily directly: Buy me a coffee: https://buymeacoffee.com/theautomateddaily Today's topics: -Europe weighs nuclear deterrence -Trusted Tech Alliance launches standards -AI-only social network security risks -Ads arrive inside AI chatbots -ByteDance pivots hard to AI -Markets wobble on AI disruption -Glioblastoma trial tests immunotherapy Subscribe to edition specific feeds: - Space news * Apple Podcast English Spanish (coming soon) French (coming soon) * Spotify English Spanish (coming soon) French (coming soon) * RSS English Spanish (coming soon) French (coming soon) - Top news * Apple Podcast English Spanish French * Spotify English Spanish French * RSS English Spanish French - Tech news * Apple Podcast English Spanish French * Spotify English Spanish Spanish * RSS English Spanish French - Hacker news * Apple Podcast English Spanish French * Spotify English Spanish French * RSS English Spanish French - AI news * Apple Podcast English Spanish French * Spotify English Spanish French * RSS English Spanish French Visit our website at https://theautomateddaily.com/ Send feedback to feedback@theautomateddaily.com Youtube LinkedIn X (Twitter)
Welcome to 'The Automated Daily', your ultimate source for a streamlined and insightful daily news experience. Please support this podcast by checking out our sponsors: - KrispCall: Agentic Cloud Telephony - https://try.krispcall.com/tad - Discover the Future of AI Audio with ElevenLabs - https://try.elevenlabs.io/tad - Prezi: Create AI presentations fast - https://try.prezi.com/automated_daily Support The Automated Daily directly: Buy me a coffee: https://buymeacoffee.com/theautomateddaily Today's topics: -Europa diskutiert nukleare Abschreckung -Trusted Tech Alliance und Souveränität -UN-Panel für KI-Bewertung -Googles Gemini-Upgrade und Benchmarks -Model-Extraction Angriffe auf Chatbots -Spotify entwickelt Software per KI -KI erschüttert Logistik- und Trucking-Aktien -Interstellarer Komet mit organischen Molekülen -Neue Immuntherapie-Studie gegen Glioblastom -Genetischer Auslöser für VITT erklärt Subscribe to edition specific feeds: - Space news * Apple Podcast English Spanish (coming soon) French (coming soon) * Spotify English Spanish (coming soon) French (coming soon) * RSS English Spanish (coming soon) French (coming soon) - Top news * Apple Podcast English Spanish French * Spotify English Spanish French * RSS English Spanish French - Tech news * Apple Podcast English Spanish French * Spotify English Spanish Spanish * RSS English Spanish French - Hacker news * Apple Podcast English Spanish French * Spotify English Spanish French * RSS English Spanish French - AI news * Apple Podcast English Spanish French * Spotify English Spanish French * RSS English Spanish French Visit our website at https://theautomateddaily.com/ Send feedback to feedback@theautomateddaily.com Youtube LinkedIn X (Twitter)
loading
Comments