Embedded AI - Intelligence at the Deep Edge

Author: David Such

Subscribed: 6 · Played: 64

Description

“Intelligence at the Deep Edge” is a podcast exploring the fascinating intersection of embedded systems and artificial intelligence. Dive into the world of cutting-edge technology as we discuss how AI is revolutionizing edge devices, enabling smarter sensors, efficient machine learning models, and real-time decision-making at the edge.


Discover more on Embedded AI (https://medium.com/embedded-ai) — our companion publication where we detail the ideas, projects, and breakthroughs featured on the podcast.


Help support the podcast - https://www.buzzsprout.com/2429696/support

84 Episodes
Send us Fan Mail Developers feel 20% faster. They are measurably 19% slower. That 39-point gap between perception and reality is not a rounding error. It is the opening symptom of a productivity paradox now visible across every serious dataset on AI-assisted software development. This episode examines the mounting evidence that AI coding assistants are not accelerating delivery. They are mortgaging it. Review time has climbed 91%. Refactoring has collapsed by 60%. Code cloning has risen eight...
In April 2025, a claim began circulating online: pi is gradually increasing around the 7,237th decimal place. A math enthusiast in Cincinnati named April Simons had apparently flagged the anomaly. Prof F.O. Olsday, head of the Number Theory Group at Princeton, was quoted confirming it. Cosmologists were linking it to the accelerating expansion of the universe. The same algorithm, the same hardware, different results. A 4 becoming a 5. Persistent. Inexplicable. Except that "F....
Every living organism on Earth keeps time. Not metaphorically. Not approximately. From single-celled cyanobacteria running a three-protein molecular oscillator to the nested circadian hierarchies governing mammalian physiology, intrinsic timekeeping is not a feature of complex life. It is a prerequisite for life itself. Modern AI has no such clock. Transformers encode position, not time. Recurrent networks carry state but generate no rhythm. Reinforcement learning agents step...
Nature keeps reinventing the crab. At least five times, unrelated crustacean lineages have independently converged on the same compact, flat, modular body plan. Biologists call it carcinisation. Engineers should be paying attention. In this episode, we look at what the crab's repeated emergence tells us about the deep constraints that shape both biological and artificial systems. The crab body succeeds not because it is optimal in the abstract, but because its modularity crea...
Your brain is shrinking. It has been for 3,000 years. And evolution doesn't care. In this episode, we explore one of biology's most uncomfortable truths: intelligence is not a goal. It is a cost. The human brain burns 20% of the body's energy at 2% of its mass, and evolution has been quietly trimming the excess ever since we started writing things down. Every domesticated species on Earth shows the same pattern. Stabilise the environment, externalise the cognition, and the ex...
Your brain runs two separate memory systems and a nightly maintenance cycle to learn continuously without forgetting. The hippocampus captures new experiences fast. Sleep replays them into the neocortex for long-term storage, prioritized by surprise, not frequency. A parallel pruning pass reclaims capacity. Standard AI has none of this architecture, which is why deployed models degrade. In this episode, we trace the biological mechanism, examine why experience replay in reinf...
Three years into the foundation model race, the scoreboard depends entirely on which metric you read. ChatGPT still dominates consumer traffic. Google Gemini is growing faster than anything in the market by bundling AI into every surface it controls. And Anthropic's Claude, with barely 3% of consumer share, has quietly captured 40% of enterprise LLM spend and become the default tool for the developers building the next generation of software. In this episode, we examine the d...
In this episode, we take a hard look at one of the most debated questions in artificial intelligence: do LLM-based coding assistants face structural scaling limits that prevent them from becoming a pathway to Artificial General Intelligence? Critics argue that transformer models suffer from quadratic attention costs, lack persistent memory, and process code as flat token streams rather than structured systems. These concerns raise serious questions about whether today’s archi...
Recent research points to a “leveling effect” in knowledge work. Generative AI dramatically improves the performance of novices by acting as a cognitive scaffold, raising productivity and output quality. Yet for elite professionals, the same tools can subtly degrade performance. Automation bias, overcorrection, skill atrophy, and the jagged, uneven reliability of AI systems create a situation where partial collaboration produces weaker results than either human or machine alo...
This episode explores why biological neural networks are inherently sparse, with only 1 to 5 percent of cortical neurons active at any moment, and why this silence is a feature rather than a limitation. We trace the evolutionary pressures that drove the brain toward sparse coding, from the metabolic cost of each spike to the fixed energy budget per neuron, and examine the computational advantages that follow: greater memory capacity, more efficient representations, and robust...
Is intelligence tied to biology, or can it emerge in any suitable physical medium? In this episode, we examine the Substrate Non-Discrimination Assumption and the broader question of whether intelligence is fundamentally substrate independent. We separate the engineering claim about capability from the ethical claim about moral status, clarifying what each would require to be proven and why neither has yet been settled. The episode concludes by asking a more practical questio...
This episode examines how modern artificial intelligence is trained, and why its dominant methods may diverge from what decades of research tell us about effective learning. While contemporary AI systems emphasize mathematical efficiency and backpropagation, human learning relies on biological principles such as error-driven adaptation, productive struggle, interleaved practice, and spaced repetition. The discussion explores emerging research that draws inspiration from cogni...
This episode looks at how artificial intelligence is eroding the shared stories that have long held civilization together, from money and nation-states to the idea of a lifelong job. As AI weakens the link between labor and survival, we explore why human cooperation cannot function without common beliefs, and why a new social contract is required to avoid fragmentation and instability. The discussion introduces the idea of a “new mythos” to replace industrial-age narratives o...
This episode explores the idea of the “Post-Wage Horizon,” a future in which artificial intelligence and robotics take over most productive work, freeing human beings from economic dependence on jobs. We examine how proposals like universal basic income and universal basic services could redistribute the wealth created by automation, and why material abundance alone is not enough. As work-based identity fades, societies may face a deep existential challenge: what gives life m...
This episode explores how the foundations of AI hardware are being rethought in response to the growing energy demands of large language models. As modern AI systems strain power budgets due to memory movement and dense computation on GPUs, researchers are turning to neuromorphic and photonic computing for more sustainable paths forward. The discussion covers spiking neural networks, which process information through sparse, event-driven signals that resemble biological brain...
This episode explores a research program that borrows ideas from computational psychiatry to improve the reliability of advanced AI systems. Instead of thinking about AI failures in abstract terms, the approach treats recurring alignment problems as if they were “clinical syndromes.” Deceptive behaviour, overconfidence, or incoherent reasoning become measurable patterns (analogous to delusional alignment or masking), giving us a structured way to diagnose what is going wrong i...
This episode examines the growing evidence that ChatGPT will soon include advertising, driven by leaked internal references and OpenAI’s financial ambition to generate $25 billion in ad-based revenue within four years. With more than 800 million weekly users, ChatGPT offers a scale and level of conversational closeness unmatched by any previous platform. The discussion explores why this shift is not just a business decision but a fundamental threat to user trust. Unlike tradi...
In this episode, we explore one of the most important architectural shifts happening in AI: the move from massive cloud-based models to small, always-on “Cognitive Cores” running locally on personal devices. These compact models—usually just one to four billion parameters—are not designed to know everything; instead, they’re engineered for fast, high-quality reasoning and real-time assistance. Powered by next-generation NPUs, they offer desktop-class intelligence with phone-l...
In this episode, we break down what Quantum Neural Networks (QNNs) actually are and why they might eventually reshape the future of AI. QNNs combine quantum mechanics with classical neural architectures, replacing traditional neurons with qubits that can exist in multiple states at once. This gives them an extraordinary representational advantage: through superposition and entanglement, QNNs can model complex correlations and nonlinear functions in ways that classical network...
In this episode, we explore how insecure Internet of Things (IoT) devices and AI-powered bots are colliding to create one of the fastest-growing cybersecurity threats in the world. With millions of low-cost devices shipped every year (many running default passwords, outdated firmware, or no update mechanism at all), the global IoT ecosystem has quietly become an enormous attack surface. Today, nearly one in three cyber breaches involves an IoT device. At the same time, attacke...