Artificial Developer Intelligence

Author: Shimin Zhang, Dan Lasky, & Rahul Yadav
Description

Three engineer friends argue about AI so you don't have to.

Shimin Zhang, Dan Lasky, and Rahul Yadav are working developers who've been watching AI transform their profession in real time, and they've got opinions on the robot takeover. Every week the three get together to riff on the latest AI news, geek out over research papers, roast each other's tool choices, and occasionally have an existential crisis about whether the craft is dying or just getting weird.

What you're signing up for:
- AI news without the LinkedIn cringe: model drops, acquisitions, open-source drama, and the other stuff that actually matters if you write code for a living.
- Technique corner: real tips from the trenches on spec-driven development, multi-agent orchestration, Claude.md tricks, and all the ways they've wasted hours so you don't have to.
- Two Minutes to Midnight: the show's running AI bubble tracker, complete with circular funding diagrams, hyperscaler CAPEX math, and a doomsday clock they keep arguing about moving.
- Deep dives that (occasionally) go deep: hallucination neurons, agentic memory, workflow automation economics, and LLM architectures — the papers nobody else is covering because they're hard.
- Dan's Rant: Dan frequently gets mad about things. It's a whole thing.
- The feelings segment: Yes, Shimin reads Tennyson on a tech podcast. Yes, Rahul wrote an AI-generated country song. No, they're not sorry.

Three friends with strong opinions, questionable metaphors, and genuine love for the craft they're also mourning. If you want to understand AI deeply, use it without embarrassing yourself, and laugh at the absurdity of it all, pull up a chair.
16 Episodes
In this episode, Dan, Shimin, and Rahul cover the Pentagon drama between Anthropic/OpenAI and the Department of Defense over AI usage red lines, introduce Sterling 8B — the first inherently interpretable language model — and explore verified spec-driven development (VSDD). The episode features the show's first interview, with Martin Alderson discussing which web frameworks are most token-efficient for AI agents.

Takeaways
- Pentagon AI drama: Anthropic's contract red lines (no mass domestic surveillance, no autonomous weapons), the Department of Defense threatening to label Anthropic a supply chain risk, OpenAI swooping in with a competing contract under vague "lawful use" terms, and Sam Altman's statements
- Sterling 8B by Guide Labs: the first inherently interpretable LLM, with concept attribution, input context tracing, and training data attribution; uses a concept head with orthogonal loss functions to create non-overlapping interpretable concepts
- Verified Spec-Driven Development (VSDD): a methodology by DollSpace combining spec-driven development, TDD, and adversarial verification gates at each phase; Shimin tested it on a side project using Claude Code
- Interview with Martin Alderson: a web-framework token-efficiency experiment (19 frameworks; minimal frameworks like Flask and Express were the most efficient), discovering new frameworks in the AI age, using Open Code for CI/CD PR reviews, keeping Claude.md files updated via scheduled tasks, and building internal CLIs for agent access
- Two Minutes to Midnight: the Citadel Securities report on AI adoption S-curves vs. recursive improvement, the Substack post about a white-collar job crisis that shook the S&P 500, and Block laying off 45% of its workforce citing AI productivity gains

Resources Mentioned
- Anthropic and the Department of War
- Sam Altman's Tweet
- Our agreement with the Department of War
- "All Lawful Use": Much More Than You Wanted To Know
- Steerling-8B: The First Inherently Interpretable Language Model
- Verified Spec-Driven Development (VSDD)
- Which web frameworks are most token-efficient for AI agents?
- The 2026 Global Intelligence Crisis
- 'A feedback loop with no brake': how an AI doomsday report shook US markets
- Block shares soar as much as 24% as company slashes workforce by nearly half
- Eli Dourado's Tweet

Chapters
(00:00) - Introduction to ADI
(02:55) - Pentagon Drama and AI Models
(21:36) - OpenAI vs Anthropic: The Contract Controversy
(28:19) - Innovations in AI: Interpretable Language Models
(28:42) - Scaling Language Models and Their Implications
(29:09) - Introduction to Verified Spec Driven Development
(33:47) - Interview with Martin Alderson
(55:21) - AI Bubble Watch: Current Trends and Predictions
(58:47) - The Impact of AI on Job Markets
(01:04:00) - Reflections on AI's Role in the Economy

Connect with ADIPod
Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello! Check out our website: www.adipod.ai

This episode covers the Sonnet 4.6 and Gemini 3.1 Pro model releases, Taalas Labs' FPGA-based 17K tokens/sec hardware, the Meta-AMD chip partnership, Steven Sinofsky's argument against "software is dead," a deep dive into the ThoughtWorks Future of Software Engineering retreat findings (from Agile Manifesto signers), Chris Roth's article on elite AI engineering culture, a Vibe & Tell segment testing agent sycophancy across three models, and AI bubble economics.

Takeaways
- Sonnet 4.6: Opus-level reasoning at Sonnet pricing; 72.5 on OS World (vs. 61.4 for Sonnet 4.5); outperforms Opus 4.6 on agentic financial analysis; trained for computer use
- Taalas Labs FPGA hardware: 17K tokens/sec for Llama 3.1 8B; the Chat Jimmy demo; custom hardware as the future of inference
- Steven Sinofsky's "Death of Software: Nah": historical parallels (the PC didn't kill mainframes, e-commerce didn't kill retail in 20 years, predictions of media's death were premature); predictions: more software, AI moves up the stack, domain expertise becomes more important; the Jevons paradox applied to software
- ThoughtWorks Future of Software Engineering retreat: the Agile Manifesto's 25th anniversary; where rigor goes (spec-driven development, red-green tests); risk tiering for code review; the loss of mentoring through code review; DevEx vs. agent-experience decoupling; security as an afterthought; the "middle loop" (overseeing agents); cognitive debt; agent topology mirroring org structure; knowledge graphs rediscovered; future roles converging; revenge of the juniors (IBM hiring); self-healing systems (2-5 year horizon)
- Vibe & Tell, agent sycophancy testing: the flat-earth test (all three models resisted); a workplace bias scenario (Jim/Jane); GPT 5.1 Instant best (refused all manipulation); Claude Haiku second (too empathetic, admitted to nudging); Gemini 3 worst (agreed with the bias claim); the risks of AI as therapist; radical candor vs. ruinous empathy

Resources Mentioned
- Introducing Claude Sonnet 4.6
- The path to ubiquitous AI
- OpenAI sidesteps Nvidia with unusually fast coding model on plate-sized chips
- Death of Software. Nah.
- The future of software engineering
- Building An Elite AI Engineering Culture In 2026
- The Number Is Going Up
- An AI coding bot took down Amazon Web Services

Chapters
(00:00) - Introduction to AI in Software Engineering
(01:13) - Latest AI Models and Hardware Innovations
(04:10) - The Future of AI Hardware
(10:01) - The Death of Software Debate
(19:35) - The Agile Manifesto and Its Evolution
(33:39) - The Impact of AI on Development Teams
(34:52) - The Future of Junior Developers
(37:11) - Self-Healing Systems and AI Assistance
(39:33) - Building an Elite AI Engineering Culture
(45:27) - AI Experiment and AI Sycophancy
(55:33) - The AI Bubble Clock and Economic Implications

This episode covers the Krabby Rathbun AI bot drama (automated PRs, a fabricated hit piece, an Ars Technica retraction), safety team shakeups at OpenAI and Anthropic, attempts to distill and clone Gemini, Perplexity model councils, and a heavily economics-flavored discussion of AI job displacement, tech debt as strategy, cognitive debt, and workflow automation convexity.

Takeaways
- Ars Technica AI-generating an article about an AI bot drama — and getting caught fabricating quotes — is peak 2026 irony
- Distillation/cloning is an unsolvable problem for frontier labs — they can't restrict usage without banning legitimate users
- Model councils (running multiple models plus a synthesis step) are becoming practical; the strongest model works best as judge, not necessarily as the one generating answers
- Cognitive debt may be more dangerous than tech debt — teams hit a wall when no one understands the codebase, usually around week 7-8 of heavy AI-assisted development
- Workflow automation follows convexity: a long period of minimal AI impact on jobs, then sudden full automation once AI can handle entire connected workflows, not just individual tasks

Resources Mentioned
- An AI Agent Published a Hit Piece on Me
- AI Bot crabby-rathbun is still going
- Exclusive: OpenAI disbanded its mission alignment team
- Mrinank Sharma's Departure Letter
- Attackers prompted Gemini over 100,000 times while trying to clone it, Google says
- Introducing Model Council
- llm-council
- Why I'm not worried about AI job loss
- You're Not Taking On Enough Tech Debt
- How Generative and Agentic AI Shift Concern from Technical Debt to Cognitive Debt
- Workflows and Automation
- Premium: The AI Data Center Financial Crisis
- The SaaSpocalypse Paradox

Chapters
(00:00) - Introduction and Lunar New Year Celebrations
(02:44) - AI Bot Controversy: Krabby Rathburn
(05:04) - AI Alignment and Departures in Major Labs
(07:39) - Google's Gemini and AI Cloning Concerns
(10:08) - Tool Shed: Exploring Model Consoles
(12:28) - Distillation and AI Model Development
(21:11) - Model Pledge Drive and Console Approaches
(23:07) - Post-Processing and AI's Impact on Work
(23:57) - AI's Role in Job Security and Economic Productivity
(30:30) - Reverse Centaurs and Naming Conventions
(32:09) - Tech Debt and Cognitive Debt
(36:24) - Cognitive Debt in AI-Assisted Programming
(48:16) - Cultural Shifts in Responsibility
(49:13) - Exploring Workflow and Automation
(52:07) - The Impact of AI on Job Structures
(54:23) - Tolerance for AI Mistakes
(56:59) - Documenting Knowledge for AI
(57:24) - Bifurcation of Tasks and Automation
(59:34) - The Future of Meetings in an AI World
(01:00:21) - State of the AI Bubble
(01:03:41) - Market Dynamics and Investment Strategies

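The model-council pattern from this episode — several models answer the same prompt independently, then a judge model synthesizes or selects — can be sketched as a small orchestration loop. This is a hypothetical illustration only, not the show's or any vendor's implementation; the member and judge functions below are stubs standing in for real model API calls.

```python
from typing import Callable, List

# A "model" here is anything that maps a prompt to an answer.
Model = Callable[[str], str]

def council(prompt: str, members: List[Model],
            judge: Callable[[str, List[str]], str]) -> str:
    """Fan the prompt out to every council member, then let the judge decide."""
    answers = [member(prompt) for member in members]
    return judge(prompt, answers)

# Stub members standing in for real model calls.
def model_a(prompt: str) -> str:
    return "Paris"

def model_b(prompt: str) -> str:
    return "The capital of France is Paris."

# Placeholder judge: a real council would prompt the strongest model to
# compare and synthesize; here we just pick the most detailed answer.
def pick_most_detailed(prompt: str, answers: List[str]) -> str:
    return max(answers, key=len)

result = council("What is the capital of France?", [model_a, model_b],
                 pick_most_detailed)
```

As the takeaway notes, the judge role matters most: in practice you would wire `judge` to the strongest available model even if cheaper models generate the candidate answers.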
This episode covers the simultaneous release of Claude Opus 4.6 and GPT Codex 5.3, a deep dive into the Pi coding agent framework and why Shimin prefers it over Claude Code, criticism of the AI security industry, software dark factories, an emotional segment mourning the craft of programming, Claude Code's new /insights command, and AI bubble economics including Anthropic's $20B raise, Google's 100-year bond, and Oracle's $50B debt plans.

Takeaways
- The biggest compliment for Codex 5.3 is that it feels like Claude Code now
- Opus 4.6 auto-drops into plan mode and offers to clear context after planning — it writes a plan.md it can follow across interruptions
- The Pi agent's skill-based approach may represent the bitter lesson of AI tooling — less scaffolding, more model intelligence
- The "everyone is a manager now" framing for agentic coding resonates — reduced dopamine from not doing the work with your own hands
- Context-switching burnout from running multiple agent instances is an emerging problem
- AI may freeze software innovation at whatever paradigm the training data captures (jQuery → React, but what comes after?)

Resources Mentioned
- Introducing Claude Opus 4.6
- Introducing GPT-5.3-Codex
- Opus 4.6, Codex 5.3, and the post-benchmark era
- Pi coding agent
- The AI Security Industry is Bullshit
- Software Factories And The Agentic Moment
- We mourn our craft
- Anthropic closes in on $20B round
- Oracle says it plans to raise up to $50 billion in debt and equity this year
- The New Announcement Economy

Chapters
(00:00) - Introduction to AI in Software Development
(03:02) - Latest AI Model Releases and Comparisons
(06:03) - Exploring AI Coding Agents
(08:55) - The Rise of Py Coding Agent
(12:08) - AI's Impact on Job Security
(15:01) - AI Security Concerns and Industry Insights
(33:08) - The Rise of AI Security Concerns
(36:30) - De-risking AI: Strategies and Challenges
(38:29) - The Emergence of Software Factories
(41:19) - Cloning Software: The Digital Twin Universe
(44:39) - In-house Development vs. SaaS Solutions
(46:57) - The Future of Compliance and Audit Industries
(51:52) - The Impact of AI on Software Development
(56:37) - Navigating the Emotional Landscape of AI Development
(01:07:55) - Mourning the Craft: A Country Song Reflection
(01:09:51) - Building Beyond Loss: Tennyson's Ulysses
(01:12:47) - Cloud Code Insights: Enhancing Development Workflows
(01:19:09) - The AI Bubble: Current Trends and Predictions
(01:24:00) - The Announcement Economy: News in the Age of AI
(01:30:04) - The Future of AI: Investment and Market Dynamics

In this episode, Dan and Shimin discuss the evolving landscape of AI programming, focusing on Anthropic's AI Constitution, OpenAI's new product Prism, and the implications of AI tools for coding skills. They explore the financial viability of AI companies, the concept of vibe coding, and the potential risks of an AI bubble. The conversation highlights the importance of understanding AI's impact on jobs and the ethical considerations surrounding AI development.

Takeaways
- Anthropic's AI Constitution raises questions about AI agency.
- AI tools can enhance or hinder the development of coding skills.
- The financial viability of AI companies is under scrutiny.
- Vibe coding can lead to a false sense of accomplishment.

Resources Mentioned
- Does Anthropic believe its AI is conscious, or is that just what it wants Claude to think?
- Exclusive: Pentagon clashes with Anthropic over military AI use, sources say
- OpenAI launches Prism, a new AI workspace for scientists
- Open Coding Agents: Fast, accessible coding agents that adapt to any repo
- Trinity Large
- ClawHub
- ClawdBot Skills Just Ganked Your Crypto
- MoltBook
- Superpowers: How I'm using coding agents in October 2025
- My Five Stages of AI Grief
- How AI assistance impacts the formation of coding skills
- Breaking the Spell of Vibe Coding
- Nvidia shares are down after a report that its OpenAI investment stalled. Here's what's happening
- Inside OpenAI's unit economics

In this episode of Artificial Developer Intelligence, Shimin, Dan, and Rahul discuss the evolving landscape of AI in programming and business. They explore Brex's AI strategy, the AI fluency pyramid, the state of open models, and innovations in AI tools like Claude Code.

Takeaways
- The AI fluency pyramid helps assess AI integration levels.
- Open models are still dominated by Chinese companies.
- Claude Code is evolving with a new swarm feature.
- The Claude Constitution aims to define ethical AI behavior.
- Economic disruption is a significant risk associated with AI.

Resources Mentioned
- Brex's AI Hail Mary — With CTO James Reggio
- Who's behind AMI Labs, Yann LeCun's 'world model' startup
- 8 plots that explain the state of open models
- GNOME's AI Assistant Newelle Adds Llama.cpp Support, Command Execution Tool
- https://www.getagentcraft.com/
- Claude Code Swarms
- Unrolling the Codex agent loop
- The Adolescence of Technology
- Claude's Constitution
- What if AI is both really good and not that disruptive?
- A new test for AI labs: Are you even trying to make money?
- Are AI agents ready for the workplace? A new benchmark raises doubts
- Microsoft CEO warns that we must 'do something useful' with AI or they'll lose 'social permission' to burn electricity on it

Chapters
(00:00) - Introduction to Artificial Developer Intelligence
(02:44) - Brex's AI Transformation Journey
(05:16) - AI Fluency Pyramid and Corporate Culture
(07:56) - World Models and AI Poetry
(10:30) - State of Open Models
(12:38) - Emerging Tools and Technologies
(15:04) - Claude Code and New Features
(17:42) - AI in Gaming and Real-World Applications
(23:02) - Open Source Collaboration in AI Development
(24:53) - The Future of AI: Swarm Intelligence and Cloud Code
(26:09) - Understanding the Codex Agent Loop
(30:59) - Prompt Engineering and Model Limitations
(33:33) - Ethical Considerations in AI Development
(37:47) - The Risks of AI: Economic Disruption and Autocracy
(40:46) - Finding Purpose in an AI-Driven World
(44:32) - The Claude Constitution: Values and Guidelines
(46:15) - The Role of AI in Society and Governance
(50:21) - Building a Blog with AI Tools
(55:51) - The AI Investment Bubble and Its Implications
(01:04:05) - Davos Insights on AI and Sustainability
(01:08:51) - ADI Intro.mp4

In this episode, hosts Shimin Zhang and Dan Lasky discuss OpenAI's plan to test ads in ChatGPT, the Anthropic Economic Index report, automation versus augmentation in AI's impact on jobs, the Gas Town AI development tool, the challenge of measuring developer productivity, and the current state of the AI bubble.

Takeaways
- The introduction of ads in ChatGPT raises privacy concerns.
- Automation vs. augmentation is a key theme in AI's impact on jobs.
- AI tools like Gastown are changing the landscape of software development.
- AI already has a decent baseline level of cognition.
- Measuring developer productivity is a complex challenge.
- AI tools may not always lead to financial gains.

Resources Mentioned
- OpenAI to begin testing ads on ChatGPT in the U.S.
- Anthropic Economic Index report: economic primitives
- GLM-4.7-Flash
- Welcome to Gas Town
- Gas Town Decoded
- The AI revolution is here. Will the economy survive the transition?
- Claude Code creator Boris shares his setup with 13 detailed steps, full details below
- The Unreasonable Effectiveness of RNNs
- Apptwo AI researchers are now funded by Solana
- Majority of CEOs report zero payoff from AI splurge
- AI companies will fail. We can salvage something from the wreckage

Chapters
(00:00) - Introduction to the Podcast and Hosts
(02:32) - Mad Max and AI: A Fun Introduction
(03:21) - OpenAI's New Advertising Strategy
(08:10) - Anthropic Economic Index Report Insights
(15:19) - The Future of Work in an AI-Driven World
(21:42) - Introducing Gas Town: A New Tool for AI Development
(29:17) - The Quirky World of Software Naming Conventions
(30:19) - Multi-Agent Systems: Pros and Cons
(31:57) - The Philosophy of Gas Town: Embracing Chaos
(33:39) - Tech Insights: Cloud Code and Agent Management
(40:21) - The AI Revolution: Economic Implications and Productivity
(51:07) - Technical Difficulties and Communication Challenges
(51:31) - Exploring Gas Town and Workflow Innovations
(56:38) - The Role of AI in Education
(01:02:00) - The AI Bubble: Current State and Future Outlook

The podcast "Artificial Developer Intelligence" features host Shimin Zhang and co-host Dan Lasky discussing the evolving landscape of AI in programming, recent news, innovative tools, and the implications of AI for various sectors. They explore the partnership between Apple and Google, the concept of 'doom coding', and how humans make LLM-like mistakes. The conversation also delves into the token efficiency of programming languages, a deep dive into dynamic large concept models, and societal perceptions of AI, culminating in a discussion of the potential AI bubble.

Takeaways
- Apple's partnership with Google marks a significant shift in AI development.
- Doom coding encourages productive use of time instead of doom scrolling.
- Public perception of AI is heavily influenced by marketing hype.
- Programming languages vary in token efficiency, affecting AI interactions.
- Dynamic large concept models offer a new approach to language processing.

Resources Mentioned
- Google's Gemini to power Apple's AI features like Siri
- Chinese AI models have lagged the US frontier by 7 months on average since 2023
- doom-coding
- TimeCapsuleLLM
- Emergent Behavior: When Skills Combine
- LLM problems observed in humans
- Which programming languages are most token-efficient?
- Dynamic Large Concept Models: Latent Reasoning in an Adaptive Semantic Space
- 'We've Done Our Country a Great Disservice' by Offshoring: Nvidia's Jensen Huang Says 'We Have to Create Prosperity' for All, Not Just PhDs
- Computer scientist Yann LeCun: "Intelligence really is about learning"
- Are we in an AI bubble? What 40 tech leaders and analysts are saying, in one chart

The podcast "Artificial Developer Intelligence" features host Shimin Zhang and guest co-host Rahul Yadav discussing the evolving landscape of AI in software engineering. They cover recent AI-related acquisitions, such as Nvidia's purchase of Groq and Meta's acquisition of Manus, and explore the implications of these moves. The conversation also delves into the challenges and opportunities AI presents in the tech industry, including its role in automation and its potential to reshape job roles. The episode concludes with a discussion of the AI bubble and its impact on the economy, highlighting the balance between technological advancement and financial stability.

Takeaways
- Nvidia's acquisition of Groq highlights strategic tech investments.
- Meta's purchase of Manus aims to bolster AI capabilities.
- Even world-class AI scientists can feel behind in the rapidly developing AI field.
- Not all bubbles are negative; technological bubbles can bring real efficiencies at the cost of investor capital.

Resources Mentioned
- Ho Ho Ho, Groq+NVIDIA Is A Gift
- Meta Platforms buys Manus to bolster its agentic AI skillset
- Karpathy's Tweet
- Everyone is a Staff Engineer Now
- ZhangDong's 2025 Letter
- AI faces closing time at the cash buffet
- As A.I. Companies Borrow Billions, Debt Investors Grow Wary

Chapters
(00:00) - Introduction to AI in Software Engineering
(02:51) - Acquisitions in AI: Nvidia and Grok
(05:11) - Meta's Acquisition of Manus
(09:54) - Andrej Karpathy's Reflections on Programming
(19:51) - Tool Shed: Gemini in Chrome
(24:55) - Posts of the Week: Staff Engineers and Future Predictions
(35:48) - Reflections on AI Progress and Future Predictions
(42:25) - Innovations in Technical Writing with AI
(51:13) - The Role of AI in Internal Documentation
(59:44) - Navigating the AI Bubble: Current Trends and Insights
(01:12:51) - ADI Intro.mp4

In this episode, Shimin and Dan explore the latest advancements in AI coding, including NVIDIA's new models, the implications of AI-generated code, and the outcome of Anthropic's Project Vend, an experiment in AI management of vending machines. They also discuss the significance of multi-agent systems in coding and the concept of vibe coding, and delve into research on hallucination neurons in large language models. The episode concludes with a year-end review reflecting on the rapid developments in AI technology throughout 2025.

Takeaways
- AI-generated code has been found to create more problems than human code.
- AI in vending machines has led to humorous and unexpected outcomes.
- Multi-agent systems can enhance the coding process by providing diverse solutions.
- H-neurons in LLMs are linked to hallucination and overcompliance.
- Year-end reflections highlight the rapid adoption of AI in the industry.
- The future of AI coding looks promising with ongoing innovations.

Resources Mentioned
- NVIDIA Nemotron 3 Family of Models
- GLM-4.7: Advancing the Coding Capability
- Our new report: AI code creates 1.7x more problems
- Project Vend: Phase two
- Claude Code Changelog
- One Agent Isn't Enough
- Vibe Coding
- H-Neurons: On the Existence, Impact, and Origin of Hallucination-Associated Neurons in LLMs
- 2025 LLM Year in Review

Chapters
(00:00) - Introduction to AI Coding Landscape
(05:00) - GLM 4.7 and Chinese AI Models
(09:27) - Project Vend: AI Vending Machine Experiment
(13:42) - Using Multiple AI Agents for Coding
(22:51) - Exploring Agent-Based Approaches
(30:28) - Deep Dive into Hallucination Neurons
(36:07) - Dan's Rant: Context Management in AI

In this episode of "Artificial Developer Intelligence," hosts Shimin Zhang and Dan explore the latest advancements in AI, including the release of GPT 5.2 and its implications for the industry. They discuss the integration of Claude Code into Slack, Mistral AI's new coding model, and the MindEval framework for assessing AI's clinical competence. The episode also features a deep dive into AI-generated user interfaces and a lively discussion of the evolving role of hackers in the tech industry.

Takeaways
- GPT 5.2 offers incremental improvements and new modes for AI applications.
- Claude Code's integration into Slack aims to streamline coding workflows.
- Mistral AI's new model targets the coding space with open-weight strategies.
- OpenAI's enterprise products show significant adoption, especially in non-coding sectors.

Resources Mentioned
- Introducing GPT-5.2
- Claude Code is coming to Slack, and that's a bigger deal than it sounds
- Mistral AI surfs vibe-coding tailwinds with new coding models
- Introducing MindEval: a new framework to measure LLM clinical competence
- AI should only run as fast as we can catch up
- Useful patterns for building HTML tools
- Ask HN: How can I get better at using AI for programming?
- Claude Agent Skills: A First Principles Deep Dive
- Generative UI: A rich, custom, visual interactive user experience for any prompt
- CoreWeave CEO defends AI circular deals as 'working together'
- OpenAI boasts enterprise win days after internal 'code red' on Google threat

Chapters
(00:00) - Introduction to AI in Software Engineering
(02:40) - Latest Developments in AI Models
(09:12) - Innovations in AI Coding Assistants
(12:11) - Benchmarking AI Clinical Competence
(12:59) - Techniques for Effective AI Utilization
(17:48) - Exploring AI Tools for Web Development
(22:01) - Personal Experiences with AI Models
(26:30) - Deep Dive into Claude's Agent Skills
(27:40) - Exploring Skill Invocation in AI Tools
(31:38) - Generative UI: The Future of Interactive Experiences
(36:36) - Ranting About Context Management in AI
(44:21) - The Hacker Ethos in Software Development
(50:37) - Two Minutes to Midnight: AI Bubble Watch
(51:40) - ADI Outro

In this episode, Shimin and Dan explore the evolving landscape of AI in software engineering, discussing the implications of the Claude Opus 4.5 soul document, the ethical considerations of AI models, and the impact of AI on developer productivity. They delve into spec-driven development, the latest advancements in AI models like DeepSeek v3.2, and the intersection of AI and mental health. The conversation also touches on the potential AI bubble and the challenges developers face in integrating AI tools effectively.

Takeaways
- The Claude Opus 4.5 soul document reveals insights into AI model training.
- Spec-driven development is a promising approach for AI-assisted coding.
- DeepSeek v3.2 showcases advancements in reasoning models.
- AI models can exhibit traits similar to human emotions and traumas.
- Skills in AI may not always resolve context issues effectively.

Resources Mentioned
- How AI is transforming work at Anthropic
- Claude 4.5 Opus Soul Document
- 12 Factor Agents
- Understanding Spec-Driven-Development: Kiro, spec-kit, and Tessl
- From DeepSeek V3 to V3.2: Architecture, Sparse Attention, and RL Updates
- When AI Takes the Couch: Psychometric Jailbreaks Reveal Internal Conflict in Frontier Models
- Are we really repeating the telecoms crash with AI datacenters?
- Anthropic CEO weighs in on AI bubble talk and risk-taking among competitors
- Time until the AI bubble bursts
- Microsoft's Attempts to Sell AI Agents Are Turning Into a Disaster

In this episode of Artificial Developer Intelligence, hosts Shimin and Dan discuss the evolving landscape of AI in software engineering, touching on OpenAI's recent challenges, the significance of Google TPUs, and effective techniques for working with large language models. They also take a deep dive into general agentic memory, share insights on code quality, and assess the current state of the AI bubble.

Takeaways
- Google's TPUs are designed specifically for AI inference, offering advantages over traditional GPUs.
- Effective use of large language models requires avoiding common anti-patterns.
- AI adoption rates are showing signs of flattening out, particularly among larger firms.
- General agentic memory can enhance the performance of AI models by improving context management.
- Code quality remains crucial, even as AI tools make coding easier and faster.
- Smaller, more frequent code reviews can enhance team communication and project understanding.
- AI models are not infallible; they require careful oversight and validation of generated code.
- The future of AI may hinge on research rather than mere scaling of existing models.

Resources Mentioned
- OpenAI Code Red
- The chip made for the AI inference era – the Google TPU
- Anti-patterns while working with LLMs
- Writing a good claude md
- Effective harnesses for long-running agents
- General Agentic Memory Via Deep Research
- AI Adoption Rates Starting to Flatten Out
- A trillion dollars is a terrible thing to waste

In this episode of Artificial Developer Intelligence, hosts Shimin and Dan explore the latest advancements in AI models, including the release of Claude Opus 4.5 and Gemini 3. They discuss the implications of these models for software engineering, the rise of open-source models like Olmo 3, and enhancements to the Claude Developer Platform. The conversation also delves into the challenges of relying on AI for coding tasks, the potential pitfalls of the AI bubble, and the future of written exams in the age of AI.

Takeaways
- Claude Opus 4.5 sets new benchmarks, enhances usability, and reduces token consumption.
- The introduction of open-source models like Olmo 3 is a significant development in AI.
- The future of written exams may be challenged by AI's ability to generate human-like responses.
- Relying too heavily on AI can lead to a lack of critical thinking and problem-solving skills.
- The AI bubble clock stands at 25 seconds to midnight.
- Recent research suggests that AI models can improve their performance by emulating query-based search.
- The importance of prompt engineering in AI interactions is highlighted.

Resources Mentioned
- Introducing Claude Opus 4.5
- Build with Nano Banana Pro, our Gemini 3 Pro Image model
- Andrej Karpathy's Post about Nano Banana Pro
- Olmo 3: Charting a path through the model flow to lead open-source AI
- Introducing advanced tool use on the Claude Developer Platform
- TiDAR: Think in Diffusion, Talk in Autoregression
- SSRL: Self-Search Reinforcement Learning
- Mira Murati's Thinking Machines seeks $50 billion valuation in funding talks, Bloomberg News reports
- Boom, bubble, bust, boom. Why should AI be different?
- Nvidia didn't save the market. What's next for the AI trade?

Chapters
(00:00) - Introduction to Artificial Developer Intelligence
(01:25) - Claude Opus 4.5
(07:02) - Exploring Gemini 3 and Image Models
(11:24) - Olmo 3 and The Rise of Open Flow Models
(15:46) - Innovations in AI Tools and Platforms
(19:33) - Research Insights: Diffusion and Auto-Regression Models
(23:39) - Advancements in AI Output Efficiency
(25:45) - Exploring Self Search Reinforcement Learning
(27:48) - The Dilemma of Language Models
(30:11) - Prompt Engineering and Search Integration
(32:55) - Dan's Rants on AI Limitations
(38:17) - 2 Minutes to Midnight
(46:41) - Outro

In this episode of Artificial Developer Intelligence, hosts Shimin and Dan explore the latest developments in AI, including Google's Gemini 3 model and its implications for software engineering. They discuss the rise of AI-driven cybersecurity threats, the concept of world models, and the evolving landscape of software development techniques. The conversation also delves into the ethical considerations of AI compliance and the challenges of running open-weight models. Finally, they reflect on the current state of the AI bubble and its potential future.

Takeaways
- The rent for running AI models is too high.
- The AI bubble may burst, but it can still lead to innovation.
- Persuasion techniques can influence AI behavior.
- World models are changing how we understand AI.
- Gemini 3 shows significant improvements over previous models.
- Cybersecurity threats are evolving with AI technology.
- Software development is becoming more meta-focused.

Resources Mentioned
- Disrupting the first reported AI-orchestrated cyber espionage campaign
- GTIG AI Threat Tracker: Advances in Threat Actor Usage of AI Tools
- Why Fei-Fei Li, Yann LeCun and DeepMind Are All Betting on "World Models" — and How Their Bets Differ
- Google's new Gemini 3 model arrives in AI Mode and the Gemini app
- Code research projects with async coding agents like Claude Code and Codex
- ADK architecture: When to use sub-agents versus agents as tools
- I have seen the compounding teams
- Call Me A Jerk: Persuading AI to Comply with Objectionable Requests
- In Search of the AI Bubble's Economic Fundamentals
- The Benefits of Bubbles | Stratechery by Ben Thompson
- Is Perplexity the first AI unicorn to fail?

Chapters
- (00:00) Introduction to Artificial Developer Intelligence
- (02:44) AI in Cybersecurity: Threats and Innovations
- (07:35) World Models: Understanding AI Cognition
- (11:41) Gemini 3: A New Era for AI Models
- (13:31) Benchmarking AI: The Vending Bench 2
- (16:18) Techniques for AI Development
- (18:59) Code Search Use Case
- (22:11) ADK Architecture
- (27:27) Post of the Week: Compounding Teams
- (31:16) Persuasion Techniques in AI: A Deep Dive
- (36:17) Dan's Rant on The Cost of Running Open-Weight Models
- (45:09) 2 Minutes to Midnight
- (57:45) Outro

Connect with ADIPod
Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello! Check out our website: www.adipod.ai
In this episode of Artificial Developer Intelligence, hosts Shimin and Dan explore the rapidly evolving landscape of AI, discussing recent news, benchmarking challenges, and the framing of AGI as a conspiracy theory. They delve into the latest techniques in AI development, ethical considerations, and the potential impact of AI on human intelligence. The conversation culminates in a look at the latest advancements in LLM architectures and the ongoing concerns surrounding the AI bubble.

Takeaways
- Benchmarking AI performance is fraught with challenges and potential biases.
- AGI is increasingly viewed as a conspiracy theory rather than a technical goal.
- New LLM architectures are emerging to address context limitations.
- Ethical dilemmas in AI models raise questions about their decision-making capabilities.
- The AI bubble may lead to significant economic consequences.
- AI's influence on human intelligence is a growing concern.

Resources Mentioned
- AI benchmarks are a bad joke – and LLM makers are the ones laughing
- Technology Radar V33
- How I use Every Claude Code Feature
- How AGI became the most consequential conspiracy theory of our time
- Beyond Standard LLMs
- Stress-testing model specs reveals character differences among language models
- Meet Project Suncatcher, Google's plan to put AI data centers in space
- OpenAI CFO Sarah Friar says company isn't seeking government backstop, clarifying prior comment

Chapters
- (00:00) Introduction to Artificial Developer Intelligence
- (02:26) AI Benchmarks: Are They Reliable?
- (08:02) ThoughtWorks Tech Radar: AI-Centric Trends
- (11:47) Techniques Corner: Exploring AI Subagents
- (14:17) AGI: The Most Consequential Conspiracy Theory
- (22:57) Deep Dive: Limitations of Current LLM Architectures
- (34:13) Ethics and Decision-Making in AI
- (38:41) Dan's Rant on the Impact of AI on Human Intelligence
- (43:26) 2 Minutes to Midnight
- (50:29) Outro

Connect with ADIPod
Check out our website: www.ADIpod.ai