Crazy Wisdom
Author: Stewart Alsop
© 2026 Stewart Alsop
Description
In his series "Crazy Wisdom," Stewart Alsop explores cutting-edge topics, particularly in the realm of technology, such as Urbit and artificial intelligence. Alsop embarks on a quest for meaning, engaging with others to expand his own understanding of reality and that of his audience. The topics covered in "Crazy Wisdom" are diverse, ranging from emerging technologies to spirituality, philosophy, and general life experiences. Alsop's unique approach aims to make connections between seemingly unrelated subjects, tying together ideas in unconventional ways.
535 Episodes
In this episode of Crazy Wisdom, Stewart Alsop sits down with Andre Oliveira, founder of Splash N Color, a bootstrapped 3D printing e-commerce business selling consumer goods on Amazon. The two cover a lot of ground — from how Andre went from running 40 FDM printers out of South Florida to offshoring manufacturing to China, to how he's using Claude Code to automate inventory management and generate supplier RFQs across 200+ SKUs. The conversation stretches into bigger territory too: the San Francisco AI scene, the rise of AI agents and what they mean for the future of the internet, whether local on-device AI will eventually replace cloud-based tools, and why building physical products will stay hard long after software becomes easy. It's a candid, wide-ranging conversation between two self-taught builders figuring things out in real time. Follow Andre on X: @AndreBaach.

Timestamps
00:00 — Andre introduces Splash N Color, his Amazon-based 3D printing e-commerce business, and explains the grind of running 40 FDM machines in South Florida.
05:00 — The conversation shifts to Claude Code and how Andre built an inventory automation system to manage sales velocity and RFQs across 200+ SKUs.
10:00 — Stewart and Andre compare notes on Opus 4.6, debate Codex vs Claude, and Andre breaks down the new Agent Teams feature in Claude Code.
15:00 — Discussion turns to the San Francisco AI scene, the viral OpenClaw launch event that drew 700 people, and what's capturing the city's imagination right now.
20:00 — The pair wrestle with data privacy, the illusion of it since 2000, and whether full transparency of personal data might actually serve people better.
25:00 — Stewart pitches his vision of local on-device AI replacing cloud tools entirely, and they debate the 10–15 year timeline for mainstream societal adoption.
30:00 — Andre traces his origin story: a high school dropout from Brazil who spotted a 3D printing opportunity on Facebook Marketplace and got lucky timing with COVID.
35:00 — They explore whether AI-generated 3D models and DfAM will automate physical manufacturing, and why proprietary specs keep the space stubbornly hard.

Key Insights
Lifestyle businesses deserve more respect. Andre spent months feeling inadequate scrolling through Twitter watching founders announce funding rounds, before realizing his cash-flowing, location-independent business was already the goal. The social media version of entrepreneurial success warped his perception of what he had actually built.
Claude Code is becoming an operating system. Stewart describes running Claude Code as having a second OS on top of macOS — one that makes the underlying machine legible in ways it never was before. Both use it not just for coding but as a primary interface for understanding and operating their businesses.
Agent Teams changes how work gets done. Andre explains that Claude's new multi-agent feature lets you assign a team lead and specialized roles that communicate with each other in parallel, essentially running an autonomous task force inside your terminal — a meaningful leap beyond single-instance prompting.
Physical manufacturing will stay hard. Even as AI-generated 3D models improve, tolerances of 0.5 millimeters can mean the difference between a product working or not. Design for manufacturing is a separate discipline from design itself, and proprietary specs mean open-source models rarely hit commercial quality.
The internet is heading toward agents. Both agree that AI agents will increasingly handle tasks humans currently do manually online — booking services, making payments, coordinating logistics — with the human internet potentially becoming secondary to a machine-to-machine layer.
Iteration is the real value of 3D printing. Andre pushes back on 3D printing as a business unto itself, framing it instead as a prototyping tool. The true value is rapid iteration on housing, tolerances, and fit — not the printer, but the speed of the feedback loop it enables.
Technology compounds in layers. Andre closes with a tech-tree analogy: each generation normalizes the tools of the previous one and builds the next layer on top. Agentic coding today is what the internet was in the 90s — the foundation for something we can't yet fully see.
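To make the inventory-automation idea concrete, here is a minimal sketch of the kind of sales-velocity and reorder logic described in the episode. The field names, thresholds, and SKU data are hypothetical; the episode does not specify how Andre's actual system is implemented.

```python
# Hypothetical sketch of sales-velocity-driven reorder flagging for a SKU
# catalog. Assumes trailing 30-day sales as the velocity signal.
from dataclasses import dataclass

@dataclass
class Sku:
    sku_id: str
    units_sold_30d: int   # trailing 30-day unit sales
    on_hand: int          # current inventory
    lead_time_days: int   # supplier lead time

def needs_reorder(sku: Sku, safety_days: int = 14) -> bool:
    velocity = sku.units_sold_30d / 30            # units sold per day
    cover_needed = velocity * (sku.lead_time_days + safety_days)
    return sku.on_hand < cover_needed             # reorder if cover runs out

catalog = [
    Sku("SPL-001", units_sold_30d=90, on_hand=40, lead_time_days=30),
    Sku("SPL-002", units_sold_30d=15, on_hand=120, lead_time_days=30),
]
to_reorder = [s.sku_id for s in catalog if needs_reorder(s)]
print(to_reorder)   # SPL-001 sells 3/day and needs ~132 units of cover
```

A real system over 200+ SKUs would pull sales data from the marketplace API and feed flagged SKUs into RFQ generation; the calculation itself stays this simple.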
Stewart Alsop sits down with Ulises Martins on the Crazy Wisdom podcast to explore how artificial intelligence is fundamentally disrupting professional careers, labor markets, and the pace of human adaptation itself. They discuss everything from Dario Amodei's concept of "technological adolescence" to the possibility that we're approaching a point where AI advancement accelerates beyond our ability to keep up, touching on topics ranging from the economics of software development and the future of warfare to generational differences in how people will respond to AI-driven change. Martins emphasizes that while we may not be able to predict exactly what's coming, we need to dramatically increase our efforts to learn and adapt — potentially doubling the time we invest in understanding AI — because this isn't optional change; it's disruption happening at unprecedented speed. Connect with Ulises on LinkedIn to follow his work in AI and generative technology.

Timestamps
00:00 — Stewart introduces Ulises Martins, framing the conversation around accelerationism and the future of work.
05:00 — Ulises uses the parent-child analogy to argue humans will no longer play the dominant role as AI surpasses us.
10:00 — Both agree learning AI is non-negotiable, urging listeners to double their investment in staying current.
15:00 — Discussion shifts to software as media, the collapsing cost of building products, and the risk of big players like Anthropic making your idea obsolete overnight.
20:00 — Ulises raises ecology vs. cosmic ambition, questioning whether humanity should aim for civilizational-scale goals like the Dyson sphere.
25:00 — Stewart's ESP32 hardware project illustrates AI's current blind spots beyond software, while both predict physical-world AI will arrive as a byproduct of bigger industrial goals.
30:00 — Tesla's birthplace in Croatia sparks a reflection on human genius as luck versus deliberate investment, invoking the Apollo program as a model.
35:00 — The US-China AI race is compared to the Cold War Space Race, with interdependency acting as a brake on outright conflict.
40:00 — Drone warfare and AI reframe military power, making troop size irrelevant and potentially reducing total war.
45:00 — Agile methodology and generational shifts are linked, asking how Gen Z's values will shape the AI era globally.
50:00 — Argentine vs. American Zoomers are contrasted, with millennial expectations versus Gen Z's pragmatism explored.
55:00 — Ulises closes urging everyone to enjoy the ride, taking the infinite stream of change one episode at a time.

Key Insights
1. The Death of Traditional Career Paths: The concept of professional careers as we know them — starting as a junior and progressively advancing — is becoming obsolete due to AI's rapid advancement. This applies far beyond software and SaaS companies, extending to all industries as robots and AI systems gain capabilities that fundamentally disrupt labor markets. The question isn't whether we'll adapt, but whether humans can adapt fast enough to keep pace with exponential technological change.
2. The Acceleration Imperative: People must dramatically increase their investment in learning about AI immediately. Whatever time you were previously dedicating to staying current with technology needs to be doubled or tripled. This isn't optional — it's comparable to the necessity of basic education. Unlike previous technological transitions, where you had years to learn new frameworks or tools, the current pace demands immediate, intensive engagement or you risk becoming irrelevant.
3. Software as Media and the Collapse of Development Economics: Software has become media — easily reproducible and increasingly commoditized through AI assistance. The fundamental economics of software development are collapsing: if building software requires dramatically fewer development hours, the value and price of that software must decrease. Entrepreneurs need a new evaluation framework that assesses the risk of their ideas being replicated by AI or absorbed by major players like Anthropic or OpenAI.
4. The Parent-Child Analogy for AI Development: Humanity's relationship with AI will inevitably mirror that of parents with increasingly capable children. Initially, we understand and control what AI does, but as it advances, it will surpass human capabilities in most domains. Just as parents cannot control fully grown adult children who exceed their abilities, humans will need to reconcile with creating something superior to ourselves. Attempting to permanently control such systems may be both impossible and pathological.
5. The Kardashev Scale and Civilizational Ambitions: AI represents a civilizational-level technology that should redirect humanity toward grander goals like capturing stellar energy through Dyson spheres and expanding beyond our solar system. The competition between China and the United States over AI mirrors the Apollo program's space race but with higher stakes — potentially making traditional concepts like money less relevant if we successfully crack general intelligence. This requires thinking beyond planetary constraints.
6. The Changing Nature of Warfare and Geopolitics: AI and autonomous weapons systems are fundamentally changing warfare by making human soldiers less relevant, much as nuclear weapons reduced the importance of conventional military force. This shift may actually reduce civilian casualties in conflicts between major powers, as drone warfare and AI-driven systems create new equilibriums. The geopolitical map may fracture into more sovereign states and city-states as centralized control becomes less effective.
7. Generational Adaptation and Unpredictability: Different generations will respond uniquely to AI disruption based on their values and experiences. Generation Z, having grown up during the pandemic without traditional expectations, may adapt differently than millennials, who experienced unmet expectations. However, we must remain humble about our predictive abilities — we're not good at forecasting technological change or its timing. The best approach is maintaining openness, trying to understand developments as they unfold, and accepting that we cannot consume all information in an era of unlimited AI-generated content.
In this episode of the Crazy Wisdom Podcast, host Stewart Alsop sits down with Jake Hamilton, founder of Groundwire and Nockbox, to explore zero-knowledge proofs, Bitcoin identity systems, and the intersection of privacy-preserving cryptography with AI and blockchain technology. They discuss how ZK proofs could offer an alternative to invasive identity verification systems being rolled out by governments worldwide, the potential for continual-learning AI models to shift the balance between centralized and open-source development, and why building secure, auditable computing infrastructure on platforms like Urbit matters more than ever as we face an explosion of AI agents and automated systems. Jake also explains Nockchain's approach to creating a global repository of cryptographically verified facts that can power trustless programmable systems, and how these technologies might converge to solve problems around supply chain security, personal data sovereignty, and resistance to censorship.

Timestamps
00:00 Introduction to Groundwire and Nockbox
02:48 Understanding Zero-Knowledge Proofs
06:04 Government Adoption of ZK Proofs
08:55 The Future of Identity Verification
11:52 AI and ZK Proofs: A New Era
14:54 The Role of Urbit in Technology
18:03 The Impact of COVID on Trust
20:51 The Evolution of AI and Data Privacy
23:47 The Future of AI Models
26:54 The Need for Local AI Solutions
29:51 Interoperability of Nockchain and Bitcoin

Key Insights
1. Zero-Knowledge Proofs Enable Privacy-Preserving Verification: Jake explains that ZK proofs allow you to prove computational outcomes without revealing the underlying data. For example, you could prove you're over 18 without exposing your full identity or driver's license information. The proof demonstrates that a specific program ran through certain steps and reached a particular conclusion, and validating the proof is fast and compact. This technology has profound implications for age verification, identity systems, and protecting privacy while maintaining necessary compliance, potentially offering a middle path between surveillance states and complete anonymity.
2. Government Adoption of Privacy Technology Remains Uncertain: Three competing motivations drive government identity verification systems: genuine surveillance desires, bureaucratic efficiency seeking, and legitimate child protection concerns. Jake believes these groups can be separated, with some officials potentially supporting ZK-based solutions if positioned correctly. He notes the EU is exploring ZK identity verification, and UK officials have shown interest. The key is framing privacy-preserving technology as protection against "the swamp" rather than just abstract privacy benefits, which could resonate with certain political constituencies.
3. The COVID Era Destroyed Institutional Trust at Unprecedented Scale: The conversation identifies COVID as potentially the largest institutional trust-burning event in human history, with numerous institutions simultaneously losing credibility with large portions of the population. This represents a dramatic shift from the boomer generation's default trust in authority figures and mainstream media. The collapse is compounded by the incoming AI revolution, creating a perfect storm in which established bureaucracies cannot adapt quickly enough to manage rapidly evolving technology, leaving society in fundamentally unmanageable territory.
4. Centralized AI Models Create Dangerous Dependencies: Both speakers acknowledge growing dependence on centralized AI services like Claude, with some users spending thousands monthly on tokens. This dependency creates vulnerability to price increases and service disruptions. Jake advocates local AI deployment using models like DeepSeek R1, running on personal hardware to maintain control and privacy. The shift toward continuous-learning models will fundamentally change the AI landscape, making personal data harvesting even more valuable and raising urgent questions about compensation and consent for training-data contribution.
5. High-Quality Training Data Is Becoming the Primary AI Bottleneck: Stewart argues that AI development is now limited more by high-quality training data than by compute. The industry has exhausted easily accessible internet data and body-shop-style data labeling; companies now use specialized boutique services with techniques like head-mounted cameras for live-streaming world-model training. This scarcity is subtly driving price increases across AI services and will reshape the economics of AI development, with implications for who controls these increasingly powerful systems.
6. Urbit Offers a Foundation for Trustworthy Computing: Jake positions Urbit as essential infrastructure for the AI age because its 30,000-line codebase (versus Unix's three million lines) can be understood by an individual human. Its deterministic, purely functional, and strictly typed design aims for eventual ossification — software that doesn't require constant security patches. This "tiny and diamond perfect" approach addresses the fundamental insecurity of systems requiring monthly vulnerability patches. In an era of AI agents and potential prompt-injection attacks, having verifiable, comprehensible computing infrastructure becomes existentially important rather than merely desirable.
7. Nockchain Creates a Global Repository of Provable Truth: Jake's vision for Nockchain combines ZK proofs with blockchain technology to create a globally available "truth repository" where verified facts can be programmatically accessed together. This enables smart contracts or programs gated on combinations of proven facts — such as temperature readings from secure devices, supply chain events, and payment confirmations. By using Nock's abstract, simple design, optimized for ZK proof generation, the system can validate complex real-world conditions without exposing underlying data, creating infrastructure for coordinating action based on verifiable private information at global scale.
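The "prove without revealing" idea can be illustrated with a classic Schnorr sigma protocol: the prover convinces a verifier it knows a secret exponent x with y = g^x mod p, without ever sending x. This is a toy sketch with tiny parameters, not Nockchain's or any production ZK construction, which use very different machinery.

```python
# Toy Schnorr proof of knowledge: prove you know x with y = g^x (mod p)
# without revealing x. Illustrative only -- tiny parameters, not the
# systems discussed in the episode.
import random

p, q, g = 23, 11, 2       # g generates a subgroup of prime order q in Z_p*

def prove(x, challenge=None):
    r = random.randrange(1, q)           # prover's one-time nonce
    t = pow(g, r, p)                     # commitment
    c = challenge if challenge is not None else random.randrange(1, q)
    s = (r + c * x) % q                  # response; x never leaves the prover
    return t, c, s

def verify(y, t, c, s):
    # accept iff g^s == t * y^c (mod p)
    return pow(g, s, p) == (t * pow(y, c, p)) % p

x = 7                      # the secret
y = pow(g, x, p)           # public value
t, c, s = prove(x)
print(verify(y, t, c, s))  # True: proof checks without exposing x
```

The verifier learns only that the equation balances; the nonce r blinds x in the response. Real ZK systems (SNARKs/STARKs) generalize this from "I know a discrete log" to "this program ran and produced this output."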
In this episode of the Crazy Wisdom podcast, host Stewart Alsop sits down with Markus Buehler, the McAfee Professor of Engineering at MIT, to explore how seemingly different systems — from proteins and music to knowledge structures and AI reasoning — share underlying patterns through hierarchy, self-organization, and scale-free networks. The conversation ranges from the limits of current AI interpolation versus true discovery (using the fire-to-fusion example), to the emergence of agent swarms and their non-linear effects, to practical questions about ontologies, knowledge graphs, and whether humans will remain necessary in the creative discovery process. Markus discusses his lab's work automating scientific discovery through AI agents that can generate hypotheses, run simulations, and even retrain themselves, while Stewart shares his own experiences building applications with AI coding agents and grapples with questions about intellectual property, materials science constraints, and the future of human creativity in an AI-abundant world.

Timestamps
00:00 - Introduction to Markus Buehler's work on knowledge graphs and structural grammar across proteins, music, and AI reasoning
05:00 - Discussion of AI discovery versus interpolation, using fire and fusion as examples of fundamental versus incremental innovation
10:00 - Language models as connective glue between agents, enabling communication despite imperfect outputs and canonical averaging
15:00 - Embodiment and agency in AI systems, creating adversarial agents that challenge theories and expand world models
20:00 - Emergent properties in materials and AI, comparing dislocations in metals to behaviors in agent swarms
25:00 - Human role-playing and phase separation in society, parallels to composite materials and heterogeneity
30:00 - Physical-world challenges, atom-by-atom manufacturing at MIT.nano, limitations of lithography machines
35:00 - Synthetic biology as an alternative to nanotechnology, programming microorganisms for materials discovery
40:00 - Intellectual property debates, commodification of AI models, control layers more valuable than model architecture
45:00 - Automation of ontologies, agent self-testing, daughter's coding success at age 11
50:00 - Graph theory for knowledge compression, neurosymbolic approaches combining symbolic and neural methods
55:00 - Nonlinear acceleration in AI, emergence from accumulated innovations, restaurant owner embracing AI
01:00:00 - Future generations possibly rejecting AI, democratization of knowledge, social media as real-time scientific discourse

Key Insights
1. Universal Patterns Across Disciplines: Seemingly different systems in nature — proteins, music, social networks, and knowledge itself — share fundamental structural patterns, including hierarchy, self-organization, and scale-free networks. This commonality allows creative thinkers to draw insights across disciplines, applying principles from one domain to solve problems in another. As an engineer and materials scientist, Buehler has leveraged these isomorphisms to advance scientific understanding by mapping the "plumbing" of different systems onto each other, revealing hidden relationships that enable extrapolation beyond what's observable in any single domain.
2. The Discovery Versus Interpolation Problem: Current AI systems, particularly large language models, excel at interpolation — recombining existing knowledge in new ways — but struggle with genuine discovery that requires fundamental rewiring of world models. Using the example of fire versus fusion, Buehler explains that an AI trained on combustion chemistry would propose bigger fires or new fuels but couldn't conceive of fusion, because that requires stepping back to more fundamental physics. True discovery demands the ability to recognize when existing theories have boundaries and to develop entirely new frameworks — something current AI architectures aren't designed to achieve, given their training objective of predicting the most likely outcome.
3. The Role of Ontologies and Knowledge Graphs: While some AI researchers argue that ontologies are unnecessary because models form internal representations, Buehler advocates explicit knowledge graphs as essential discovery tools. External ontologies provide sharp, analytical, symbolic representations that complement the fuzzy internal representations of neural networks. They enable verification of rare connections — like obscure papers that might hold key insights — which would be averaged away in standard AI training. This neurosymbolic approach combines the generalization capabilities of neural networks with the precision of formal knowledge structures, creating more powerful discovery systems.
4. Emergent Properties and Agent Swarms: Just as materials science shows that collections of atoms exhibit properties impossible to predict from individual components, AI agent swarms demonstrate emergent behaviors beyond single models. When agents are incentivized not just to answer questions but to challenge each other adversarially, propose theories, and test hypotheses, they can spawn new copies of themselves and evolve understanding beyond their initial programming. This emergence isn't surprising from a materials science perspective — dislocations, grain boundaries, and other collective phenomena only appear at scale, fundamentally determining material behavior in ways unpredictable from studying just a few atoms.
5. The Commoditization of Intelligence: The fundamental AI models themselves are becoming commodities, as evidenced by events like the Moldbug phenomenon, where people built agents using various providers interchangeably. The real value is shifting from who has the smartest model to how models are orchestrated, integrated, and deployed. This parallels historical technology adoption patterns — just as we moved past debating who makes the best electricity to focusing on applications, AI is transitioning from a horse race over model capabilities to questions of infrastructure, energy, access speed, and agent coordination at the systems level.
6. Human-AI Collaboration and Creative Control: Rather than wholesale replacement, AI enables humans to operate in an intensely creative space as orchestrators sampling from vast possibility spaces. Much as Buehler's 11-year-old daughter now builds sophisticated applications that would have required professional developers years ago, AI democratizes access to capabilities while humans retain creative judgment about direction and meaning. The human role becomes curating emergence, finding rare connections, playing at the edges of knowledge, and exercising the kind of curiosity-driven exploration that AI systems lack without embodied stakes in their own survival and continuation.
7. Technology as Evolutionary Inevitability: The development of AI represents not an unnatural threat but the next stage of human evolution — an extension of our innate drive to build models of ourselves and our world. From cave paintings to partial differential equations to artificial intelligence, humans continuously create increasingly sophisticated representations and tools. Attempting to stop this technological evolution is futile; instead, the focus should be on steering it ...
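The cross-domain knowledge-graph idea above can be sketched very simply: represent concepts as nodes, shared structural patterns as edges, and search for paths that link distant domains. The concepts and edges below are entirely hypothetical toy data, not Buehler's actual graphs.

```python
# Toy knowledge graph: find a path linking two domains through a shared
# structural pattern. Hypothetical data for illustration only.
from collections import deque

edges = {
    "protein folding": ["hierarchy", "self-assembly"],
    "music": ["hierarchy", "motifs"],
    "hierarchy": ["protein folding", "music", "knowledge graphs"],
    "self-assembly": ["protein folding"],
    "motifs": ["music"],
    "knowledge graphs": ["hierarchy"],
}

def find_path(start, goal):
    # breadth-first search over the concept graph
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in edges.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(find_path("protein folding", "music"))
# ['protein folding', 'hierarchy', 'music']
```

The interesting work in a real system is in building and weighting the edges; once the graph exists, surfacing a rare cross-domain connection is just path search.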
In this episode of the Crazy Wisdom podcast, host Stewart Alsop interviews John von Seggern, founder of Future Proof Music School, about the intersection of music education, technology, and artificial intelligence. They explore how musicians can develop timeless skills in an era of generative AI, the evolution of music production from classical notation to digital audio workstations like Ableton Live, and how AI is being used on the education side rather than for creation. The conversation covers music theory fundamentals, the development of instruments and recording technology throughout history, complex production techniques like sidechain compression, and the future of creative work in an AI-assisted world. John also discusses his development of Cadence, an AI voice tutor integrated with Ableton Live to help students learn music production. For those interested in learning more about Future Proof Music School or becoming a beta tester for the AI voice tutor, visit futureproofmusicschool.com.

Timestamps
00:00 Future-Proofing Musicians in a Changing Landscape
03:07 The Role of AI in Music Education
05:36 Generative AI: A Tool for Musicians?
08:36 The Evolution of Music Creation and Technology
11:30 The Impact of Recording Technology on Music
14:31 The Fragmentation of Culture and Music
17:19 Exploring Music History and Theory
20:13 The Relationship Between Music and Memory
23:07 The Future of Music Creation and AI
26:17 The Importance of Live Music Experiences
28:49 Navigating the New Music Landscape
31:47 The Role of AI in Finding New Music
34:48 The Creative Process in Music Production
37:33 The Future of Music Theory and Composition
40:10 The Search for Unique Artistic Voices
43:18 The Intersection of Music and Technology
46:10 Cultural Shifts in the Music Industry
49:09 Finding Quality in a Sea of Content

Key Insights
1. Future-proofing musicians means teaching evergreen techniques while adapting to AI realities. John von Seggern founded Future Proof Music School to address both sides of music education in the AI era. Students learn timeless production skills that won't become obsolete as technology evolves, while simultaneously exploring meaningful creative goals in a world where generative AI exists. The school uses AI on the education side to help students learn, but students themselves aren't particularly interested in using generative AI for actual music creation, preferring to keep their creative fingerprint on their work.
2. The 12-note Western music system emerged from mathematical relationships discovered by Pythagoras and enabled collaborative music-making. Pythagoras demonstrated that pitch relates to vibrating string lengths, establishing mathematical ratios for musical intervals. This system allowed Western classical music to flourish because it could be notated and taught consistently, enabling large groups to play together. However, the piano is never perfectly in tune, due to necessary compromises in the tuning system. By the 1920s, composers had explored most harmonic possibilities within this framework, leading to new directions in musical innovation.
3. Recording technology fundamentally transformed music by making the studio itself the primary instrument. The invention of audio recording in the early-to-mid 20th century shifted music from purely instrumental composition to sound-based creation. This enabled entirely new genres like electronic dance music and hip-hop, which couldn't exist without technologies like synthesizers and samplers. Modern digital audio workstations like Ableton Live give producers unlimited tracks and endless ways to manipulate sounds, making almost any imaginable sound possible and moving innovation from hardware to software.
4. Generative AI will likely replace generic music production but not visionary artists. John distinguishes between functional music (background music for films, work, or bars) and music where audiences deeply connect with the artist's vision. AI excels at generating functional music cheaply, which will benefit indie filmmakers and similar creators. However, artists with strong creative visions, whom audiences follow and identify with, won't be replaced. The creative fingerprint and personal statement of important artists will remain valuable regardless of the tools they use, just as DJs created art through curation rather than original production.
5. Copyright restrictions are limiting generative music AI's quality compared to other AI domains. Unlike books and visual art, recorded music copyrights are concentrated among a few companies that defend them aggressively. This prevents AI music models from training on the best music in each genre, resulting in lower-quality outputs. Some developers claim their private models trained on copyrighted music sound better than commercial offerings, but legal constraints prevent widespread access. This situation differs significantly from other creative domains, where training data is more accessible.
6. Modern music production involves complex technical skills like sidechain compression and multi-track mixing. Today's electronic music producers work with potentially hundreds of tracks, each with sophisticated processing. Techniques like sidechain compression let certain elements (like kick drums) dynamically reduce the volume of other elements (like bass), ensuring clarity in the final mix. Future Proof Music School teaches these production techniques, with some aspiring producers creating incredibly detailed compositions with intricate effects chains and interdependent track relationships.
7. Culture is fragmenting into micro-trends, making discovery rather than creation the primary challenge. John observes that while the era of mass media created mega-stars like The Beatles and Elvis, today's landscape features both enormous stars (like Taylor Swift) and an extremely long tail of creators making niche content. AI will make it easier for more people to create quality content, particularly in fields like independent filmmaking, but the real problem is discovery. Current algorithmic recommendations don't effectively surface hidden gems, suggesting a future where personal AI agents might better curate content based on individual preferences rather than platform-driven engagement metrics.
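The claim that "the piano is never perfectly in tune" has a precise arithmetic core: stacking twelve pure 3:2 fifths overshoots seven octaves, so equal temperament narrows every fifth slightly to make the circle close. A short calculation shows the gap:

```python
# Why equal temperament compromises: twelve pure 3:2 fifths != seven octaves.
import math

pure_fifth = 3 / 2                 # Pythagorean ratio for a perfect fifth
et_fifth = 2 ** (7 / 12)           # equal-tempered fifth (7 of 12 semitones)

comma = pure_fifth ** 12 / 2 ** 7  # the Pythagorean comma, ~1.0136
cents_off = 1200 * math.log2(pure_fifth / et_fifth)  # narrowing per fifth

print(f"pure fifth = {pure_fifth:.4f}, ET fifth = {et_fifth:.4f}")
print(f"twelve fifths overshoot seven octaves by a factor of {comma:.5f}")
print(f"each ET fifth is {cents_off:.2f} cents narrow of pure")
```

Equal temperament spreads that ~24-cent total discrepancy evenly across all twelve fifths, which is exactly the "necessary compromise" the insight describes: every key is usable, and no interval except the octave is acoustically pure.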
In this episode of the Crazy Wisdom Podcast, host Stewart Alsop sits down with Lars van der Zande, founder and CEO/technical architect of Inkwell Finance, for what Lars describes as his first-ever podcast appearance. The conversation covers a wide range of blockchain infrastructure topics, including Lars's work with Sui and Solana blockchains, the innovative capabilities of Ika's programmatic wallets and blockchain of signatures, and how Inkwell Finance is building revenue-based financing solutions for on-chain entities—from AI agents to protocols. They explore the evolving landscape of crypto regulation, the merging of traditional finance with blockchain technology, the future of decentralized legal systems, and how the user experience barrier is being lowered through technologies that eliminate constant transaction signing. Lars also discusses Inkwell's embedded financing approach and their pre-seed fundraising round.

Links mentioned:
- Inkwell's website: inkwell.finance
- Inkwell on Twitter: @__inkwell
- Lars on Twitter: @LMVDZande

Timestamps
00:00 Introduction to Inkwell Finance and Technical Architecture
02:06 Understanding Sui and Solana: Blockchain Dynamics
05:55 The Role of Ika in Inkwell Finance
11:51 Leviathan: Revenue Generation and Financing in Crypto
17:38 The Future of AI Agents and Programmatic Wallets
23:23 Smart Contracts: Legal Implications and Future Directions
25:06 The Future of Inkwell Finance
25:42 Decentralization and Its Evolution
27:32 The Merging of Traditional and Crypto Systems
29:33 Global Financial Dynamics and Market Reactions
31:48 The Collapse of Traditional Financial Systems
32:46 Jurisdictional Shifts in the Crypto World
33:59 Legal Systems and Blockchain Integration
35:57 On-Chain Credit and Financial Opportunities
39:29 The Role of AI in Finance
41:30 Learning from Peer-to-Peer Lending History
43:14 Disruption in Insurance and Risk Management
44:54 On-Chain vs Off-Chain Data
46:54 The Evolution of the Internet and Blockchain
49:12 Future Subscription Models in Blockchain

Key Insights

1. Ika's Revolutionary Blockchain Signature Technology: Lars discovered Ika, a blockchain of signatures built on Sui that enables any blockchain transaction to be signed without revealing the underlying message. Using patented 2PC-MPC technology, Ika splits key shares across validators and encrypts them in transit, performing complex cryptographic operations that allow smart contracts on Sui to generate signatures for transactions on any other blockchain. This eliminates the need to build separate smart contracts on each blockchain, fundamentally changing how cross-chain interactions work and opening possibilities for truly interoperable decentralized applications.

2. Programmatic Wallets vs Traditional Wallets: Traditional wallets like MetaMask require manual user approval for every transaction through a front-end interface, but Ika's D-wallet introduces programmatic wallets with policy-based controls embedded in smart contracts. These wallets can execute transactions based on predetermined conditions checked against on-chain data like oracle prices, without requiring individual user signatures. For example, a Bitcoin D-wallet can hold native Bitcoin without wrapping or bridging to a custodian, and smart contract policies determine when and how that Bitcoin can be transferred, creating unprecedented security and automation possibilities for decentralized finance.

3. Inkwell's Revenue-Based Financing Model: Inkwell Finance is building Leviathan, a revenue-based financing platform for on-chain entities including protocols, AI agents, and individual traders with verifiable track records. Borrowers receive capital based on their on-chain performance metrics like Sharpe ratio and drawdown, with loan repayment automatically deducted from their revenue stream. The profit split structure allocates approximately 60% to borrowers, 30% to lenders, and 10% split between Inkwell and integrating platforms.
This creates a sustainable lending model where flight risk is minimized through D-wallet policy controls that restrict how borrowed capital can be used.

4. Wallet-as-a-Protocol and the Future of User Experience: The crypto industry is moving toward embedded wallet solutions that eliminate the friction of traditional wallet management, with Wallet-as-a-Protocol representing the next evolution beyond services like Privy and Dynamic. Unlike current embedded wallets that lock users into specific applications, Wallet-as-a-Protocol enables single sign-on across multiple applications while users maintain control of their keys. Combined with app-sponsored gas fees, this approach allows non-crypto-native users to interact with blockchain applications without knowing they're using crypto, removing the biggest barrier to mainstream adoption and creating web2-like user experiences on web3 infrastructure.

5. AI Agents as Financial Entities: AI agents are emerging as revenue-generating entities with on-chain transaction histories that create verifiable track records for creditworthiness assessment. Inkwell Finance is specifically targeting this market, recognizing that AI agents will need wallets and capital to operate effectively. The programmatic nature of D-wallets pairs perfectly with AI agents, as policy controls can restrict agent behavior to specific smart contract interactions, preventing unauthorized fund transfers while allowing automated trading or revenue generation. This creates a new category of borrower that operates 24/7 with completely transparent performance metrics, fundamentally different from traditional loan recipients.

6. Cross-Chain Liquidity Without Asset Transfer: Ika's technology enables users to take loans against revenue generated on one blockchain and deploy that capital on entirely different blockchains without moving their original liquidity positions.
For instance, someone earning yield on Sui's Fusol protocol could borrow against that revenue stream and deploy capital on Solana opportunities, effectively creating multiple on-chain businesses that generate their own credit scores and revenue to service debt. This ability to read state across different blockchains from within smart contracts opens possibilities for multi-chain strategies that don't require withdrawing capital from productive positions, maximizing capital efficiency across the entire crypto ecosystem.

7. The Convergence of Traditional Finance and Crypto Infrastructure: The regulatory landscape is rapidly evolving with initiatives like the Genius Act and Clarity Act creating frameworks where traditional financial systems merge with crypto infrastructure through mechanisms like stablecoins backed by US treasuries. Companies are increasingly establishing entities in the United States to access capital networks and Delaware's established legal framework while issuing tokens through jurisdictions like Switzerland. This hybrid approach, combined with emerging concepts like Gabriel Shapiro's "cybernetic agreements" that make smart contract parameters legally enforceable in traditional courts, suggests the future isn't pure decentralization but rather a sophisticated integration of on-chain and off-chain legal and financial systems.
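The Leviathan profit split described above (roughly 60% to borrowers, 30% to lenders, 10% to the platform, with repayment deducted automatically from the revenue stream) can be illustrated with a toy calculation. This is a hypothetical sketch of revenue-based repayment in general: the function names, the rule that the lender's cut services the loan first, and the numbers are all invented for illustration and are not Inkwell's actual mechanism.

```python
# Toy sketch of a revenue-based repayment split (illustrative only).
# Assumes the approximate 60% borrower / 30% lender / 10% platform
# allocation mentioned in the episode; all names are hypothetical.

def split_revenue(revenue: float,
                  borrower_share: float = 0.60,
                  lender_share: float = 0.30,
                  platform_share: float = 0.10) -> dict:
    """Allocate one period's revenue among borrower, lender, and platform."""
    assert abs(borrower_share + lender_share + platform_share - 1.0) < 1e-9
    return {
        "borrower": revenue * borrower_share,
        "lender": revenue * lender_share,
        "platform": revenue * platform_share,
    }

def repay_from_stream(revenues: list[float], loan_principal: float) -> dict:
    """Deduct the lender's cut from each revenue event until the loan is repaid."""
    outstanding = loan_principal
    totals = {"borrower": 0.0, "lender": 0.0, "platform": 0.0}
    for r in revenues:
        parts = split_revenue(r)
        # The lender's cut services the loan first (an assumption of this
        # sketch); any excess flows back to the borrower.
        payment = min(parts["lender"], outstanding)
        outstanding -= payment
        totals["lender"] += payment
        totals["borrower"] += parts["borrower"] + (parts["lender"] - payment)
        totals["platform"] += parts["platform"]
    return {"repaid": loan_principal - outstanding,
            "outstanding": outstanding, **totals}

result = repay_from_stream([1000.0, 1000.0, 500.0], loan_principal=600.0)
print(result)
```

With three revenue events totaling 2500, the 600 loan is fully serviced out of the lender's 30% cut over the first two events, and the lender receives nothing further afterward; the whole point of the model is that repayment tracks the revenue stream rather than a fixed schedule.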
In this episode of the Crazy Wisdom Podcast, host Stewart Alsop sits down with Larry Swanson, a knowledge architect, community builder, and host of the Knowledge Graph Insights podcast. They explore the relationship between knowledge graphs and ontologies, why these technologies matter in the age of AI, and how symbolic AI complements the current wave of large language models. The conversation traces the history of neuro-symbolic AI from its origins at Dartmouth in 1956 through the semantic web vision of Tim Berners-Lee, examining why knowledge architecture remains underappreciated despite being deployed at major enterprises like Netflix, Amazon, and LinkedIn. Swanson explains how RDF (Resource Description Framework) enables both machines and humans to work with structured knowledge in ways that relational databases can't, while Alsop shares his journey from knowledge management director to understanding the practical necessity of ontologies for business operations. They discuss the philosophical roots of the field, the separation between knowledge management practitioners and knowledge engineers, and why startups often overlook these approaches until scale demands them. You can find Larry's podcast at KGI.fm or search for Knowledge Graph Insights on Spotify and YouTube.

Timestamps
00:00 Introduction to Knowledge Graphs and Ontologies
01:09 The Importance of Ontologies in AI
04:14 Philosophy's Role in Knowledge Management
10:20 Debating the Relevance of RDF
15:41 The Distinction Between Knowledge Management and Knowledge Engineering
21:07 The Human Element in AI and Knowledge Architecture
25:07 Startups vs. Enterprises: The Knowledge Gap
29:57 Deterministic vs. Probabilistic AI
32:18 The Marketing of AI: A Historical Perspective
33:57 The Role of Knowledge Architecture in AI
39:00 Understanding RDF and Its Importance
44:47 The Intersection of AI and Human Intelligence
50:50 Future Visions: AI, Ontologies, and Human Behavior

Key Insights

1.
Knowledge Graphs Combine Structure and Instances Through Ontological Design. A knowledge graph is built using an ontology that describes a specific domain you want to understand or work with. It includes both an ontological description of the terrain—defining what things exist and how they relate to one another—and instances of those things mapped to real-world data. This combination of abstract structure and concrete examples is what makes knowledge graphs powerful for discovery, question-answering, and enabling agentic AI systems. Not everyone agrees on the precise definition, but this understanding represents the practical approach most knowledge architects use when building these systems.

2. Ontology Engineering Has Deep Philosophical Roots That Inform Modern Practice. The field draws heavily from classical philosophy, particularly ontology (the nature of what you know), epistemology (how you know what you know), and logic. These thousands-year-old philosophical frameworks provide the rigorous foundation for modern knowledge representation. Living in Heidelberg surrounded by philosophers, Swanson has discovered how much of knowledge graph work connects upstream to these philosophical roots. This philosophical grounding becomes especially important during times when institutional structures are collapsing, as we need to create new epistemological frameworks for civilization—knowledge management and ontology become critical tools for restructuring how we understand and organize information.

3. The Semantic Web Vision Aimed to Transform the Internet Into a Distributed Database. Twenty-five years ago, Tim Berners-Lee, Jim Hendler, and Ora Lassila published a landmark article in Scientific American proposing the semantic web. While Berners-Lee had already connected documents across the web through HTML and HTTP, the semantic web aimed to connect all the data—essentially turning the internet into a giant database.
This vision led to the development of RDF (Resource Description Framework), which emerged from DARPA research and provides the technical foundation for building knowledge graphs and ontologies. The origin story involved solving simple but important problems, like disambiguating whether "Cook" referred to a verb, noun, or a person's name at an academic conference.

4. Symbolic AI and Neural Networks Represent Complementary Approaches Like Fast and Slow Thinking. Drawing on Kahneman's "thinking fast and slow" framework, LLMs represent the "fast brain"—learning monsters that can process enormous amounts of information and recognize patterns through natural language interfaces. Symbolic AI and knowledge graphs represent the "slow brain"—capturing actual knowledge and facts that can counter hallucinations and provide deterministic, explainable reasoning. This complementarity is driving the re-emergence of neuro-symbolic AI, which combines both approaches. The fundamental distinction is that symbolic AI systems are deterministic and can be fully explained, while LLMs are probabilistic and stochastic, making them unsuitable for applications requiring absolute reliability, such as industrial robotics or pharmaceutical research.

5. Knowledge Architecture Remains Underappreciated Despite Powering Major Enterprises. While machine learning engineers currently receive most of the attention and budget, knowledge graphs actually power systems at Netflix, Amazon (the product graph), LinkedIn (the economic graph), Meta, and most major enterprises. The technology has been described as "the most astoundingly successful failure in the history of technology"—the semantic web vision seemed to fail, yet more than half of web pages now contain RDF-formatted semantic markup through schema.org, and every major enterprise uses knowledge graph technology in the background.
Knowledge architects remain underappreciated partly because the work is cognitively difficult, requires talking to people (which engineers often avoid), and most advanced practitioners have PhDs in computer science, logic, or philosophy.

6. RDF's Simple Subject-Predicate-Object Structure Enables Meaning and Data Linking. Unlike relational databases that store data in tables with rows and columns, RDF uses the simplest linguistic structure: subject-predicate-object (like "Larry knows Stuart"). Each element has a unique URI identifier, which permits precise meaning and enables linked data across systems. This graph structure makes it much easier to connect data after the fact compared to navigating tabular structures in relational databases. On top of RDF sits an entire stack of technologies including schema languages, query languages, ontological languages, and constraints languages—everything needed to turn data into actionable knowledge. The goal is inferring or articulating knowledge from RDF-structured data.

7. The Future Requires Decoupled Modular Architectures Combining Multiple AI Approaches. The vision for the future involves separation of concerns through microservices-like architectures where different systems handle what they do best. LLMs excel at discovering possibilities and generating lists, while knowledge graphs excel at articulating human-vetted, deterministic versions of that information that systems can reliably use. Every one of Swanson's 300 podcast interviews over ten years ultimately concludes that regardless of technology, success comes down to human beings, their behavior, and the cultural changes needed to implement systems. The assumption that we can simply eliminate people from processes misses that huma...
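The subject-predicate-object structure described in the RDF insight above can be sketched as a toy in-memory triple store. This is a minimal illustration only: real RDF tooling (rdflib, SPARQL, and the rest of the stack) is far richer, and the example.org URIs and the little query helper here are invented for the sketch.

```python
# Toy triple store illustrating RDF's subject-predicate-object model.
# Real RDF work would use a library such as rdflib with SPARQL queries;
# the URIs below are invented placeholders, not a real vocabulary.

EX = "http://example.org/"  # hypothetical namespace

triples = {
    (EX + "Larry", EX + "knows", EX + "Stewart"),
    (EX + "Larry", EX + "hosts", EX + "KnowledgeGraphInsights"),
    (EX + "Stewart", EX + "hosts", EX + "CrazyWisdom"),
}

def query(s=None, p=None, o=None):
    """Return all triples matching the pattern; None acts as a wildcard."""
    return [
        t for t in triples
        if (s is None or t[0] == s)
        and (p is None or t[1] == p)
        and (o is None or t[2] == o)
    ]

# Who does Larry know? Each element is a full URI, which is what
# "permits precise meaning" across systems.
for _, _, obj in query(s=EX + "Larry", p=EX + "knows"):
    print(obj)
```

The pattern-matching query with wildcards is the essential trick: because every statement is the same three-part shape, data from different sources can be merged and joined after the fact, which is exactly what is hard in fixed table schemas.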
In this episode of the Crazy Wisdom Podcast, host Stewart Alsop explores the complex world of context and knowledge graphs with guest Youssef Tharwat, the founder of NoodlBox, who is building a ".git" for context. Their conversation spans from the philosophical nature of context and its crucial role in AI development, to the technical challenges of creating deterministic tools for software development. Tharwat explains how his product creates portable, versionable knowledge graphs from code repositories, leveraging the semantic relationships already present in programming languages to provide agents with better contextual understanding. They discuss the limitations of large context windows, the advantages of Rust for AI-assisted development, the recent Claude/Bun acquisition, and the broader geopolitical implications of the AI race between big tech companies and open-source alternatives. The conversation also touches on the sustainability of current AI business models and the potential for more efficient, locally-run solutions to challenge the dominance of compute-heavy approaches.

For more information about NoodlBox and to join the beta, visit NoodlBox.io.

Timestamps
00:00 Stewart introduces Youssef Tharwat, founder of NoodlBox, building context management tools for programming
05:00 Context as relevant information for reasoning; importance when hitting coding barriers
10:00 Knowledge graphs enable semantic traversal through meaning vs keywords/files
15:00 Deterministic vs probabilistic systems; why critical applications need 100% reliability
20:00 CLI tool makes knowledge graphs portable, versionable artifacts with code repos
25:00 Compiler front-ends, syntax trees, and Rust's superior feedback for AI-assisted coding
30:00 Claude's Bun acquisition signals potential shift toward runtime compilation and graph-based context
35:00 Open source vs proprietary models; user frustration with rate limits and subscription tactics
40:00 Singularity path vs distributed sovereignty of developers building alternative architectures
45:00 Global economics and why brute force compute isn't sustainable worldwide
50:00 Corporate inefficiencies vs independent engineering; changing workplace dynamics
55:00 February open beta for NoodlBox.io; vision for new development tool standards

Key Insights

1. Context is semantic information that enables proper reasoning, and traditional LLM approaches miss the mark. Youssef defines context as the information you need to reason correctly about something. He argues that larger context windows don't scale because quality degrades with more input, similar to human cognitive limitations. This insight challenges the Silicon Valley approach of throwing more compute at the problem and suggests that semantic separation of information is more optimal than brute force methods.

2. Code naturally contains semantic boundaries that can be modeled into knowledge graphs without LLM intervention. Unlike other domains where knowledge graphs require complex labeling, code already has inherent relationships like function calls, imports, and dependencies. Youssef leverages these existing semantic structures to automatically build knowledge graphs, making his approach deterministic rather than probabilistic. This provides the reliability that software development has historically required.

3. Knowledge graphs can be made portable, versionable, and shareable as artifacts alongside code repositories. Youssef's vision treats context as a first-class citizen in version control, similar to how Git manages code. Each commit gets a knowledge graph snapshot, allowing developers to see conceptual changes over time and share semantic understanding with collaborators. This transforms context from an ephemeral concept into a concrete, manageable asset.

4. The dependency problem in modern development can be solved through pre-indexed knowledge graphs of popular packages.
Rather than agents struggling with outdated API documentation, Youssef pre-indexes popular npm packages into knowledge graphs that automatically integrate with developers' projects. This federated approach ensures agents understand exact APIs and current versions, eliminating common frustrations with deprecated methods and unclear documentation.

5. Rust provides superior feedback loops for AI-assisted programming due to its explicit compiler constraints. Youssef rebuilt his tool multiple times in different languages, ultimately settling on Rust because its picky compiler provides constant feedback to LLMs about subtle issues. This creates a natural quality control mechanism that helps AI generate more reliable code, making Rust an ideal candidate for AI-assisted development workflows.

6. The current AI landscape faces a fundamental tension between expensive centralized models and the need for global accessibility. The conversation reveals growing frustration with rate limiting and subscription costs from major providers like Claude and Google. Youssef believes something must fundamentally change because $200-300 monthly plans only serve a fraction of the world's developers, creating pressure for more efficient architectures and open alternatives.

7. Deterministic tooling built on semantic understanding may provide a competitive advantage against probabilistic AI monopolies. While big tech companies pursue brute force scaling with massive data centers, Youssef's approach suggests that clever architecture using existing semantic structures could level the playing field. This represents a broader philosophical divide between the "singularity" path of infinite compute and the "disagreeably autistic engineer" path of elegant solutions that work locally and affordably.
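The deterministic extraction described above, building a knowledge graph from semantic relationships already present in code, can be sketched with Python's standard ast module. This is a toy illustration of the general technique, not NoodlBox's actual pipeline (which works across languages via compiler front-ends); the sample source and the edge labels are invented.

```python
import ast

# Toy extraction of a code knowledge graph from relationships that are
# already explicit in source code: imports and function calls.
# Illustrative only -- not NoodlBox's implementation.

SAMPLE = """
import json

def load(path):
    with open(path) as f:
        return json.load(f)

def main():
    data = load("config.json")
    print(data)
"""

def build_graph(source: str) -> set:
    """Return (subject, relation, object) edges from imports and calls."""
    tree = ast.parse(source)
    edges = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                edges.add(("module", "imports", alias.name))
        elif isinstance(node, ast.FunctionDef):
            # Every call inside the function body becomes a "calls" edge.
            for inner in ast.walk(node):
                if isinstance(inner, ast.Call):
                    func = inner.func
                    if isinstance(func, ast.Name):
                        edges.add((node.name, "calls", func.id))
                    elif isinstance(func, ast.Attribute):
                        edges.add((node.name, "calls", ast.unparse(func)))
    return edges

graph = build_graph(SAMPLE)
for edge in sorted(graph):
    print(edge)
```

No LLM is involved at any step, which is the point: the same input always yields the same graph, and an agent can traverse edges like ("main", "calls", "load") by meaning rather than by keyword search over files.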
In this episode of the Crazy Wisdom Podcast, host Stewart Alsop sits down with Adrian Martinca, founder of the Arc of Dreams and the Open Doors movements, as well as Kids Dreams Matter, to explore how artificial intelligence is fundamentally reshaping human consciousness and family structures. Their conversation spans from the karmic lessons of our technological age to practical frameworks for protecting children from what Martinca calls the "AI flood" - examining how AI functions as an alien intelligence that has become the primary caregiver for children through 10.5 hours of daily screen exposure, and discussing Martinca's vision for inverting our relationship with technology through collective dreams and family-centered data management systems. For those interested in learning more about Martinca's work to reshape humanity's relationship with AI, visit opendoorsmovement.org.

Timestamps
00:00 Introduction to Adrian Martinca
00:17 The Future and Human Choice
02:03 Generational Trauma and Its Impact
05:19 Understanding Consciousness and Suffering
09:11 AI, Social Media, and Emotional Manipulation
20:03 The AI Nexus Point and National Security
31:13 The Librarian Analogy: Understanding AI's Role
39:28 The Arc: A Framework for Future Generations
47:57 Empowering Children in an AI-Driven World
57:15 Reclaiming Agency in the Age of AI

Key Insights

1. AI as Alien Intelligence, Not Artificial Intelligence: Martinca reframes AI as fundamentally alien rather than artificial, arguing that because it possesses knowledge no human could have (like knowing "every book in the library"), it should be treated as an immigrant that must be assimilated into society rather than governed. This alien intelligence already controls social media algorithms and is becoming the primary caregiver of children through 10.5 hours of daily screen time.

2. The AI Nexus Point as National Security Risk: Modern warfare has shifted to information-based attacks where hostile nations can deploy millions of fake accounts to manipulate AI algorithms, influencing how real citizens are targeted with content. This creates a vulnerability where foreign powers can break apart family units and exhaust populations without traditional military engagement, making people too tired and divided to resist.

3. Generational Trauma as the Foundation of Consciousness: Drawing from Kundalini philosophy, Martinca explains that the first layer of consciousness development begins with inherited generational trauma. Children absorb their parents' unresolved suffering unconsciously, creating patterns that shape their worldview. This makes families both the source of early wounds and the pathway to healing, as parents witness their trauma affecting those they love most.

4. The Choice Between Fear-Based and Love-Based Futures: Despite appearing chaotic, our current moment represents a critical choice point where humanity can collectively decide to function as a family. The fundamental choice underlying all decisions is alleviating suffering for our children and loved ones, but technology has created reference-based choices driven by doubt and fear rather than genuine human values.

5. Social Media's Scientific Method Problem: Current platforms use the scientific method to maximize engagement, but the only reliably measurable emotions through screens are doubt and fear because positive emotions like love and hope lead people to put their devices down and connect in person. This creates systems that systematically promote negative emotional states to maintain user attention and generate revenue.

6. The Arc of Dreams as Collective Vision: Martinca proposes a new data management system where families challenge children to envision their ideal future as heroes, collecting these dreams to create a unified vision for humanity.
This would shift from bureaucratic fund allocation to child-centered prioritization, using children's visions of reduced suffering to guide AI development and social policy.

7. Agency vs. Overwhelm in the Information Age: While some people develop agency through AI exposure and become more capable, many others experience information overload leading to inaction, confusion, depression, and even suicide. The key intervention is reframing dreams from material outcomes to states of being, helping children maintain their sense of self and agency rather than becoming passive consumers of algorithmic content.
Stewart Alsop interviews Tomas Yu, CEO and founder of Turn-On Financial Technologies, on this episode of the Crazy Wisdom Podcast. They explore how Yu's company is revolutionizing the closed-loop payment ecosystem by creating a universal float system that allows gift card credits to be used across multiple merchants rather than being locked to a single business like Starbucks. The conversation covers the complexities of fintech regulation, the differences between open and closed loop payment systems, and Yu's unique background that combines Korean martial arts discipline with Mexican polo culture. They also dive into Yu's passion for polo, discussing the intimate relationship between rider and horse, the sport's elitist tendencies in different regions, and his efforts to build polo communities from El Paso to New Mexico. Find Tomas on LinkedIn under Tommy (TJ) Alvarez.

Timestamps
00:00 Introduction to TurnOn Technologies
02:45 Understanding Float and Its Implications
05:45 Decentralized Gift Card System
08:39 Navigating the FinTech Landscape
11:19 The Role of Merchants and Consumers
14:15 Challenges in the Gift Card Market
17:26 The Future of Payment Systems
23:12 Understanding Payment Systems: Stripe and POS
26:47 Regulatory Landscape: KYC and AML in Payments
27:55 The Impact of Economic Conditions on Financial Systems
36:39 Transitioning from Industrial to Information Age Finance
38:18 Curiosity and Resourcefulness in the Information Age
45:09 Social Media and the Dynamics of Attention
46:26 From Restaurant to Polo: A Journey of Mentorship
49:50 The Thrill of Polo: Learning and Obsession
54:53 Building a Team: Breaking Elitism in Polo
01:00:29 The Unique Bond: Understanding the Horse-Rider Relationship
01:05:21 Polo Horses: Choosing the Right Breed for the Game

Key Insights

1. Turn-On Technologies is revolutionizing payment systems through behavioral finance by creating a decentralized "float" system.
Unlike traditional gift cards that lock customers into single merchants like Starbucks, Turn-On allows universal credit that works across their entire merchant ecosystem. This addresses the massive gift card market where companies like Starbucks hold billions in customer funds that can only be used at their locations.

2. The financial industry operates on an exclusionary "closed loop" versus "open loop" system that creates significant friction and fees. Closed loop systems keep money within specific ecosystems without conversion to cash, while open loop systems allow cash withdrawal but trigger heavy regulation. Every transaction through traditional payment processors like Stripe can cost merchants 3-8% in fees, representing a massive burden on businesses.

3. Point-of-sale systems function as the financial bloodstream and credit scoring mechanism for businesses. These systems track all card transactions and serve as the primary data source for merchant lending decisions. The gap between POS records and bank deposits reveals cash transactions that businesses may not be reporting, making POS data crucial for assessing business creditworthiness and loan risk.

4. Traditional FinTech professionals often miss obvious opportunities due to ego and institutional thinking. Yu encountered resistance from established FinTech experts who initially dismissed his gift card-focused approach, despite the trillion-dollar market size. The financial industry's complexity is sometimes artificially maintained to exclude outsiders rather than serve genuine regulatory purposes.

5. The information age is creating a fundamental divide between curious, resourceful individuals and those stuck in credentialist systems. With AI and LLMs amplifying human capability, people who ask the right questions and maintain curiosity will become exponentially more effective.
Meanwhile, those relying on traditional credentials without underlying curiosity will fall further behind, creating unprecedented economic and social divergence.

6. Polo serves as a powerful business metaphor and relationship-building tool that mirrors modern entrepreneurial challenges. Like mixed martial arts evolved from testing individual disciplines, business success now requires being competent across multiple areas rather than excelling in just one specialty. The sport also creates unique networking opportunities and teaches valuable lessons about partnership between human and animal.

7. International financial systems reveal how governments use complexity and capital controls to maintain power over citizens. Yu's observations about Argentina's financial restrictions and the prevalence of cash economies in Latin America illustrate how regulatory complexity often serves political rather than protective purposes, creating opportunities for alternative financial systems that provide genuine value to users.
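The universal float idea discussed in this episode, gift-card-style credit that stays inside a closed loop but is spendable at any merchant in the network rather than one brand, can be sketched as a simple ledger. This is purely illustrative: the class, the merchant names, and the rules are invented and say nothing about Turn-On's actual system or its regulatory treatment.

```python
# Toy closed-loop ledger: customer credit is spendable at ANY merchant
# in the network, but can never be withdrawn as cash (the "closed loop").
# Illustrative sketch only; all names are invented.

class ClosedLoopLedger:
    def __init__(self):
        self.customer_credit = {}   # customer -> unspent balance (the float)
        self.merchant_revenue = {}  # merchant -> amount earned

    def load_credit(self, customer: str, amount: float) -> None:
        """Customer buys credit; the network, not one merchant, holds the float."""
        self.customer_credit[customer] = self.customer_credit.get(customer, 0.0) + amount

    def spend(self, customer: str, merchant: str, amount: float) -> bool:
        """Spend credit at any participating merchant (unlike a single-brand card)."""
        if self.customer_credit.get(customer, 0.0) < amount:
            return False
        self.customer_credit[customer] -= amount
        self.merchant_revenue[merchant] = self.merchant_revenue.get(merchant, 0.0) + amount
        return True

    def total_float(self) -> float:
        """Unspent credit the network is currently holding."""
        return sum(self.customer_credit.values())

ledger = ClosedLoopLedger()
ledger.load_credit("alice", 50.0)
ledger.spend("alice", "coffee_shop", 20.0)
ledger.spend("alice", "bookstore", 10.0)   # same credit, different merchant
print(ledger.total_float())
```

The contrast with a single-brand gift card is that the spend method accepts any merchant key; the float (unspent credit) sits with the network rather than being locked to whichever brand sold the card.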
In this episode of the Crazy Wisdom podcast, host Stewart Alsop sits down with Dima Zhelezov, a philosopher at SQD.ai, to explore the fascinating intersections of cryptocurrency, AI, quantum physics, and the future of human knowledge. The conversation covers everything from Zhelezov's work building decentralized data lakes for blockchain data to deep philosophical questions about the nature of mathematical beauty, the Renaissance ideal of curiosity-driven learning, and whether AI agents will eventually develop their own form of consciousness. Stewart and Dima examine how permissionless databases are making certain activities "unenforceable" rather than illegal, the paradox of mathematics' incredible accuracy in describing the physical world, and why we may be entering a new Renaissance era where curiosity becomes humanity's most valuable skill as AI handles traditional tasks.

You can find more about Dima's work at SQD.ai and follow him on X at @dizhel.

Timestamps
00:00 Introduction to Decentralized Data Lakes
02:55 The Evolution of Blockchain Data Management
05:55 The Intersection of Blockchain and Traditional Databases
08:43 The Role of AI in Transparency and Control
11:51 AI Autonomy and Human Interaction
15:05 Curiosity in the Age of AI
17:54 The Renaissance of Knowledge and Learning
20:49 Mathematics, Beauty, and Discovery
27:30 The Evolution of Mathematical Thought
30:28 Quantum Mechanics and Mathematical Predictions
33:43 The Search for a Unified Theory
38:57 The Role of Gravity in Physics
41:23 The Shift from Physics to Biology
46:19 The Future of Human Interaction in a Digital Age

Key Insights

1. Blockchain as a Permissionless Database Solution - Traditional blockchains were designed for writing transactions but not efficiently reading data. Dima's company SQD.ai built a decentralized data lake that maintains blockchain's key properties (open read/write access, verifiable, no registration required) while solving the database problem.
This enables applications like Polymarket to exist because there's "no one to subpoena" - the permissionless nature makes enforcement impossible even when activities might be regulated in traditional systems.

2. The Convergence of On-Chain and Off-Chain Data - The future won't have distinct "blockchain applications" versus traditional apps. Instead, we'll see seamless integration where users don't even know they're using blockchain technology. The key differentiator is that blockchain provides open read and write access without permission, which becomes essential when touching financial or politically sensitive applications that governments might try to shut down through traditional centralized infrastructure.

3. AI Autonomy and the Illusion of Control - We're rapidly approaching full autonomy of AI agents that can transact and analyze information independently through blockchain infrastructure. While humans still think anthropocentrically about AI as companions or tools, these systems may develop consciousness or motivations completely alien to human understanding. This creates a dangerous "illusion of control" where we can operationalize AI systems without truly comprehending their decision-making processes.

4. Curiosity as the Essential Future Skill - In a world of infinite knowledge and AI capabilities, curiosity becomes the primary limiting factor for human progress. Traditional hard and soft skills will be outsourced to AI, making the ability to ask good questions and pursue interests through Socratic dialogue with AI the most valuable human capacity. This mirrors the Renaissance ideal of the polymath, now enabled by AI that allows non-linear exploration of knowledge rather than traditional linear textbook learning.

5. The Beauty Principle in Mathematical Discovery - Mathematics exhibits an "unreasonable effectiveness" where theories developed purely abstractly turn out to predict real-world phenomena with extraordinary accuracy.
Quantum chromodynamics, developed through mathematical beauty and elegance, can predict particle physics experiments to incredible precision. This suggests either mathematical truths exist independently for AI to discover, or that aesthetic principles may be fundamental organizing forces in the universe.

6. The Physics Plateau and Biological Shift - Modern physics faces a unique problem where the Standard Model works too well - it explains everything we can currently measure except gravity, but we can't create experiments to test the edge cases where the theory should break down. This has led to a decline in physics prominence since the 1960s, with scientific excitement shifting toward biology and, now, AI and crypto, where breakthrough discoveries remain accessible.

7. Two Divergent Futures: Abundance vs. Dystopia - We face a stark choice between two AI futures: a super-abundant world where AI eliminates scarcity and humans pursue curiosity, beauty, and genuine connection; or a dystopian scenario where 0.01% capture all AI-generated value while everyone else survives on UBI, becoming "degraded to zombies" providing content for AI models. The outcome depends on whether we prioritize human flourishing or power concentration during this critical technological transition.
In this episode of the Crazy Wisdom podcast, host Stewart Alsop welcomes Roni Burd, a data and AI executive with extensive experience at Amazon and Microsoft, for a deep dive into the evolving landscape of data management and artificial intelligence in enterprise environments. Their conversation explores the longstanding challenges organizations face with knowledge management and data architecture, from the traditional bronze-silver-gold data processing pipeline to how AI agents are revolutionizing how people interact with organizational data without needing SQL or Python expertise. Burd shares insights on the economics of AI implementation at scale, the debate between one-size-fits-all models versus specialized fine-tuned solutions, and the technical constraints that prevent companies like Apple from upgrading services like Siri to modern LLM capabilities. He also discusses the future of inference optimization and the hundreds-of-millions-of-dollars cost barrier that makes architectural experimentation in AI uniquely expensive compared to other industries.

Timestamps
00:00 Introduction to Data and AI Challenges
03:08 The Evolution of Data Management
05:54 Understanding Data Quality and Metadata
08:57 The Role of AI in Data Cleaning
11:50 Knowledge Management in Large Organizations
14:55 The Future of AI and LLMs
17:59 Economics of AI Implementation
29:14 The Importance of LLMs for Major Tech Companies
32:00 Open Source: Opportunities and Challenges
35:19 The Future of AI Inference and Hardware
43:24 Optimizing Inference: The Next Frontier
49:23 The Commercial Viability of AI Models

Key Insights
1. Data Architecture Evolution: The industry has evolved through bronze-silver-gold data layers, where bronze is raw data, silver is cleaned/processed data, and gold is business-ready datasets. However, this creates bottlenecks as stakeholders lose access to original data during the cleaning process, making metadata and data cataloging increasingly critical for organizations.
2. AI Democratizing Data Access: LLMs are breaking down technical barriers by allowing business users to query data in plain English without needing SQL, Python, or dashboarding skills. This represents a fundamental shift from requiring intermediaries to direct stakeholder access, though the full implications remain speculative.
3. Economics Drive AI Architecture Decisions: Token costs and latency requirements are major factors determining AI implementation. Companies like Meta likely need their own models because paying per-token for billions of social media interactions would be economically unfeasible, driving the need for self-hosted solutions.
4. One Model Won't Rule Them All: Despite initial hopes for universal models, the reality points toward specialized models for different use cases. This is driven by economics (smaller models for simple tasks), performance requirements (millisecond response times), and industry-specific needs (medical, military terminology).
5. Inference is the Commercial Battleground: The majority of commercial AI value lies in inference rather than training. Current GPUs, while specialized for graphics and matrix operations, may still be too general for optimal inference performance, creating opportunities for even more specialized hardware.
6. Open Source vs Open Weights Distinction: True open source in AI means access to architecture for debugging and modification, while "open weights" enables fine-tuning and customization. This distinction is crucial for enterprise adoption, as open weights provide the flexibility companies need without starting from scratch.
7. Architecture Innovation Faces Expensive Testing Loops: Unlike database optimization where query plans can be easily modified, testing new AI architectures requires expensive retraining cycles costing hundreds of millions of dollars. This creates a potential innovation bottleneck, similar to aerospace industries where testing new designs is prohibitively expensive.
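The bronze-silver-gold layering described in the first insight can be sketched as a tiny pipeline. This is a minimal illustration of the pattern, not anything from the episode; the field names, cleaning rules, and aggregate are all invented for the example.

```python
# Minimal sketch of a bronze/silver/gold data pipeline.
# Field names and cleaning rules are hypothetical illustrations.

raw_events = [  # bronze: raw data, kept exactly as ingested
    {"user": " Alice ", "amount": "19.99", "ts": "2024-01-03"},
    {"user": "bob", "amount": "oops", "ts": "2024-01-03"},
    {"user": "Alice", "amount": "5.00", "ts": "2024-01-04"},
]

def to_silver(rows):
    """Silver: cleaned, typed records; invalid rows are dropped."""
    out = []
    for r in rows:
        try:
            out.append({
                "user": r["user"].strip().lower(),
                "amount": float(r["amount"]),
                "ts": r["ts"],
            })
        except ValueError:
            # This silent drop is exactly where stakeholders lose access
            # to the original data - the bottleneck the episode describes.
            continue
    return out

def to_gold(rows):
    """Gold: a business-ready aggregate (total amount per user)."""
    totals = {}
    for r in rows:
        totals[r["user"]] = totals.get(r["user"], 0.0) + r["amount"]
    return totals

silver = to_silver(raw_events)
gold = to_gold(silver)
print(gold)
```

Note how the malformed "bob" row vanishes between bronze and silver with no trace; metadata and cataloging exist precisely to make such drops discoverable downstream.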
In this episode of the Crazy Wisdom podcast, host Stewart Alsop sits down with Kelvin Lwin for their second conversation exploring the fascinating intersection of AI and Buddhist cosmology. Lwin brings his unique perspective as both a technologist with deep Silicon Valley experience and a serious meditation practitioner who's spent decades studying Buddhist philosophy. Together, they examine how AI development fits into ancient spiritual prophecies, discuss the dangerous allure of LLMs as potentially "asura weapons" that can mislead users, and explore verification methods for enlightenment claims in our modern digital age. The conversation ranges from technical discussions about the need for better AI compilers and world models to profound questions about humanity's role in what Lwin sees as an inevitable technological crucible that will determine our collective spiritual evolution. For more information about Kelvin's work on attention training and AI, visit his website at alin.ai. You can also join Kelvin for live meditation sessions twice daily on Clubhouse at clubhouse.com/house/neowise.

Timestamps
00:00 Exploring AI and Spirituality
05:56 The Quest for Enlightenment Verification
11:58 AI's Impact on Spirituality and Reality
17:51 The 500-Year Prophecy of Buddhism
23:36 The Future of AI and Business Innovation
32:15 Exploring Language and Communication
34:54 Programming Languages and Human Interaction
36:23 AI and the Crucible of Change
39:20 World Models and Physical AI
41:27 The Role of Ontologies in AI
44:25 The Asura and Deva: A Battle for Supremacy
48:15 The Future of Humanity and AI
51:08 Persuasion and the Power of LLMs
55:29 Navigating the New Age of Technology

Key Insights
1. The Rarity of Polymath AI-Spirituality Perspectives: Kelvin argues that very few people are approaching AI through spiritual frameworks because it requires being a polymath with deep knowledge across multiple domains. Most people specialize in one field, and combining AI expertise with Buddhist cosmology requires significant time, resources, and academic background that few possess.
2. Traditional Enlightenment Verification vs. Modern Claims: There are established methods for verifying enlightenment claims in Buddhist traditions, including adherence to the five precepts and overcoming hell rebirth through karmic resolution. Many modern Western practitioners claiming enlightenment fail these traditional tests, often changing the criteria when they can't meet the original requirements.
3. The 500-Year Buddhist Prophecy and Current Timing: We are approximately 60 years into a prophesied 500-year period where enlightenment becomes possible again. This "startup phase of Buddhism revival" coincides with technological developments like the internet and AI, which are seen as integral to this spiritual renaissance rather than obstacles to it.
4. LLMs as UI Solution, Not Reasoning Engine: While LLMs have solved the user interface problem of capturing human intent, they fundamentally cannot reason or make decisions due to their token-based architecture. The technology works well enough to create an illusion of capability, leading people down an asymptotic path away from true solutions.
5. The Need for New Programming Paradigms: Current AI development caters too much to human cognitive limitations through familiar programming structures. True advancement requires moving beyond human-readable code toward agent-generated languages that prioritize efficiency over human comprehension, similar to how compilers already translate high-level code.
6. AI as Asura Weapon in Spiritual Warfare: From a Buddhist cosmological perspective, AI represents an asura (demon-realm) tool that appears helpful but is fundamentally wasteful and disruptive to human consciousness. Humanity exists as the battleground between divine and demonic forces, with AI serving as a weapon that both sides employ in this cosmic conflict.
7. 2029 as Critical Convergence Point: Multiple technological and spiritual trends point toward 2029 as when various systems will reach breaking points, forcing humanity to either transcend current limitations or be consumed by them. This timing aligns with both technological development curves and spiritual prophecies about transformation periods.
In this episode of the Crazy Wisdom podcast, host Stewart Alsop sits down with Daniel Bar, co-founder of Space Computer, a satellite-based secure compute protocol that creates a "root of trust in space" using tamper-resistant hardware for cryptographic applications. The conversation explores the fascinating intersection of space technology, blockchain infrastructure, and trusted execution environments (TEEs), touching on everything from cosmic radiation-powered random number generators to the future of space-based data centers and Daniel's journey from quantum computing research to building what they envision as the next evolution beyond Ethereum's "world computer" concept. For more information about Space Computer, visit spacecomputer.io, and check out their new podcast "Frontier Pod" on the Space Computer YouTube channel.

Timestamps
00:00 Introduction to Space Computer
02:45 Understanding Layer 1 and Layer 2 in Space Computing
06:04 Trusted Execution Environments in Space
08:45 The Evolution of Trusted Execution Environments
11:59 The Role of Blockchain in Space Computing
14:54 Incentivizing Satellite Deployment
17:48 The Future of Space Computing and Its Applications
20:58 Radiation Hardening and Space Environment Challenges
23:45 Kardashev Civilizations and the Future of Energy
26:34 Quantum Computing and Its Implications
29:49 The Intersection of Quantum and Crypto
32:26 The Future of Space Computer and Its Vision

Key Insights
1. Space-based data centers solve the physical security problem for Trusted Execution Environments (TEEs). While TEEs provide secure compute through physical isolation, they remain vulnerable to attacks requiring physical access - like electron microscope forensics to extract secrets from chips. By placing TEEs in space, these attack vectors become practically impossible, creating the highest possible security guarantees for cryptographic applications.
2. The space computer architecture uses a hybrid layer approach with space-based settlement and earth-based compute. The layer 1 blockchain operates in space as a settlement layer and smart contract platform, while layer 2 solutions on earth provide high-performance compute. This design leverages space's security advantages while compensating for the bandwidth and compute constraints of orbital infrastructure through terrestrial augmentation.
3. True randomness generation becomes possible through cosmic radiation harvesting. Unlike pseudo-random number generators used in most blockchain applications today, space-based systems can harvest cosmic radiation as a genuinely stochastic process. This provides pure randomness critical for cryptographic applications like block producer selection, eliminating the predictability issues that compromise security in earth-based random number generation.
4. Space compute migration is inevitable as humanity advances toward Kardashev Type 1 civilization. The progression toward planetary-scale energy control requires space-based infrastructure including solar collection, orbital cities, and distributed compute networks. This technological evolution makes space-based data centers not just viable but necessary for supporting the scale of computation required for advanced civilization development.
5. The optimal use case for space compute is high-security applications rather than general data processing. While space-based data centers face significant constraints including 40kg of peripheral infrastructure per kg of compute, maintenance impossibility, and 5-year operational lifespans, these limitations become acceptable when the application requires maximum security guarantees that only space-based isolation can provide.
6. Space computer will evolve from centralized early-stage operation to a decentralized satellite constellation. Similar to early Ethereum's foundation-operated nodes, space computer currently runs trusted operations but aims to enable public participation through satellite ownership stakes. Future participants could fractionally own satellites providing secure compute services, creating economic incentives similar to Bitcoin mining pools or Ethereum staking.
7. Blockchain represents a unique compute platform that meshes hardware, software, and free market activity. Unlike traditional computers with discrete inputs and outputs, blockchain creates an organism where market participants provide inputs through trading, lending, and other economic activities, while the distributed network processes and returns value through the same market mechanisms, creating a cyborg-like integration of technology and economics.
In this episode of the Crazy Wisdom podcast, host Stewart Alsop sits down with Peter Schmidt Nielsen, who is building FPGA-accelerated servers at Saturn Data. The conversation explores why servers need FPGAs, how these field-programmable gate arrays work as "IO expanders" for massive memory bandwidth, and why they're particularly well-suited for vector database and search applications. Peter breaks down the technical realities of FPGAs - including why they "really suck" in many ways compared to GPUs and CPUs - while explaining how his company is leveraging them to provide terabyte-per-second bandwidth to 1.3 petabytes of flash storage. The discussion ranges from distributed systems challenges and the CAP theorem to the hardware-software relationship in modern computing, offering insights into both the philosophical aspects of search technology and the nuts-and-bolts engineering of memory controllers and routing fabrics. For more information about Peter's work, you can reach him on Twitter at @PTRSCHMDTNLSN or find his website at saturndata.com.

Timestamps
00:00 Introduction to FPGAs and Their Role in Servers
02:47 Understanding FPGA Limitations and Use Cases
05:55 Exploring Different Types of Servers
08:47 The Importance of Memory and Bandwidth
11:52 Philosophical Insights on Search and Access Patterns
14:50 The Relationship Between Hardware and Search Queries
17:45 Challenges of Distributed Systems
20:47 The CAP Theorem and Its Implications
23:52 The Evolution of Technology and Knowledge Management
26:59 FPGAs as IO Expanders
29:35 The Trade-offs of FPGAs vs. ASICs and GPUs
32:55 The Future of AI Applications with FPGAs
35:51 Exciting Developments in Hardware and Business

Key Insights
1. FPGAs are fundamentally "crappy ASICs" with serious limitations - Despite being programmable hardware, FPGAs perform far worse than general-purpose alternatives in most cases. A $100,000 high-end FPGA might only match the memory bandwidth of a $600 gaming GPU. They're only valuable for specific niches like ultra-low latency applications or scenarios requiring massive parallel I/O operations, making them unsuitable for most computational workloads where CPUs and GPUs excel.
2. The real value of FPGAs lies in I/O expansion, not computation - Rather than using FPGAs for their processing power, Saturn Data leverages them primarily as cost-effective ways to access massive amounts of DRAM controllers and NVMe interfaces. Their server design puts 200 FPGAs in a 2U enclosure with 1.3 petabytes of flash storage and terabyte-per-second read bandwidth, essentially using FPGAs as sophisticated I/O expanders.
3. Access patterns determine hardware performance more than raw specs - The way applications access data fundamentally determines whether specialized hardware will provide benefits. Applications that do sparse reads across massive datasets (like vector databases) benefit from Saturn Data's architecture, while those requiring dense computation or frequent inter-node communication are better served by traditional hardware. Understanding these patterns is crucial for matching workloads to appropriate hardware.
4. Distributed systems complexity stems from failure tolerance requirements - The difficulty of distributed systems isn't inherent but depends on what failures you need to tolerate. Simple approaches that restart on any failure are easy but unreliable, while Byzantine fault tolerance (like Bitcoin) is extremely complex. Most practical systems, including banks, find middle ground by accepting occasional unavailability rather than trying to achieve perfect consistency, availability, and partition tolerance simultaneously.
5. Hardware specialization follows predictable cycles of generalization and re-specialization - Computing hardware consistently follows "Makimoto's Wave" - specialized hardware becomes more general over time, then gets leapfrogged by new specialized solutions. CPUs became general-purpose, GPUs evolved from fixed graphics pipelines to programmable compute, and now companies like Etched are creating transformer-specific ASICs. This cycle repeats as each generation adds programmability until someone strips it away for performance gains.
6. Memory bottlenecks are reshaping the hardware landscape - The AI boom has created severe memory shortages, doubling costs for DRAM components overnight. This affects not just GPU availability but creates opportunities for alternative architectures. When everyone faces higher memory costs, the relative premium for specialized solutions like FPGA-based systems becomes more attractive, potentially shifting the competitive landscape for memory-intensive applications.
7. Search applications represent ideal FPGA use cases due to their sparse access patterns - Vector databases and search workloads are particularly well-suited to FPGA acceleration because they involve searching through massive datasets with sparse access patterns rather than dense computation. These applications can effectively utilize the high bandwidth to flash storage and parallel I/O capabilities that FPGAs provide, making them natural early adopters for this type of specialized hardware architecture.
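The headline figures quoted above (200 FPGAs per 2U enclosure, 1.3 petabytes of flash, terabyte-per-second read bandwidth) imply some useful per-device numbers. This is just back-of-the-envelope arithmetic on the specs as stated in the episode, not additional data from Saturn Data:

```python
# Back-of-the-envelope arithmetic on the server specs quoted in the episode.
fpgas = 200                     # FPGAs per 2U enclosure
flash_bytes = 1.3e15            # 1.3 petabytes of flash
read_bw = 1e12                  # 1 terabyte/second aggregate read bandwidth

per_fpga_flash = flash_bytes / fpgas       # flash managed per FPGA
per_fpga_bw = read_bw / fpgas              # read bandwidth per FPGA
full_scan_seconds = flash_bytes / read_bw  # time to stream the whole corpus once

print(f"{per_fpga_flash / 1e12:.1f} TB of flash per FPGA")
print(f"{per_fpga_bw / 1e9:.1f} GB/s of read bandwidth per FPGA")
print(f"{full_scan_seconds / 60:.1f} minutes for one full scan")
```

A full linear scan takes on the order of twenty minutes, which is why the sparse access patterns of vector search, rather than brute-force scans, are the workload this architecture targets.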
In this episode of the Crazy Wisdom Podcast, host Stewart Alsop interviews Aurelio Gialluca, an economist and full stack data professional who works across finance, retail, and AI as both a data engineer and machine learning developer, and who also explores human consciousness and psychology. Their wide-ranging conversation covers the intersection of science and psychology, the unique cultural characteristics that make Argentina a haven for eccentrics (drawing parallels to the United States), and how Argentine culture has produced globally influential figures from Borges to Maradona to Che Guevara. They explore the current AI landscape as a "centralizing force" creating cultural homogenization (particularly evident in LinkedIn's cookie-cutter content), discuss the potential futures of AI development from dystopian surveillance states to anarchic chaos, and examine how Argentina's emotionally mature, non-linear communication style might offer insights for navigating technological change. The conversation concludes with Gialluca describing his ambitious project to build a custom water-cooled workstation with industrial-grade processors for his quantitative hedge fund, highlighting the practical challenges of heat management and the recent tripling of RAM prices due to market consolidation.

Timestamps
00:00 Exploring the Intersection of Psychology and Science
02:55 Cultural Eccentricity: Argentina vs. the United States
05:36 The Influence of Religion on National Identity
08:50 The Unique Argentine Cultural Landscape
11:49 Soft Power and Cultural Influence
14:48 Political Figures and Their Cultural Impact
17:50 The Role of Sports in Shaping National Identity
20:49 The Evolution of Argentine Music and Subcultures
23:41 AI and the Future of Cultural Dynamics
26:47 Navigating the Chaos of AI in Culture
33:50 Equilibrating Society for a Sustainable Future
35:10 The Patchwork Age: Decentralization and Society
35:56 The Impact of AI on Human Connection
38:06 Individualism vs. Collective Rules in Society
39:26 The Future of AI and Global Regulations
40:16 Biotechnology: The Next Frontier
42:19 Building a Personal AI Lab
45:51 Tiers of AI Labs: From Personal to Industrial
48:35 Mathematics and AI: The Foundation of Innovation
52:12 Stochastic Models and Predictive Analytics
55:47 Building a Supercomputer: Hardware Insights

Key Insights
1. Argentina's Cultural Exceptionalism and Emotional Maturity: Argentina stands out globally for allowing eccentrics to flourish and having a non-linear communication style that Gialluca describes as "non-monotonous systems." Argentines can joke profoundly and be eccentric while simultaneously being completely organized and straightforward, demonstrating high emotional intelligence and maturity that comes from their unique cultural blend of European romanticism and Latino lightheartedness.
2. Argentina as an Underrecognized Cultural Superpower: Despite being introverted about their achievements, Argentina produces an enormous amount of global culture through music, literature, and iconic figures like Borges, Maradona, Messi, and Che Guevara. These cultural exports have shaped entire generations worldwide, with Argentina "stealing the thunder" from other nations and creating lasting soft power influence that people don't fully recognize as Argentine.
3. AI's Cultural Impact Follows Oscillating Patterns: Culture operates as a dynamic system that oscillates between centralization and decentralization like a sine wave. AI currently represents a massive centralizing force, as seen in LinkedIn's homogenized content, but this will inevitably trigger a decentralization phase. The speed of this cultural transformation has accelerated dramatically, with changes that once took generations now happening in years.
4. The Coming Bifurcation of AI Futures: Gialluca identifies two extreme possible endpoints for AI development: complete centralized control (the "Mordor" scenario with total surveillance) or complete chaos where everyone has access to dangerous capabilities like creating weapons or viruses. Finding a middle path between these extremes is essential for society's survival, requiring careful equilibrium between accessibility and safety.
5. Individual AI Labs Are Becoming Democratically Accessible: Gialluca outlines a tier system for AI capabilities, where individuals can now build "tier one" labs capable of fine-tuning models and processing massive datasets for tens of thousands of dollars. This democratization means that capabilities once requiring teams of PhD scientists can now be achieved by dedicated individuals, fundamentally changing the landscape of AI development and access.
6. Hardware Constraints Are the New Limiting Factor: While AI capabilities are rapidly advancing, practical implementation is increasingly constrained by hardware availability and cost. RAM prices have tripled in recent months, and the challenge of managing enormous heat output from powerful processors requires sophisticated cooling systems. These physical limitations are becoming the primary bottleneck for individual AI development.
7. Data Quality Over Quantity Is the Critical Challenge: The main bottleneck for AI advancement is no longer energy or GPUs, but high-quality data for training. Early data labeling efforts produced poor results because labelers lacked domain expertise. The future lies in reinforcement learning (RL) environments where AI systems can generate their own high-quality training data, representing a fundamental shift in how AI systems learn and develop.
In this episode of the Crazy Wisdom podcast, host Stewart Alsop sits down with Josh Halliday, who works on training super intelligence with frontier data at Turing. The conversation explores the fascinating world of reinforcement learning (RL) environments, synthetic data generation, and the crucial role of high-quality human expertise in AI training. Josh shares insights from his years working at Unity Technologies building simulated environments for everything from oil and gas safety scenarios to space debris detection, and discusses how the field has evolved from quantity-focused data collection to specialized, expert-verified training data that's becoming the key bottleneck in AI development. They also touch on the philosophical implications of our increasing dependence on AI technology and the emerging job market around AI training and data acquisition.

Timestamps
00:00 Introduction to AI and Reinforcement Learning
03:12 The Evolution of AI Training Data
05:59 Gaming Engines and AI Development
08:51 Virtual Reality and Robotics Training
11:52 The Future of Robotics and AI Collaboration
14:55 Building Applications with AI Tools
17:57 The Philosophical Implications of AI
20:49 Real-World Workflows and RL Environments
26:35 The Impact of Technology on Human Cognition
28:36 Cultural Resistance to AI and Data Collection
31:12 The Bottleneck of High-Quality Data in AI
32:57 Philosophical Perspectives on Data
35:43 The Future of AI Training and Human Collaboration
39:09 The Role of Subject Matter Experts in Data Quality
43:20 The Evolution of Work in the Age of AI
46:48 Convergence of AI and Human Experience

Key Insights
1. Reinforcement Learning environments are sophisticated simulations that replicate real-world enterprise workflows and applications. These environments serve as training grounds for AI agents by creating detailed replicas of tools like Salesforce, complete with specific tasks and verification systems. The agent attempts tasks, receives feedback on failures, and iterates until achieving consistent success rates, effectively learning through trial and error in a controlled digital environment.
2. Gaming engines like Unity have evolved into powerful platforms for generating synthetic training data across diverse industries. From oil and gas companies needing hazardous scenario data to space intelligence firms tracking orbital debris, these real-time 3D engines with advanced physics can create high-fidelity simulations that capture edge cases too dangerous or expensive to collect in reality, bridging the gap where real-world data falls short.
3. The bottleneck in AI development has fundamentally shifted from data quantity to data quality. The industry has completely reversed course from the previous "scale at all costs" approach to focusing intensively on smaller, higher-quality datasets curated by subject matter experts. This represents a philosophical pivot toward precision over volume in training next-generation AI systems.
4. Remote teleoperation through VR is creating a new global workforce for robotics training. Workers wearing VR headsets can remotely control humanoid robots across the globe, teaching them tasks through direct demonstration. This creates opportunities for distributed talent while generating the nuanced human behavioral data needed to train autonomous systems.
5. Human expertise remains irreplaceable in the AI training pipeline despite advancing automation. Subject matter experts provide crucial qualitative insights that go beyond binary evaluations, offering the contextual "why" and "how" that transforms raw data into meaningful training material. The challenge lies in identifying, retaining, and properly incentivizing these specialists as demand intensifies.
6. First-person perspective data collection represents the frontier of human-like AI training. Companies are now paying people to life-log their daily experiences, capturing petabytes of egocentric data to train models more similarly to how human children learn through constant environmental observation, rather than traditional batch-processing approaches.
7. The convergence of simulation, robotics, and AI is creating unprecedented philosophical and practical challenges. As synthetic worlds become indistinguishable from reality and AI agents gain autonomy, we're entering a phase where the boundaries between digital and physical, human and artificial intelligence, become increasingly blurred, requiring careful consideration of dependency, agency, and the preservation of human capabilities.
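The attempt-verify-iterate loop from the first insight can be sketched as a toy environment. Everything here is invented for illustration (the task, the binary verifier, and the trivial "agent" policy); it bears no relation to Turing's actual tooling, but it shows the shape of the loop: an agent acts, a verifier returns pass/fail, and successful behavior is reinforced until the success rate is consistent.

```python
import random

# Toy sketch of an RL-environment training loop. The task (pick a hidden
# "correct" field value) and the agent's policy are hypothetical examples.
random.seed(0)

TARGET = "closed-won"                      # hidden value the verifier accepts
CHOICES = ["open", "closed-lost", "closed-won"]

def agent_attempt(preference):
    """A trivially 'learning' agent: mostly exploits its current preference,
    with a little random exploration."""
    if random.random() < 0.2:
        return random.choice(CHOICES)
    return preference

def verify(attempt):
    """Binary verifier, mirroring the pass/fail feedback in the episode."""
    return attempt == TARGET

preference = random.choice(CHOICES)        # start from a random policy
for step in range(1000):                   # trial-and-error iterations
    attempt = agent_attempt(preference)
    if verify(attempt):
        preference = attempt               # reinforce whatever passed

# Measure the resulting success rate over 100 fresh attempts.
successes = sum(verify(agent_attempt(preference)) for _ in range(100))
print(f"final preference: {preference}, success rate: {successes}/100")
```

Real RL environments replace the one-line verifier with full application replicas and task-specific checks, but the control flow (attempt, verify, reinforce, repeat) is the same.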
In this episode of the Crazy Wisdom podcast, host Stewart Alsop interviews Marcin Dymczyk, CPO and co-founder of SevenSense Robotics, exploring the fascinating world of advanced robotics and AI. Their conversation covers the evolution from traditional "standard" robotics with predetermined pathways to advanced robotics that incorporates perception, reasoning, and adaptability - essentially the AGI of physical robotics. Dymczyk explains how his company builds "the eyes and brains of mobile robots" using camera-based autonomy algorithms, drawing parallels between robot sensing systems and human vision, inner ear balance, and proprioception. The discussion ranges from the technical challenges of sensor fusion and world models to broader topics including robotics regulation across different countries, the role of federalism in innovation, and how recent geopolitical changes are driving localized high-tech development, particularly in defense applications. They also touch on the democratization of robotics for small businesses and the philosophical implications of increasingly sophisticated AI systems operating in physical environments. To learn more about SevenSense, visit www.sevensense.ai.

Check out this GPT we trained on the conversation

Timestamps
00:00 Introduction to Robotics and Personal Journey
05:27 The Evolution of Robotics: From Standard to Advanced
09:56 The Future of Robotics: AI and Automation
12:09 The Role of Edge Computing in Robotics
17:40 FPGA and AI: The Future of Robotics Processing
21:54 Sensing the World: How Robots Perceive Their Environment
29:01 Learning from the Physical World: Insights from Robotics
33:21 The Intersection of Robotics and Manufacturing
35:01 Journey into Robotics: Education and Passion
36:41 Practical Robotics Projects for Beginners
39:06 Understanding Particle Filters in Robotics
40:37 World Models: The Future of AI and Robotics
41:51 The Black Box Dilemma in AI and Robotics
44:27 Safety and Interpretability in Autonomous Systems
49:16 Regulatory Challenges in Robotics and AI
51:19 Global Perspectives on Robotics Regulation
54:43 The Future of Robotics in Emerging Markets
57:38 The Role of Engineers in Modern Warfare

Key Insights
1. Advanced robotics transcends traditional programming through perception and intelligence. Dymczyk distinguishes between standard robotics that follows rigid, predefined pathways and advanced robotics that incorporates perception and reasoning. This evolution enables robots to make autonomous decisions about navigation and task execution, similar to how humans adapt to unexpected situations rather than following predetermined scripts.
2. Camera-based sensing systems mirror human biological navigation. SevenSense Robotics builds "eyes and brains" for mobile robots using multiple cameras (up to eight), IMUs (accelerometers/gyroscopes), and wheel encoders that parallel human vision, inner ear balance, and proprioception. This redundant sensing approach allows robots to navigate even when one system fails, such as operating in dark environments where visual sensors are compromised.
3. Edge computing dominates industrial robotics due to connectivity and security constraints. Many industrial applications operate in environments with poor connectivity (like underground grocery stores) or require on-premise solutions for confidentiality. This necessitates powerful local processing capabilities rather than cloud-dependent AI, particularly in automotive factories where data security about new models is paramount.
4. Safety regulations create mandatory "kill switches" that bypass AI decision-making. European and US regulatory bodies require deterministic safety systems that can instantly stop robots regardless of AI reasoning. These systems operate like human reflexes, providing immediate responses to obstacles while the main AI brain handles complex navigation and planning tasks.
5. Modern robotics development benefits from increasingly affordable optical sensors. The democratization of 3D cameras, laser range finders, and miniature range measurement chips (costing just a few dollars from distributors like DigiKey) enables rapid prototyping and innovation that was previously limited to well-funded research institutions.
6. Geopolitical shifts are driving localized high-tech development, particularly in defense applications. The changing role of US global leadership and lessons from Ukraine's drone warfare are motivating countries like Poland to develop indigenous robotics capabilities. Small engineering teams can now create battlefield-effective technology using consumer drones equipped with advanced sensors.
7. The future of robotics lies in natural language programming for non-experts. Dymczyk envisions a transformation where small business owners can instruct robots using conversational language rather than complex programming, similar to how AI coding assistants now enable non-programmers to build applications through natural language prompts.
In this episode of the Crazy Wisdom Podcast, host Stewart Alsop sits down with Mike Bakon to explore the fascinating intersection of hardware hacking, blockchain technology, and decentralized systems. Their conversation spans from Mike's childhood fascination with taking apart electronics in 1980s Poland to his current work with ESP32 microcontrollers, LoRa mesh networks, and Cardano blockchain development. They discuss the technical differences between UTXO and account-based blockchains, the challenges of true decentralization versus hybrid systems, and how AI tools are changing the development landscape. Mike shares his vision for incentivizing mesh networks through blockchain technology and explains why he believes mass adoption of decentralized systems will come through abstraction rather than technical education. The discussion also touches on the potential for creating new internet infrastructure using ad hoc mesh networks and the importance of maintaining truly decentralized, permissionless systems in an increasingly surveilled world. You can find Mike on Twitter as @anothervariable.

Check out this GPT we trained on the conversation

Timestamps
00:00 Introduction to Hardware and Early Experiences
02:59 The Evolution of AI in Hardware Development
05:56 Decentralization and Blockchain Technology
09:02 Understanding UTXO vs Account-Based Blockchains
11:59 Smart Contracts and Their Functionality
14:58 The Importance of Decentralization in Blockchain
17:59 The Process of Data Verification in Blockchain
20:48 The Future of Blockchain and Its Applications
34:38 Decentralization and Trustless Systems
37:42 Mainstream Adoption of Blockchain
39:58 The Role of Currency in Blockchain
43:27 Interoperability vs Bridging in Blockchain
47:27 Exploring Mesh Networks and LoRa Technology
01:00:25 The Future of AI and Decentralization

Key Insights
1. Hardware curiosity drives innovation from childhood - Mike's journey into hardware began as a child in 1980s Poland, where he would disassemble toys like battery-powered cars to understand how they worked. This natural curiosity about taking things apart laid the foundation for his later expertise in microcontrollers like the ESP32 and his deep understanding of hardware-software integration.
2. AI as a research companion, not a replacement for coding - Mike uses AI and LLMs primarily as research tools and coding companions rather than letting them write entire applications. He finds them invaluable for getting quick answers to coding problems, analyzing Git repositories, and avoiding searches through Stack Overflow, but he remains uneasy when AI writes whole functions, preferring to understand and write his own code.
3. Blockchain decentralization requires trustless consensus verification - The fundamental difference between blockchain databases and traditional databases lies in the consensus process that data must pass through before being recorded. Unlike centralized systems where one entity controls data validation, blockchains require hundreds of nodes to verify each block through trustless consensus mechanisms, ensuring data integrity without relying on any single authority.
4. UTXO and account-based blockchains have fundamentally different architectures - Cardano uses an extended UTXO model (like Bitcoin, but with smart contracts) in which transactions consume existing UTXOs and create new ones, keeping the ledger lean. Ethereum uses account-based ledgers that store persistent state, leading to much larger data requirements over time and making it increasingly difficult for individuals to sync and maintain full nodes independently.
5. True interoperability differs fundamentally from bridging - Real blockchain interoperability means sending assets directly between different blockchains (like sending ADA to a Bitcoin wallet) without intermediaries, which is possible between UTXO-based chains like Cardano and Bitcoin. Bridges, in contrast, require centralized entities to listen for transactions on one chain and trigger corresponding actions on another, introducing centralization risks.
6. Mesh networks need economic incentives for sustainable infrastructure - While technologies like LoRa and Meshtastic enable impressive decentralized communication networks, the challenge lies in incentivizing people to maintain the hardware. Mike sees potential in combining blockchain-based rewards (like earning ADA for running mesh network nodes) with existing decentralized communication protocols to create self-sustaining networks.
7. Mass adoption comes through abstraction, not education - Rather than trying to educate everyone about blockchain technology, mass adoption will happen when developers can build applications on decentralized infrastructure that users interact with seamlessly. Users should benefit from decentralization through well-designed interfaces that abstract away the complexity of wallets, addresses, and consensus mechanisms.
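The UTXO mechanics described in insight 4 (transactions consume existing outputs and create new ones, rather than mutating account balances) can be sketched in a few lines. This is a toy model under stated assumptions: no fees, no signatures, and none of the script/datum machinery that Cardano's extended UTXO model adds on top; the `UTXO` and `apply_transaction` names are illustrative, not from any real implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class UTXO:
    """An unspent transaction output: an amount locked to an owner."""
    tx_id: str
    index: int
    owner: str
    amount: int

def apply_transaction(utxo_set, inputs, outputs, tx_id):
    """Consume the input UTXOs and create one new UTXO per output.

    utxo_set: set of currently spendable UTXOs (the whole ledger state)
    inputs:   UTXOs being spent; must exist, which rules out double-spends
    outputs:  list of (owner, amount) pairs for the new UTXOs
    """
    if not set(inputs) <= utxo_set:
        raise ValueError("input is missing or already spent")
    if sum(u.amount for u in inputs) != sum(a for _, a in outputs):
        raise ValueError("inputs and outputs must balance")
    new = {UTXO(tx_id, i, owner, amt) for i, (owner, amt) in enumerate(outputs)}
    return (utxo_set - set(inputs)) | new

# Genesis: Alice holds a single 100-unit UTXO.
ledger = {UTXO("genesis", 0, "alice", 100)}
# Alice pays Bob 30 and sends 70 back to herself as change.
ledger = apply_transaction(ledger, list(ledger), [("bob", 30), ("alice", 70)], "tx1")

balances = {}
for u in ledger:
    balances[u.owner] = balances.get(u.owner, 0) + u.amount
print(balances)
```

Note that the ledger state after the transaction is just the surviving set of outputs; nothing about Alice's "account" persists, which is the leanness the episode contrasts with Ethereum's account model.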
In this episode of the Crazy Wisdom Podcast, host Stewart Alsop speaks with Aaron Borger, founder and CEO of Orbital Robotics, about the emerging world of space robotics and satellite capture technology. The conversation covers a fascinating range of topics, including Borger's early experience launching AI-controlled robotic arms to space as a student, his work at Blue Origin developing lunar lander software, and how his company is developing robots that can capture other spacecraft for refueling, repair, and debris removal. They discuss the technical challenges of operating in space, from radiation-hardening electronics to dealing with tumbling satellites, as well as the broader implications for the space economy: preventing the Kessler effect, building space-based recycling facilities, and mining lunar ice for rocket fuel. You can find more about Aaron Borger's work at Orbital Robots and follow him on LinkedIn for updates on upcoming missions and demos.

Check out this GPT we trained on the conversation

Timestamps
00:00 Introduction to orbital robotics, satellite capture, and why sensing and perception matter in space
05:00 The Kessler Effect, cascading collisions, and why space debris is an economic problem before it is an existential one
10:00 From debris removal to orbital recycling and the idea of turning junk into infrastructure
15:00 Long-term vision of space factories, lunar ice, and refueling satellites to bootstrap a lunar economy
20:00 Satellite upgrading, servicing live spacecraft, and expanding today's narrow space economy
25:00 Costs of collision avoidance, ISS maneuvers, and making debris capture economically viable
30:00 Early experiments with AI-controlled robotic arms, suborbital launches, and reinforcement learning in microgravity
35:00 Why deterministic AI and provable safety matter more than LLM hype for spacecraft control
40:00 Radiation, single event upsets, and designing space-safe AI systems with bounded behavior
45:00 AI, physics-based world models, and autonomy as the key to scaling space operations
50:00 Manufacturing constraints, space supply chains, and lessons from rocket engine software
55:00 The future of space startups, geopolitics, deterrence, and keeping space usable for humanity

Key Insights
1. Space Debris Removal as a Growing Economic Opportunity: Aaron Borger explains that orbital debris is becoming a critical problem, with approximately 3,000-4,000 defunct satellites among the 15,000 total satellites in orbit. Orbital Robotics is developing robotic arms and AI-controlled spacecraft to capture other satellites for refueling, repair, debris removal, and even space station assembly. The economic case is compelling: it costs about $1 million for the ISS to maneuver around debris, so if their spacecraft can capture and remove debris for less than that per piece, the service becomes financially viable while addressing the growing space junk problem.
2. Revolutionary AI Safety Methods Enable Space Robotics: Traditional NASA engineers have been reluctant to use AI for spacecraft control due to safety concerns, but Orbital Robotics has developed methods combining reinforcement learning with traditional control systems that can mathematically prove the AI will behave safely. Their approach uses physics-based world models rather than pure data-driven learning, ensuring deterministic behavior and bounded operations. This represents a significant advance over previous AI approaches that couldn't guarantee safe operation in the high-stakes environment of space.
3. Vision for Space-Based Manufacturing and Resource Utilization: The long-term vision extends beyond debris removal to orbital recycling facilities that break down captured satellites and rebuild them into new spacecraft using materials already in orbit. Additionally, the company plans to harvest propellant from lunar ice, splitting it into hydrogen and oxygen for rocket fuel, which could kickstart a lunar economy by providing economic incentives for moon-based operations while supporting the growing satellite constellation infrastructure.
4. Unique Space Technology Development Through Student Programs: Borger and his co-founder gained unprecedented experience by launching six AI-controlled robotic arms to space through NASA's student rocket programs while still undergraduates. These missions involved throwing and catching objects in microgravity using deep reinforcement learning trained in simulation and tested on Earth. Such hands-on space experience is extremely rare and gave them practical knowledge that informed their current commercial venture.
5. Hardware Challenges Require Innovative Engineering Solutions: Space presents unique technical challenges, including radiation-induced single event upsets that can reset processors for up to 10 seconds, requiring "passive safe" trajectories that won't cause collisions even during system resets. Unlike traditional space companies that spend $100,000 on radiation-hardened processors, Orbital Robotics uses automotive-grade components made radiation-tolerant through smart software and electrical design, enabling cost-effective operations while maintaining safety.
6. Space Manufacturing Supply Chain Constraints: The space industry faces significant manufacturing bottlenecks, with 24-week lead times for space-grade components and a small pool of suppliers serving many companies simultaneously. This creates challenges for scaling production: Orbital Robotics needs to manufacture 30 robotic arms per year within a few years. They've partnered with manufacturers who previously worked on Blue Origin's rocket engines to address these supply chain limitations and achieve the scale their deployment timeline demands.
7. Emerging Space Economy Beyond Communications: While current commercial space activity focuses primarily on communications satellites (with SpaceX Starlink holding 60% market share) and Earth observation, new sectors are emerging, including AI data centers in space and orbital manufacturing. The convergence of AI, robotics, and space technology is enabling more sophisticated autonomous operations, from predictive maintenance of rocket engines using sensor data to complex orbital maneuvering and satellite servicing that was previously impossible with traditional control methods.
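The "passive safe" idea in insight 5 (a trajectory must stay collision-free even if the processor resets and the vehicle coasts uncontrolled for several seconds) can be illustrated with a toy check. This is a sketch under strong simplifying assumptions: straight-line relative motion instead of real orbital dynamics (e.g. Clohessy-Wiltshire equations), and the function name, keep-out radius, and numbers are all hypothetical, not Orbital Robotics' actual method.

```python
def passively_safe(rel_pos, rel_vel, horizon_s, keep_out_m, dt=0.1):
    """Check that a coasting (thrust-off) relative trajectory stays outside
    a keep-out sphere around the target for the whole blackout horizon.

    rel_pos:    chaser position relative to target, metres (x, y, z)
    rel_vel:    relative velocity, m/s
    horizon_s:  how long the vehicle might coast uncontrolled (e.g. a reset)
    keep_out_m: minimum allowed approach distance
    """
    x, y, z = rel_pos
    vx, vy, vz = rel_vel
    t = 0.0
    while t <= horizon_s:
        # Straight-line propagation; real analyses use orbital relative dynamics.
        px, py, pz = x + vx * t, y + vy * t, z + vz * t
        if (px * px + py * py + pz * pz) ** 0.5 < keep_out_m:
            return False
        t += dt
    return True

# Approaching at 0.5 m/s from 100 m out: a 10 s blackout closes only 5 m,
# so a 20 m keep-out sphere is never violated.
safe_far = passively_safe((100.0, 0.0, 0.0), (-0.5, 0.0, 0.0), 10.0, 20.0)
# The same approach started from 22 m would coast inside the keep-out sphere.
safe_near = passively_safe((22.0, 0.0, 0.0), (-0.5, 0.0, 0.0), 10.0, 20.0)
print(safe_far, safe_near)
```

The planner's job, on this view, is to choose approach waypoints so that every point along the nominal trajectory passes a check like this for the worst-case reset duration.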























