Crazy Wisdom
Author: Stewart Alsop
© 2026 Stewart Alsop
Description
In his series "Crazy Wisdom," Stewart Alsop explores cutting-edge topics, particularly in the realm of technology, such as Urbit and artificial intelligence. Alsop embarks on a quest for meaning, engaging with others to expand his own understanding of reality and that of his audience. The topics covered in "Crazy Wisdom" are diverse, ranging from emerging technologies to spirituality, philosophy, and general life experiences. Alsop's unique approach aims to make connections between seemingly unrelated subjects, tying together ideas in unconventional ways.
529 Episodes
In this episode of the Crazy Wisdom Podcast, host Stewart Alsop sits down with Larry Swanson, a knowledge architect, community builder, and host of the Knowledge Graph Insights podcast. They explore the relationship between knowledge graphs and ontologies, why these technologies matter in the age of AI, and how symbolic AI complements the current wave of large language models. The conversation traces the history of neuro-symbolic AI from its origins at Dartmouth in 1956 through the semantic web vision of Tim Berners-Lee, examining why knowledge architecture remains underappreciated despite being deployed at major enterprises like Netflix, Amazon, and LinkedIn. Swanson explains how RDF (Resource Description Framework) enables both machines and humans to work with structured knowledge in ways that relational databases can't, while Alsop shares his journey from knowledge management director to understanding the practical necessity of ontologies for business operations. They discuss the philosophical roots of the field, the separation between knowledge management practitioners and knowledge engineers, and why startups often overlook these approaches until scale demands them. You can find Larry's podcast at KGI.fm or search for Knowledge Graph Insights on Spotify and YouTube.

Timestamps
00:00 Introduction to Knowledge Graphs and Ontologies
01:09 The Importance of Ontologies in AI
04:14 Philosophy's Role in Knowledge Management
10:20 Debating the Relevance of RDF
15:41 The Distinction Between Knowledge Management and Knowledge Engineering
21:07 The Human Element in AI and Knowledge Architecture
25:07 Startups vs. Enterprises: The Knowledge Gap
29:57 Deterministic vs. Probabilistic AI
32:18 The Marketing of AI: A Historical Perspective
33:57 The Role of Knowledge Architecture in AI
39:00 Understanding RDF and Its Importance
44:47 The Intersection of AI and Human Intelligence
50:50 Future Visions: AI, Ontologies, and Human Behavior

Key Insights
1. Knowledge Graphs Combine Structure and Instances Through Ontological Design. A knowledge graph is built using an ontology that describes a specific domain you want to understand or work with. It includes both an ontological description of the terrain—defining what things exist and how they relate to one another—and instances of those things mapped to real-world data. This combination of abstract structure and concrete examples is what makes knowledge graphs powerful for discovery, question-answering, and enabling agentic AI systems. Not everyone agrees on the precise definition, but this understanding represents the practical approach most knowledge architects use when building these systems.

2. Ontology Engineering Has Deep Philosophical Roots That Inform Modern Practice. The field draws heavily from classical philosophy, particularly ontology (the nature of what you know), epistemology (how you know what you know), and logic. These millennia-old philosophical frameworks provide the rigorous foundation for modern knowledge representation. Living in Heidelberg surrounded by philosophers, Swanson has discovered how much of knowledge graph work connects upstream to these philosophical roots. This philosophical grounding becomes especially important during times when institutional structures are collapsing, as we need to create new epistemological frameworks for civilization—knowledge management and ontology become critical tools for restructuring how we understand and organize information.

3. The Semantic Web Vision Aimed to Transform the Internet Into a Distributed Database. Twenty-five years ago, Tim Berners-Lee, Jim Hendler, and Ora Lassila published a landmark article in Scientific American proposing the semantic web. While Berners-Lee had already connected documents across the web through HTML and HTTP, the semantic web aimed to connect all the data—essentially turning the internet into a giant database. This vision led to the development of RDF (Resource Description Framework), which emerged from DARPA research and provides the technical foundation for building knowledge graphs and ontologies. The origin story involved solving simple but important problems, like disambiguating whether "Cook" referred to a verb, noun, or a person's name at an academic conference.

4. Symbolic AI and Neural Networks Represent Complementary Approaches Like Fast and Slow Thinking. Drawing on Kahneman's "thinking fast and slow" framework, LLMs represent the "fast brain"—learning monsters that can process enormous amounts of information and recognize patterns through natural language interfaces. Symbolic AI and knowledge graphs represent the "slow brain"—capturing actual knowledge and facts that can counter hallucinations and provide deterministic, explainable reasoning. This complementarity is driving the re-emergence of neuro-symbolic AI, which combines both approaches. The fundamental distinction is that symbolic AI systems are deterministic and can be fully explained, while LLMs are probabilistic and stochastic, making them unsuitable for applications requiring absolute reliability, such as industrial robotics or pharmaceutical research.

5. Knowledge Architecture Remains Underappreciated Despite Powering Major Enterprises. While machine learning engineers currently receive most of the attention and budget, knowledge graphs actually power systems at Netflix, Amazon (the product graph), LinkedIn (the economic graph), Meta, and most major enterprises. The technology has been described as "the most astoundingly successful failure in the history of technology"—the semantic web vision seemed to fail, yet more than half of web pages now contain RDF-formatted semantic markup through schema.org, and every major enterprise uses knowledge graph technology in the background. Knowledge architects remain underappreciated partly because the work is cognitively difficult, requires talking to people (which engineers often avoid), and most advanced practitioners have PhDs in computer science, logic, or philosophy.

6. RDF's Simple Subject-Predicate-Object Structure Enables Meaning and Data Linking. Unlike relational databases that store data in tables with rows and columns, RDF uses the simplest linguistic structure: subject-predicate-object (like "Larry knows Stewart"). Each element has a unique URI identifier, which permits precise meaning and enables linked data across systems. This graph structure makes it much easier to connect data after the fact compared to navigating tabular structures in relational databases. On top of RDF sits an entire stack of technologies including schema languages, query languages, ontological languages, and constraints languages—everything needed to turn data into actionable knowledge. The goal is inferring or articulating knowledge from RDF-structured data. (A minimal triple example appears after this list.)

7. The Future Requires Decoupled Modular Architectures Combining Multiple AI Approaches. The vision for the future involves separation of concerns through microservices-like architectures where different systems handle what they do best. LLMs excel at discovering possibilities and generating lists, while knowledge graphs excel at articulating human-vetted, deterministic versions of that information that systems can reliably use. Every one of Swanson's 300 podcast interviews over ten years ultimately concludes that regardless of technology, success comes down to human beings, their behavior, and the cultural changes needed to implement systems. The assumption that we can simply eliminate people from processes misses that humans...
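To make insight 6 concrete, here is a minimal sketch of a single RDF triple and a SPARQL query over it. The Python rdflib library and the example.org URIs are my own illustrative choices; the episode discusses RDF itself, not any particular library.

```python
from rdflib import Graph, Namespace
from rdflib.namespace import FOAF

# Every term gets a URI, which is what gives it a precise, linkable identity.
EX = Namespace("http://example.org/people/")  # hypothetical namespace for this sketch

g = Graph()
# One triple in subject-predicate-object form: Larry knows Stewart.
g.add((EX.Larry, FOAF.knows, EX.Stewart))

# SPARQL, one of the query languages in the stack that sits on top of RDF.
results = g.query(
    "SELECT ?who WHERE { ?who <http://xmlns.com/foaf/0.1/knows> ?someone }"
)
for row in results:
    print(row.who)  # -> http://example.org/people/Larry

# Serializing to Turtle shows the triple in its plain subject-predicate-object form.
print(g.serialize(format="turtle"))
```

Because both ends of the triple are URIs rather than rows in a private table, another dataset can link to the same identifiers, which is the "linked data" property the episode emphasizes.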
In this episode of the Crazy Wisdom Podcast, host Stewart Alsop explores the complex world of context and knowledge graphs with guest Youssef Tharwat, the founder of NoodlBox, who is building a ".git" for context. Their conversation spans from the philosophical nature of context and its crucial role in AI development, to the technical challenges of creating deterministic tools for software development. Tharwat explains how his product creates portable, versionable knowledge graphs from code repositories, leveraging the semantic relationships already present in programming languages to provide agents with better contextual understanding. They discuss the limitations of large context windows, the advantages of Rust for AI-assisted development, the recent Claude/Bun acquisition, and the broader geopolitical implications of the AI race between big tech companies and open-source alternatives. The conversation also touches on the sustainability of current AI business models and the potential for more efficient, locally-run solutions to challenge the dominance of compute-heavy approaches.

For more information about NoodlBox and to join the beta, visit NoodlBox.io.

Timestamps
00:00 Stewart introduces Youssef Tharwat, founder of NoodlBox, building context management tools for programming
05:00 Context as relevant information for reasoning; importance when hitting coding barriers
10:00 Knowledge graphs enable semantic traversal through meaning vs keywords/files
15:00 Deterministic vs probabilistic systems; why critical applications need 100% reliability
20:00 CLI tool makes knowledge graphs portable, versionable artifacts with code repos
25:00 Compiler front-ends, syntax trees, and Rust's superior feedback for AI-assisted coding
30:00 Claude's Bun acquisition signals potential shift toward runtime compilation and graph-based context
35:00 Open source vs proprietary models; user frustration with rate limits and subscription tactics
40:00 Singularity path vs distributed sovereignty of developers building alternative architectures
45:00 Global economics and why brute force compute isn't sustainable worldwide
50:00 Corporate inefficiencies vs independent engineering; changing workplace dynamics
55:00 February open beta for NoodlBox.io; vision for new development tool standards

Key Insights
1. Context is semantic information that enables proper reasoning, and traditional LLM approaches miss the mark. Youssef defines context as the information you need to reason correctly about something. He argues that larger context windows don't scale because quality degrades with more input, similar to human cognitive limitations. This insight challenges the Silicon Valley approach of throwing more compute at the problem and suggests that semantic separation of information is more optimal than brute force methods.

2. Code naturally contains semantic boundaries that can be modeled into knowledge graphs without LLM intervention. Unlike other domains where knowledge graphs require complex labeling, code already has inherent relationships like function calls, imports, and dependencies. Youssef leverages these existing semantic structures to automatically build knowledge graphs, making his approach deterministic rather than probabilistic. This provides the reliability that software development has historically required. (A small sketch of this idea appears after this list.)

3. Knowledge graphs can be made portable, versionable, and shareable as artifacts alongside code repositories. Youssef's vision treats context as a first-class citizen in version control, similar to how Git manages code. Each commit gets a knowledge graph snapshot, allowing developers to see conceptual changes over time and share semantic understanding with collaborators. This transforms context from an ephemeral concept into a concrete, manageable asset.

4. The dependency problem in modern development can be solved through pre-indexed knowledge graphs of popular packages. Rather than agents struggling with outdated API documentation, Youssef pre-indexes popular npm packages into knowledge graphs that automatically integrate with developers' projects. This federated approach ensures agents understand exact APIs and current versions, eliminating common frustrations with deprecated methods and unclear documentation.

5. Rust provides superior feedback loops for AI-assisted programming due to its explicit compiler constraints. Youssef rebuilt his tool multiple times in different languages, ultimately settling on Rust because its picky compiler provides constant feedback to LLMs about subtle issues. This creates a natural quality control mechanism that helps AI generate more reliable code, making Rust an ideal candidate for AI-assisted development workflows.

6. The current AI landscape faces a fundamental tension between expensive centralized models and the need for global accessibility. The conversation reveals growing frustration with rate limiting and subscription costs from major providers like Claude and Google. Youssef believes something must fundamentally change because $200-300 monthly plans only serve a fraction of the world's developers, creating pressure for more efficient architectures and open alternatives.

7. Deterministic tooling built on semantic understanding may provide a competitive advantage against probabilistic AI monopolies. While big tech companies pursue brute force scaling with massive data centers, Youssef's approach suggests that clever architecture using existing semantic structures could level the playing field. This represents a broader philosophical divide between the "singularity" path of infinite compute and the "disagreeably autistic engineer" path of elegant solutions that work locally and affordably.
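As a concrete illustration of insight 2, the sketch below extracts "imports" and "calls" edges from Python source using only the standard ast module, with no LLM involved. NoodlBox itself is described as being built in Rust for real repositories; this toy example, including its function and predicate names, is only an assumption-laden way to show that code's existing structure can be read off deterministically as graph edges.

```python
import ast

def code_to_triples(source: str, module_name: str) -> list[tuple[str, str, str]]:
    """Walk a Python syntax tree and emit (subject, predicate, object) edges."""
    tree = ast.parse(source)
    triples: list[tuple[str, str, str]] = []
    for node in ast.walk(tree):
        # Import statements become "imports" edges from the module node.
        if isinstance(node, ast.Import):
            for alias in node.names:
                triples.append((module_name, "imports", alias.name))
        elif isinstance(node, ast.ImportFrom) and node.module:
            triples.append((module_name, "imports", node.module))
        # Each function definition contributes "calls" edges for the plain names it invokes.
        elif isinstance(node, ast.FunctionDef):
            for child in ast.walk(node):
                if isinstance(child, ast.Call) and isinstance(child.func, ast.Name):
                    triples.append((node.name, "calls", child.func.id))
    return triples

example = '''
import json

def load(path):
    with open(path) as f:
        return json.load(f)
'''
print(code_to_triples(example, "mymodule"))
# [('mymodule', 'imports', 'json'), ('load', 'calls', 'open')]
```

The same edges could be snapshotted per commit, which is roughly the shape of the versionable artifact insight 3 describes.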
In this episode of the Crazy Wisdom Podcast, host Stewart Alsop sits down with Adrian Martinca, founder of the Arc of Dreams and the Open Doors movements, as well as Kids Dreams Matter, to explore how artificial intelligence is fundamentally reshaping human consciousness and family structures. Their conversation spans from the karmic lessons of our technological age to practical frameworks for protecting children from what Martinca calls the "AI flood" - examining how AI functions as an alien intelligence that has become the primary caregiver for children through 10.5 hours of daily screen exposure, and discussing Martinca's vision for inverting our relationship with technology through collective dreams and family-centered data management systems. For those interested in learning more about Martinca's work to reshape humanity's relationship with AI, visit opendoorsmovement.org.

Timestamps
00:00 Introduction to Adrian Martinca
00:17 The Future and Human Choice
02:03 Generational Trauma and Its Impact
05:19 Understanding Consciousness and Suffering
09:11 AI, Social Media, and Emotional Manipulation
20:03 The AI Nexus Point and National Security
31:13 The Librarian Analogy: Understanding AI's Role
39:28 The Arc: A Framework for Future Generations
47:57 Empowering Children in an AI-Driven World
57:15 Reclaiming Agency in the Age of AI

Key Insights
1. AI as Alien Intelligence, Not Artificial Intelligence: Martinca reframes AI as fundamentally alien rather than artificial, arguing that because it possesses knowledge no human could have (like knowing "every book in the library"), it should be treated as an immigrant that must be assimilated into society rather than governed. This alien intelligence already controls social media algorithms and is becoming the primary caregiver of children through 10.5 hours of daily screen time.

2. The AI Nexus Point as National Security Risk: Modern warfare has shifted to information-based attacks where hostile nations can deploy millions of fake accounts to manipulate AI algorithms, influencing how real citizens are targeted with content. This creates a vulnerability where foreign powers can break apart family units and exhaust populations without traditional military engagement, making people too tired and divided to resist.

3. Generational Trauma as the Foundation of Consciousness: Drawing from Kundalini philosophy, Martinca explains that the first layer of consciousness development begins with inherited generational trauma. Children absorb their parents' unresolved suffering unconsciously, creating patterns that shape their worldview. This makes families both the source of early wounds and the pathway to healing, as parents witness their trauma affecting those they love most.

4. The Choice Between Fear-Based and Love-Based Futures: Despite appearing chaotic, our current moment represents a critical choice point where humanity can collectively decide to function as a family. The fundamental choice underlying all decisions is alleviating suffering for our children and loved ones, but technology has created reference-based choices driven by doubt and fear rather than genuine human values.

5. Social Media's Scientific Method Problem: Current platforms use the scientific method to maximize engagement, but the only reliably measurable emotions through screens are doubt and fear because positive emotions like love and hope lead people to put their devices down and connect in person. This creates systems that systematically promote negative emotional states to maintain user attention and generate revenue.

6. The Arc of Dreams as Collective Vision: Martinca proposes a new data management system where families challenge children to envision their ideal future as heroes, collecting these dreams to create a unified vision for humanity. This would shift from bureaucratic fund allocation to child-centered prioritization, using children's visions of reduced suffering to guide AI development and social policy.

7. Agency vs. Overwhelm in the Information Age: While some people develop agency through AI exposure and become more capable, many others experience information overload leading to inaction, confusion, depression, and even suicide. The key intervention is reframing dreams from material outcomes to states of being, helping children maintain their sense of self and agency rather than becoming passive consumers of algorithmic content.
Stewart Alsop interviews Tomas Yu, CEO and founder of Turn-On Financial Technologies, on this episode of the Crazy Wisdom Podcast. They explore how Yu's company is revolutionizing the closed-loop payment ecosystem by creating a universal float system that allows gift card credits to be used across multiple merchants rather than being locked to a single business like Starbucks. The conversation covers the complexities of fintech regulation, the differences between open and closed loop payment systems, and Yu's unique background that combines Korean martial arts discipline with Mexican polo culture. They also dive into Yu's passion for polo, discussing the intimate relationship between rider and horse, the sport's elitist tendencies in different regions, and his efforts to build polo communities from El Paso to New Mexico. Find Tomas on LinkedIn under Tommy (TJ) Alvarez.

Timestamps
00:00 Introduction to TurnOn Technologies
02:45 Understanding Float and Its Implications
05:45 Decentralized Gift Card System
08:39 Navigating the FinTech Landscape
11:19 The Role of Merchants and Consumers
14:15 Challenges in the Gift Card Market
17:26 The Future of Payment Systems
23:12 Understanding Payment Systems: Stripe and POS
26:47 Regulatory Landscape: KYC and AML in Payments
27:55 The Impact of Economic Conditions on Financial Systems
36:39 Transitioning from Industrial to Information Age Finance
38:18 Curiosity and Resourcefulness in the Information Age
45:09 Social Media and the Dynamics of Attention
46:26 From Restaurant to Polo: A Journey of Mentorship
49:50 The Thrill of Polo: Learning and Obsession
54:53 Building a Team: Breaking Elitism in Polo
01:00:29 The Unique Bond: Understanding the Horse-Rider Relationship
01:05:21 Polo Horses: Choosing the Right Breed for the Game

Key Insights
1. Turn-On Technologies is revolutionizing payment systems through behavioral finance by creating a decentralized "float" system. Unlike traditional gift cards that lock customers into single merchants like Starbucks, Turn-On allows universal credit that works across their entire merchant ecosystem. This addresses the massive gift card market where companies like Starbucks hold billions in customer funds that can only be used at their locations.

2. The financial industry operates on an exclusionary "closed loop" versus "open loop" system that creates significant friction and fees. Closed loop systems keep money within specific ecosystems without conversion to cash, while open loop systems allow cash withdrawal but trigger heavy regulation. Every transaction through traditional payment processors like Stripe can cost merchants 3-8% in fees, representing a massive burden on businesses.

3. Point-of-sale systems function as the financial bloodstream and credit scoring mechanism for businesses. These systems track all card transactions and serve as the primary data source for merchant lending decisions. The gap between POS records and bank deposits reveals cash transactions that businesses may not be reporting, making POS data crucial for assessing business creditworthiness and loan risk.

4. Traditional FinTech professionals often miss obvious opportunities due to ego and institutional thinking. Yu encountered resistance from established FinTech experts who initially dismissed his gift card-focused approach, despite the trillion-dollar market size. The financial industry's complexity is sometimes artificially maintained to exclude outsiders rather than serve genuine regulatory purposes.

5. The information age is creating a fundamental divide between curious, resourceful individuals and those stuck in credentialist systems. With AI and LLMs amplifying human capability, people who ask the right questions and maintain curiosity will become exponentially more effective. Meanwhile, those relying on traditional credentials without underlying curiosity will fall further behind, creating unprecedented economic and social divergence.

6. Polo serves as a powerful business metaphor and relationship-building tool that mirrors modern entrepreneurial challenges. Like mixed martial arts evolved from testing individual disciplines, business success now requires being competent across multiple areas rather than excelling in just one specialty. The sport also creates unique networking opportunities and teaches valuable lessons about partnership between human and animal.

7. International financial systems reveal how governments use complexity and capital controls to maintain power over citizens. Yu's observations about Argentina's financial restrictions and the prevalence of cash economies in Latin America illustrate how regulatory complexity often serves political rather than protective purposes, creating opportunities for alternative financial systems that provide genuine value to users.
In this episode of the Crazy Wisdom podcast, host Stewart Alsop sits down with Dima Zhelezov, a philosopher at SQD.ai, to explore the fascinating intersections of cryptocurrency, AI, quantum physics, and the future of human knowledge. The conversation covers everything from Zhelezov's work building decentralized data lakes for blockchain data to deep philosophical questions about the nature of mathematical beauty, the Renaissance ideal of curiosity-driven learning, and whether AI agents will eventually develop their own form of consciousness. Stewart and Dima examine how permissionless databases are making certain activities "unenforceable" rather than illegal, the paradox of mathematics' incredible accuracy in describing the physical world, and why we may be entering a new Renaissance era where curiosity becomes humanity's most valuable skill as AI handles traditional tasks.

You can find more about Dima's work at SQD.ai and follow him on X at @dizhel.

Timestamps
00:00 Introduction to Decentralized Data Lakes
02:55 The Evolution of Blockchain Data Management
05:55 The Intersection of Blockchain and Traditional Databases
08:43 The Role of AI in Transparency and Control
11:51 AI Autonomy and Human Interaction
15:05 Curiosity in the Age of AI
17:54 The Renaissance of Knowledge and Learning
20:49 Mathematics, Beauty, and Discovery
27:30 The Evolution of Mathematical Thought
30:28 Quantum Mechanics and Mathematical Predictions
33:43 The Search for a Unified Theory
38:57 The Role of Gravity in Physics
41:23 The Shift from Physics to Biology
46:19 The Future of Human Interaction in a Digital Age

Key Insights
1. Blockchain as a Permissionless Database Solution - Traditional blockchains were designed for writing transactions but not efficiently reading data. Dima's company SQD.ai built a decentralized data lake that maintains blockchain's key properties (open read/write access, verifiable, no registration required) while solving the database problem. This enables applications like Polymarket to exist because there's "no one to subpoena" - the permissionless nature makes enforcement impossible even when activities might be regulated in traditional systems.

2. The Convergence of On-Chain and Off-Chain Data - The future won't have distinct "blockchain applications" versus traditional apps. Instead, we'll see seamless integration where users don't even know they're using blockchain technology. The key differentiator is that blockchain provides open read and write access without permission, which becomes essential when touching financial or politically sensitive applications that governments might try to shut down through traditional centralized infrastructure.

3. AI Autonomy and the Illusion of Control - We're rapidly approaching full autonomy of AI agents that can transact and analyze information independently through blockchain infrastructure. While humans still think anthropocentrically about AI as companions or tools, these systems may develop consciousness or motivations completely alien to human understanding. This creates a dangerous "illusion of control" where we can operationalize AI systems without truly comprehending their decision-making processes.

4. Curiosity as the Essential Future Skill - In a world of infinite knowledge and AI capabilities, curiosity becomes the primary limiting factor for human progress. Traditional hard and soft skills will be outsourced to AI, making the ability to ask good questions and pursue interests through Socratic dialogue with AI the most valuable human capacity. This mirrors the Renaissance ideal of the polymath, now enabled by AI that allows non-linear exploration of knowledge rather than traditional linear textbook learning.

5. The Beauty Principle in Mathematical Discovery - Mathematics exhibits an "unreasonable effectiveness" where theories developed purely abstractly turn out to predict real-world phenomena with extraordinary accuracy. Quantum chromodynamics, developed through mathematical beauty and elegance, can predict particle physics experiments to incredible precision. This suggests either mathematical truths exist independently for AI to discover, or that aesthetic principles may be fundamental organizing forces in the universe.

6. The Physics Plateau and Biological Shift - Modern physics faces a unique problem where the Standard Model works too well - it explains everything we can currently measure except gravity, but we can't create experiments to test the edge cases where the theory should break down. This has led to a decline in physics prominence since the 1960s, with scientific excitement shifting toward biology and, now, AI and crypto, where breakthrough discoveries remain accessible.

7. Two Divergent Futures: Abundance vs. Dystopia - We face a stark choice between two AI futures: a super-abundant world where AI eliminates scarcity and humans pursue curiosity, beauty, and genuine connection; or a dystopian scenario where 0.01% capture all AI-generated value while everyone else survives on UBI, becoming "degraded to zombies" providing content for AI models. The outcome depends on whether we prioritize human flourishing or power concentration during this critical technological transition.
In this episode of the Crazy Wisdom podcast, host Stewart Alsop welcomes Roni Burd, a data and AI executive with extensive experience at Amazon and Microsoft, for a deep dive into the evolving landscape of data management and artificial intelligence in enterprise environments. Their conversation explores the longstanding challenges organizations face with knowledge management and data architecture, from the traditional bronze-silver-gold data processing pipeline to how AI agents are revolutionizing how people interact with organizational data without needing SQL or Python expertise. Burd shares insights on the economics of AI implementation at scale, the debate between one-size-fits-all models versus specialized fine-tuned solutions, and the technical constraints that prevent companies like Apple from upgrading services like Siri to modern LLM capabilities. They also discuss the future of inference optimization and the hundreds-of-millions-of-dollars cost barrier that makes architectural experimentation in AI uniquely expensive compared to other industries.

Timestamps
00:00 Introduction to Data and AI Challenges
03:08 The Evolution of Data Management
05:54 Understanding Data Quality and Metadata
08:57 The Role of AI in Data Cleaning
11:50 Knowledge Management in Large Organizations
14:55 The Future of AI and LLMs
17:59 Economics of AI Implementation
29:14 The Importance of LLMs for Major Tech Companies
32:00 Open Source: Opportunities and Challenges
35:19 The Future of AI Inference and Hardware
43:24 Optimizing Inference: The Next Frontier
49:23 The Commercial Viability of AI Models

Key Insights
1. Data Architecture Evolution: The industry has evolved through bronze-silver-gold data layers, where bronze is raw data, silver is cleaned/processed data, and gold is business-ready datasets. However, this creates bottlenecks as stakeholders lose access to original data during the cleaning process, making metadata and data cataloging increasingly critical for organizations. (A small pipeline sketch follows after this list.)

2. AI Democratizing Data Access: LLMs are breaking down technical barriers by allowing business users to query data in plain English without needing SQL, Python, or dashboarding skills. This represents a fundamental shift from requiring intermediaries to direct stakeholder access, though the full implications remain speculative.

3. Economics Drive AI Architecture Decisions: Token costs and latency requirements are major factors determining AI implementation. Companies like Meta likely need their own models because paying per-token for billions of social media interactions would be economically unfeasible, driving the need for self-hosted solutions.

4. One Model Won't Rule Them All: Despite initial hopes for universal models, the reality points toward specialized models for different use cases. This is driven by economics (smaller models for simple tasks), performance requirements (millisecond response times), and industry-specific needs (medical, military terminology).

5. Inference is the Commercial Battleground: The majority of commercial AI value lies in inference rather than training. Current GPUs, while specialized for graphics and matrix operations, may still be too general for optimal inference performance, creating opportunities for even more specialized hardware.

6. Open Source vs Open Weights Distinction: True open source in AI means access to architecture for debugging and modification, while "open weights" enables fine-tuning and customization. This distinction is crucial for enterprise adoption, as open weights provide the flexibility companies need without starting from scratch.

7. Architecture Innovation Faces Expensive Testing Loops: Unlike database optimization where query plans can be easily modified, testing new AI architectures requires expensive retraining cycles costing hundreds of millions of dollars. This creates a potential innovation bottleneck, similar to aerospace industries where testing new designs is prohibitively expensive.
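To ground insight 1, here is a deliberately tiny bronze-silver-gold pipeline using pandas. The column names, cleaning rules, and the use of pandas are all illustrative assumptions; the point is only the layering, where each stage consumes the previous one and stakeholders typically see just the gold output.

```python
import pandas as pd

# Bronze: raw data exactly as ingested, duplicates and bad values included.
bronze = pd.DataFrame({
    "order_id": [1, 2, 2, 3],
    "amount": ["10.5", "n/a", "n/a", "7.0"],
    "region": ["US", "us", "us", None],
})

# Silver: deduplicated and cleaned, but still row-level detail.
silver = (
    bronze.drop_duplicates(subset="order_id")
    .assign(
        amount=lambda df: pd.to_numeric(df["amount"], errors="coerce"),
        region=lambda df: df["region"].str.upper(),
    )
    .dropna(subset=["amount"])
)

# Gold: a business-ready aggregate a stakeholder (or an LLM agent) can query directly.
gold = silver.groupby("region", dropna=False)["amount"].sum().reset_index(name="revenue")
print(gold)
```

The bottleneck Burd describes shows up even here: once the silver step drops or coerces values, anyone working only from gold has lost sight of what was in bronze, which is why metadata and cataloging matter.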
In this episode of the Crazy Wisdom podcast, host Stewart Alsop sits down with Kelvin Lwin for their second conversation exploring the fascinating intersection of AI and Buddhist cosmology. Lwin brings his unique perspective as both a technologist with deep Silicon Valley experience and a serious meditation practitioner who's spent decades studying Buddhist philosophy. Together, they examine how AI development fits into ancient spiritual prophecies, discuss the dangerous allure of LLMs as potentially "asura weapons" that can mislead users, and explore verification methods for enlightenment claims in our modern digital age. The conversation ranges from technical discussions about the need for better AI compilers and world models to profound questions about humanity's role in what Lwin sees as an inevitable technological crucible that will determine our collective spiritual evolution. For more information about Kelvin's work on attention training and AI, visit his website at alin.ai. You can also join Kelvin for live meditation sessions twice daily on Clubhouse at clubhouse.com/house/neowise.

Timestamps
00:00 Exploring AI and Spirituality
05:56 The Quest for Enlightenment Verification
11:58 AI's Impact on Spirituality and Reality
17:51 The 500-Year Prophecy of Buddhism
23:36 The Future of AI and Business Innovation
32:15 Exploring Language and Communication
34:54 Programming Languages and Human Interaction
36:23 AI and the Crucible of Change
39:20 World Models and Physical AI
41:27 The Role of Ontologies in AI
44:25 The Asura and Deva: A Battle for Supremacy
48:15 The Future of Humanity and AI
51:08 Persuasion and the Power of LLMs
55:29 Navigating the New Age of Technology

Key Insights
1. The Rarity of Polymath AI-Spirituality Perspectives: Kelvin argues that very few people are approaching AI through spiritual frameworks because it requires being a polymath with deep knowledge across multiple domains. Most people specialize in one field, and combining AI expertise with Buddhist cosmology requires significant time, resources, and academic background that few possess.

2. Traditional Enlightenment Verification vs. Modern Claims: There are established methods for verifying enlightenment claims in Buddhist traditions, including adherence to the five precepts and overcoming hell rebirth through karmic resolution. Many modern Western practitioners claiming enlightenment fail these traditional tests, often changing the criteria when they can't meet the original requirements.

3. The 500-Year Buddhist Prophecy and Current Timing: We are approximately 60 years into a prophesied 500-year period where enlightenment becomes possible again. This "startup phase of Buddhism revival" coincides with technological developments like the internet and AI, which are seen as integral to this spiritual renaissance rather than obstacles to it.

4. LLMs as UI Solution, Not Reasoning Engine: While LLMs have solved the user interface problem of capturing human intent, they fundamentally cannot reason or make decisions due to their token-based architecture. The technology works well enough to create the illusion of capability, leading people down an asymptotic path away from true solutions.

5. The Need for New Programming Paradigms: Current AI development caters too much to human cognitive limitations through familiar programming structures. True advancement requires moving beyond human-readable code toward agent-generated languages that prioritize efficiency over human comprehension, similar to how compilers already translate high-level code.

6. AI as Asura Weapon in Spiritual Warfare: From a Buddhist cosmological perspective, AI represents an asura (demon-realm) tool that appears helpful but is fundamentally wasteful and disruptive to human consciousness. Humanity exists as the battleground between divine and demonic forces, with AI serving as a weapon that both sides employ in this cosmic conflict.

7. 2029 as Critical Convergence Point: Multiple technological and spiritual trends point toward 2029 as when various systems will reach breaking points, forcing humanity to either transcend current limitations or be consumed by them. This timing aligns with both technological development curves and spiritual prophecies about transformation periods.
In this episode of the Crazy Wisdom podcast, host Stewart Alsop sits down with Daniel Bar, co-founder of Space Computer, a satellite-based secure compute protocol that creates a "root of trust in space" using tamper-resistant hardware for cryptographic applications. The conversation explores the fascinating intersection of space technology, blockchain infrastructure, and trusted execution environments (TEEs), touching on everything from cosmic radiation-powered random number generators to the future of space-based data centers and Daniel's journey from quantum computing research to building what they envision as the next evolution beyond Ethereum's "world computer" concept. For more information about Space Computer, visit spacecomputer.io, and check out their new podcast "Frontier Pod" on the Space Computer YouTube channel.

Timestamps
00:00 Introduction to Space Computer
02:45 Understanding Layer 1 and Layer 2 in Space Computing
06:04 Trusted Execution Environments in Space
08:45 The Evolution of Trusted Execution Environments
11:59 The Role of Blockchain in Space Computing
14:54 Incentivizing Satellite Deployment
17:48 The Future of Space Computing and Its Applications
20:58 Radiation Hardening and Space Environment Challenges
23:45 Kardashev Civilizations and the Future of Energy
26:34 Quantum Computing and Its Implications
29:49 The Intersection of Quantum and Crypto
32:26 The Future of Space Computer and Its Vision

Key Insights
1. Space-based data centers solve the physical security problem for Trusted Execution Environments (TEEs). While TEEs provide secure compute through physical isolation, they remain vulnerable to attacks requiring physical access - like electron microscope forensics to extract secrets from chips. By placing TEEs in space, these attack vectors become practically impossible, creating the highest possible security guarantees for cryptographic applications.

2. The space computer architecture uses a hybrid layer approach with space-based settlement and earth-based compute. The layer 1 blockchain operates in space as a settlement layer and smart contract platform, while layer 2 solutions on earth provide high-performance compute. This design leverages space's security advantages while compensating for the bandwidth and compute constraints of orbital infrastructure through terrestrial augmentation.

3. True randomness generation becomes possible through cosmic radiation harvesting. Unlike pseudo-random number generators used in most blockchain applications today, space-based systems can harvest cosmic radiation as a genuinely stochastic process. This provides pure randomness critical for cryptographic applications like block producer selection, eliminating the predictability issues that compromise security in earth-based random number generation.

4. Space compute migration is inevitable as humanity advances toward Kardashev Type 1 civilization. The progression toward planetary-scale energy control requires space-based infrastructure including solar collection, orbital cities, and distributed compute networks. This technological evolution makes space-based data centers not just viable but necessary for supporting the scale of computation required for advanced civilization development.

5. The optimal use case for space compute is high-security applications rather than general data processing. While space-based data centers face significant constraints including 40kg of peripheral infrastructure per kg of compute, maintenance impossibility, and 5-year operational lifespans, these limitations become acceptable when the application requires maximum security guarantees that only space-based isolation can provide.

6. Space computer will evolve from centralized early-stage operation to a decentralized satellite constellation. Similar to early Ethereum's foundation-operated nodes, space computer currently runs trusted operations but aims to enable public participation through satellite ownership stakes. Future participants could fractionally own satellites providing secure compute services, creating economic incentives similar to Bitcoin mining pools or Ethereum staking.

7. Blockchain represents a unique compute platform that meshes hardware, software, and free market activity. Unlike traditional computers with discrete inputs and outputs, blockchain creates an organism where market participants provide inputs through trading, lending, and other economic activities, while the distributed network processes and returns value through the same market mechanisms, creating a cyborg-like integration of technology and economics.
In this episode of the Crazy Wisdom podcast, host Stewart Alsop sits down with Peter Schmidt Nielsen, who is building FPGA-accelerated servers at Saturn Data. The conversation explores why servers need FPGAs, how these field-programmable gate arrays work as "IO expanders" for massive memory bandwidth, and why they're particularly well-suited for vector database and search applications. Peter breaks down the technical realities of FPGAs - including why they "really suck" in many ways compared to GPUs and CPUs - while explaining how his company is leveraging them to provide terabyte-per-second bandwidth to 1.3 petabytes of flash storage. The discussion ranges from distributed systems challenges and the CAP theorem to the hardware-software relationship in modern computing, offering insights into both the philosophical aspects of search technology and the nuts-and-bolts engineering of memory controllers and routing fabrics.

For more information about Peter's work, you can reach him on Twitter at @PTRSCHMDTNLSN or find his website at saturndata.com.

Timestamps
00:00 Introduction to FPGAs and Their Role in Servers
02:47 Understanding FPGA Limitations and Use Cases
05:55 Exploring Different Types of Servers
08:47 The Importance of Memory and Bandwidth
11:52 Philosophical Insights on Search and Access Patterns
14:50 The Relationship Between Hardware and Search Queries
17:45 Challenges of Distributed Systems
20:47 The CAP Theorem and Its Implications
23:52 The Evolution of Technology and Knowledge Management
26:59 FPGAs as IO Expanders
29:35 The Trade-offs of FPGAs vs. ASICs and GPUs
32:55 The Future of AI Applications with FPGAs
35:51 Exciting Developments in Hardware and Business

Key Insights
1. FPGAs are fundamentally "crappy ASICs" with serious limitations - Despite being programmable hardware, FPGAs perform far worse than general-purpose alternatives in most cases. A $100,000 high-end FPGA might only match the memory bandwidth of a $600 gaming GPU. They're only valuable for specific niches like ultra-low latency applications or scenarios requiring massive parallel I/O operations, making them unsuitable for most computational workloads where CPUs and GPUs excel.

2. The real value of FPGAs lies in I/O expansion, not computation - Rather than using FPGAs for their processing power, Saturn Data leverages them primarily as cost-effective ways to access massive amounts of DRAM controllers and NVMe interfaces. Their server design puts 200 FPGAs in a 2U enclosure with 1.3 petabytes of flash storage and terabyte-per-second read bandwidth, essentially using FPGAs as sophisticated I/O expanders. (A per-device breakdown of these figures follows after this list.)

3. Access patterns determine hardware performance more than raw specs - The way applications access data fundamentally determines whether specialized hardware will provide benefits. Applications that do sparse reads across massive datasets (like vector databases) benefit from Saturn Data's architecture, while those requiring dense computation or frequent inter-node communication are better served by traditional hardware. Understanding these patterns is crucial for matching workloads to appropriate hardware.

4. Distributed systems complexity stems from failure tolerance requirements - The difficulty of distributed systems isn't inherent but depends on what failures you need to tolerate. Simple approaches that restart on any failure are easy but unreliable, while Byzantine fault tolerance (like Bitcoin) is extremely complex. Most practical systems, including banks, find middle ground by accepting occasional unavailability rather than trying to achieve perfect consistency, availability, and partition tolerance simultaneously.

5. Hardware specialization follows predictable cycles of generalization and re-specialization - Computing hardware consistently follows "Makimoto's Wave" - specialized hardware becomes more general over time, then gets leapfrogged by new specialized solutions. CPUs became general-purpose, GPUs evolved from fixed graphics pipelines to programmable compute, and now companies like Etched are creating transformer-specific ASICs. This cycle repeats as each generation adds programmability until someone strips it away for performance gains.

6. Memory bottlenecks are reshaping the hardware landscape - The AI boom has created severe memory shortages, doubling costs for DRAM components overnight. This affects not just GPU availability but creates opportunities for alternative architectures. When everyone faces higher memory costs, the relative premium for specialized solutions like FPGA-based systems becomes more attractive, potentially shifting the competitive landscape for memory-intensive applications.

7. Search applications represent ideal FPGA use cases due to their sparse access patterns - Vector databases and search workloads are particularly well-suited to FPGA acceleration because they involve searching through massive datasets with sparse access patterns rather than dense computation. These applications can effectively utilize the high bandwidth to flash storage and parallel I/O capabilities that FPGAs provide, making them natural early adopters for this type of specialized hardware architecture.
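To put the figures in insight 2 in perspective, here is the simple arithmetic of splitting them across the enclosure. The per-device shares are my own back-of-the-envelope calculation from the quoted totals, not numbers from the episode.

```python
fpgas = 200                # FPGAs per 2U enclosure (quoted in the episode)
total_flash_pb = 1.3       # petabytes of flash storage
total_read_tb_s = 1.0      # aggregate read bandwidth, terabytes per second

flash_per_fpga_tb = total_flash_pb * 1000 / fpgas   # PB -> TB, then split per device
bw_per_fpga_gb_s = total_read_tb_s * 1000 / fpgas   # TB/s -> GB/s, then split per device

print(f"~{flash_per_fpga_tb:.1f} TB of flash behind each FPGA")    # ~6.5 TB
print(f"~{bw_per_fpga_gb_s:.1f} GB/s of read bandwidth per FPGA")  # ~5.0 GB/s
```

Each FPGA's share is on the order of a single fast NVMe drive; the design's value comes from aggregating hundreds of such modest channels in parallel, which fits the sparse, I/O-bound access patterns described in insights 3 and 7.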
In this episode of the Crazy Wisdom Podcast, host Stewart Alsop interviews Aurelio Gialluca, an economist and full stack data professional who works across finance, retail, and AI as both a data engineer and machine learning developer, while also exploring human consciousness and psychology. Their wide-ranging conversation covers the intersection of science and psychology, the unique cultural characteristics that make Argentina a haven for eccentrics (drawing parallels to the United States), and how Argentine culture has produced globally influential figures from Borges to Maradona to Che Guevara. They explore the current AI landscape as a "centralizing force" creating cultural homogenization (particularly evident in LinkedIn's cookie-cutter content), discuss the potential futures of AI development from dystopian surveillance states to anarchic chaos, and examine how Argentina's emotionally mature, non-linear communication style might offer insights for navigating technological change. The conversation concludes with Gialluca describing his ambitious project to build a custom water-cooled workstation with industrial-grade processors for his quantitative hedge fund, highlighting the practical challenges of heat management and the recent tripling of RAM prices due to market consolidation.

Timestamps
00:00 Exploring the Intersection of Psychology and Science
02:55 Cultural Eccentricity: Argentina vs. the United States
05:36 The Influence of Religion on National Identity
08:50 The Unique Argentine Cultural Landscape
11:49 Soft Power and Cultural Influence
14:48 Political Figures and Their Cultural Impact
17:50 The Role of Sports in Shaping National Identity
20:49 The Evolution of Argentine Music and Subcultures
23:41 AI and the Future of Cultural Dynamics
26:47 Navigating the Chaos of AI in Culture
33:50 Equilibrating Society for a Sustainable Future
35:10 The Patchwork Age: Decentralization and Society
35:56 The Impact of AI on Human Connection
38:06 Individualism vs. Collective Rules in Society
39:26 The Future of AI and Global Regulations
40:16 Biotechnology: The Next Frontier
42:19 Building a Personal AI Lab
45:51 Tiers of AI Labs: From Personal to Industrial
48:35 Mathematics and AI: The Foundation of Innovation
52:12 Stochastic Models and Predictive Analytics
55:47 Building a Supercomputer: Hardware Insights

Key Insights
1. Argentina's Cultural Exceptionalism and Emotional Maturity: Argentina stands out globally for allowing eccentrics to flourish and having a non-linear communication style that Gialluca describes as "non-monotonous systems." Argentines can joke profoundly and be eccentric while simultaneously being completely organized and straightforward, demonstrating high emotional intelligence and maturity that comes from their unique cultural blend of European romanticism and Latino lightheartedness.

2. Argentina as an Underrecognized Cultural Superpower: Despite being introverted about their achievements, Argentina produces an enormous amount of global culture through music, literature, and iconic figures like Borges, Maradona, Messi, and Che Guevara. These cultural exports have shaped entire generations worldwide, with Argentina "stealing the thunder" from other nations and creating lasting soft power influence that people don't fully recognize as Argentine.

3. AI's Cultural Impact Follows Oscillating Patterns: Culture operates as a dynamic system that oscillates between centralization and decentralization like a sine wave. AI currently represents a massive centralizing force, as seen in LinkedIn's homogenized content, but this will inevitably trigger a decentralization phase. The speed of this cultural transformation has accelerated dramatically, with changes that once took generations now happening in years.

4. The Coming Bifurcation of AI Futures: Gialluca identifies two extreme possible endpoints for AI development: complete centralized control (the "Mordor" scenario with total surveillance) or complete chaos where everyone has access to dangerous capabilities like creating weapons or viruses. Finding a middle path between these extremes is essential for society's survival, requiring careful equilibrium between accessibility and safety.

5. Individual AI Labs Are Becoming Democratically Accessible: Gialluca outlines a tier system for AI capabilities, where individuals can now build "tier one" labs capable of fine-tuning models and processing massive datasets for tens of thousands of dollars. This democratization means that capabilities once requiring teams of PhD scientists can now be achieved by dedicated individuals, fundamentally changing the landscape of AI development and access.

6. Hardware Constraints Are the New Limiting Factor: While AI capabilities are rapidly advancing, practical implementation is increasingly constrained by hardware availability and cost. RAM prices have tripled in recent months, and the challenge of managing enormous heat output from powerful processors requires sophisticated cooling systems. These physical limitations are becoming the primary bottleneck for individual AI development.

7. Data Quality Over Quantity Is the Critical Challenge: The main bottleneck for AI advancement is no longer energy or GPUs, but high-quality data for training. Early data labeling efforts produced poor results because labelers lacked domain expertise. The future lies in reinforcement learning (RL) environments where AI systems can generate their own high-quality training data, representing a fundamental shift in how AI systems learn and develop.
In this episode of the Crazy Wisdom podcast, host Stewart Alsop sits down with Josh Halliday, who works on training super intelligence with frontier data at Turing. The conversation explores the fascinating world of reinforcement learning (RL) environments, synthetic data generation, and the crucial role of high-quality human expertise in AI training. Josh shares insights from his years working at Unity Technologies building simulated environments for everything from oil and gas safety scenarios to space debris detection, and discusses how the field has evolved from quantity-focused data collection to specialized, expert-verified training data that's becoming the key bottleneck in AI development. They also touch on the philosophical implications of our increasing dependence on AI technology and the emerging job market around AI training and data acquisition.

Timestamps
00:00 Introduction to AI and Reinforcement Learning
03:12 The Evolution of AI Training Data
05:59 Gaming Engines and AI Development
08:51 Virtual Reality and Robotics Training
11:52 The Future of Robotics and AI Collaboration
14:55 Building Applications with AI Tools
17:57 The Philosophical Implications of AI
20:49 Real-World Workflows and RL Environments
26:35 The Impact of Technology on Human Cognition
28:36 Cultural Resistance to AI and Data Collection
31:12 The Bottleneck of High-Quality Data in AI
32:57 Philosophical Perspectives on Data
35:43 The Future of AI Training and Human Collaboration
39:09 The Role of Subject Matter Experts in Data Quality
43:20 The Evolution of Work in the Age of AI
46:48 Convergence of AI and Human Experience

Key Insights
1. Reinforcement Learning environments are sophisticated simulations that replicate real-world enterprise workflows and applications. These environments serve as training grounds for AI agents by creating detailed replicas of tools like Salesforce, complete with specific tasks and verification systems. The agent attempts tasks, receives feedback on failures, and iterates until achieving consistent success rates, effectively learning through trial and error in a controlled digital environment. (A bare-bones sketch of this loop follows after this list.)

2. Gaming engines like Unity have evolved into powerful platforms for generating synthetic training data across diverse industries. From oil and gas companies needing hazardous scenario data to space intelligence firms tracking orbital debris, these real-time 3D engines with advanced physics can create high-fidelity simulations that capture edge cases too dangerous or expensive to collect in reality, bridging the gap where real-world data falls short.

3. The bottleneck in AI development has fundamentally shifted from data quantity to data quality. The industry has completely reversed course from the previous "scale at all costs" approach to focusing intensively on smaller, higher-quality datasets curated by subject matter experts. This represents a philosophical pivot toward precision over volume in training next-generation AI systems.

4. Remote teleoperation through VR is creating a new global workforce for robotics training. Workers wearing VR headsets can remotely control humanoid robots across the globe, teaching them tasks through direct demonstration. This creates opportunities for distributed talent while generating the nuanced human behavioral data needed to train autonomous systems.

5. Human expertise remains irreplaceable in the AI training pipeline despite advancing automation. Subject matter experts provide crucial qualitative insights that go beyond binary evaluations, offering the contextual "why" and "how" that transforms raw data into meaningful training material. The challenge lies in identifying, retaining, and properly incentivizing these specialists as demand intensifies.

6. First-person perspective data collection represents the frontier of human-like AI training. Companies are now paying people to life-log their daily experiences, capturing petabytes of egocentric data to train models more similarly to how human children learn through constant environmental observation, rather than traditional batch-processing approaches.

7. The convergence of simulation, robotics, and AI is creating unprecedented philosophical and practical challenges. As synthetic worlds become indistinguishable from reality and AI agents gain autonomy, we're entering a phase where the boundaries between digital and physical, human and artificial intelligence, become increasingly blurred, requiring careful consideration of dependency, agency, and the preservation of human capabilities.
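A bare-bones sketch of the loop insight 1 describes: an environment that poses an enterprise-style task, a verifier that scores each attempt, and an agent that iterates until its success rate clears a threshold. The class names, the toy ticket-routing task, the 80% target, and the random placeholder policy are all invented for illustration and say nothing about how Turing's environments are actually implemented.

```python
import random

class TicketTriageEnv:
    """Toy stand-in for a simulated enterprise workflow (e.g., routing a support ticket)."""

    CORRECT_QUEUE = "billing"

    def reset(self) -> str:
        # The task description the agent sees at the start of an episode.
        return "Route this invoice-dispute ticket to the right queue."

    def verify(self, action: str) -> bool:
        # Deterministic check: did the agent reach the desired end state?
        return action == self.CORRECT_QUEUE

def agent_policy(task: str, exploration: float) -> str:
    # Placeholder policy; a real agent would call a model here.
    queues = ["billing", "tech-support", "sales"]
    return random.choice(queues) if random.random() < exploration else "billing"

def train(episodes: int = 200, target_success: float = 0.8) -> float:
    env = TicketTriageEnv()
    successes = 0
    for episode in range(1, episodes + 1):
        task = env.reset()
        # Exploration decays as feedback accumulates across episodes.
        action = agent_policy(task, exploration=1.0 / episode)
        successes += env.verify(action)
        success_rate = successes / episode
        if episode >= 20 and success_rate >= target_success:
            return success_rate
    return successes / episodes

print(f"final success rate: {train():.2f}")
```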
In this episode of the Crazy Wisdom podcast, host Stewart Alsop interviews Marcin Dymczyk, CPO and co-founder of SevenSense Robotics, exploring the fascinating world of advanced robotics and AI. Their conversation covers the evolution from traditional "standard" robotics with predetermined pathways to advanced robotics that incorporates perception, reasoning, and adaptability - essentially the AGI of physical robotics. Dymczyk explains how his company builds "the eyes and brains of mobile robots" using camera-based autonomy algorithms, drawing parallels between robot sensing systems and human vision, inner ear balance, and proprioception. The discussion ranges from the technical challenges of sensor fusion and world models to broader topics including robotics regulation across different countries, the role of federalism in innovation, and how recent geopolitical changes are driving localized high-tech development, particularly in defense applications. They also touch on the democratization of robotics for small businesses and the philosophical implications of increasingly sophisticated AI systems operating in physical environments. To learn more about SevenSense, visit www.sevensense.ai.

Check out this GPT we trained on the conversation

Timestamps
00:00 Introduction to Robotics and Personal Journey
05:27 The Evolution of Robotics: From Standard to Advanced
09:56 The Future of Robotics: AI and Automation
12:09 The Role of Edge Computing in Robotics
17:40 FPGA and AI: The Future of Robotics Processing
21:54 Sensing the World: How Robots Perceive Their Environment
29:01 Learning from the Physical World: Insights from Robotics
33:21 The Intersection of Robotics and Manufacturing
35:01 Journey into Robotics: Education and Passion
36:41 Practical Robotics Projects for Beginners
39:06 Understanding Particle Filters in Robotics
40:37 World Models: The Future of AI and Robotics
41:51 The Black Box Dilemma in AI and Robotics
44:27 Safety and Interpretability in Autonomous Systems
49:16 Regulatory Challenges in Robotics and AI
51:19 Global Perspectives on Robotics Regulation
54:43 The Future of Robotics in Emerging Markets
57:38 The Role of Engineers in Modern Warfare

Key Insights
1. Advanced robotics transcends traditional programming through perception and intelligence. Dymczyk distinguishes between standard robotics that follows rigid, predefined pathways and advanced robotics that incorporates perception and reasoning. This evolution enables robots to make autonomous decisions about navigation and task execution, similar to how humans adapt to unexpected situations rather than following predetermined scripts.

2. Camera-based sensing systems mirror human biological navigation. SevenSense Robotics builds "eyes and brains" for mobile robots using multiple cameras (up to eight), IMUs (accelerometers/gyroscopes), and wheel encoders that parallel human vision, inner ear balance, and proprioception. This redundant sensing approach allows robots to navigate even when one system fails, such as operating in dark environments where visual sensors are compromised.

3. Edge computing dominates industrial robotics due to connectivity and security constraints. Many industrial applications operate in environments with poor connectivity (like underground grocery stores) or require on-premise solutions for confidentiality. This necessitates powerful local processing capabilities rather than cloud-dependent AI, particularly in automotive factories where data security about new models is paramount.

4. Safety regulations create mandatory "kill switches" that bypass AI decision-making. European and US regulatory bodies require deterministic safety systems that can instantly stop robots regardless of AI reasoning. These systems operate like human reflexes, providing immediate responses to obstacles while the main AI brain handles complex navigation and planning tasks. (A minimal sketch of this layering follows after this list.)

5. Modern robotics development benefits from increasingly affordable optical sensors. The democratization of 3D cameras, laser range finders, and miniature range measurement chips (costing just a few dollars from distributors like DigiKey) enables rapid prototyping and innovation that was previously limited to well-funded research institutions.

6. Geopolitical shifts are driving localized high-tech development, particularly in defense applications. The changing role of US global leadership and lessons from Ukraine's drone warfare are motivating countries like Poland to develop indigenous robotics capabilities. Small engineering teams can now create battlefield-effective technology using consumer drones equipped with advanced sensors.

7. The future of robotics lies in natural language programming for non-experts. Dymczyk envisions a transformation where small business owners can instruct robots using conversational language rather than complex programming, similar to how AI coding assistants now enable non-programmers to build applications through natural language prompts.
In this episode of the Crazy Wisdom Podcast, host Stewart Alsop sits down with Mike Bakon to explore the fascinating intersection of hardware hacking, blockchain technology, and decentralized systems. Their conversation spans from Mike's childhood fascination with taking apart electronics in 1980s Poland to his current work with ESP32 microcontrollers, LoRa mesh networks, and Cardano blockchain development. They discuss the technical differences between UTXO and account-based blockchains, the challenges of true decentralization versus hybrid systems, and how AI tools are changing the development landscape. Mike shares his vision for incentivizing mesh networks through blockchain technology and explains why he believes mass adoption of decentralized systems will come through abstraction rather than technical education. The discussion also touches on the potential for creating new internet infrastructure using ad hoc mesh networks and the importance of maintaining truly decentralized, permissionless systems in an increasingly surveilled world. You can find Mike on Twitter as @anothervariable.Check out this GPT we trained on the conversationTimestamps00:00 Introduction to Hardware and Early Experiences02:59 The Evolution of AI in Hardware Development05:56 Decentralization and Blockchain Technology09:02 Understanding UTXO vs Account-Based Blockchains11:59 Smart Contracts and Their Functionality14:58 The Importance of Decentralization in Blockchain17:59 The Process of Data Verification in Blockchain20:48 The Future of Blockchain and Its Applications34:38 Decentralization and Trustless Systems37:42 Mainstream Adoption of Blockchain39:58 The Role of Currency in Blockchain43:27 Interoperability vs Bridging in Blockchain47:27 Exploring Mesh Networks and LoRa Technology01:00:25 The Future of AI and DecentralizationKey Insights1. Hardware curiosity drives innovation from childhood - Mike's journey into hardware began as a child in 1980s Poland, where he would disassemble toys like battery-powered cars to understand how they worked. This natural curiosity about taking things apart and understanding their inner workings laid the foundation for his later expertise in microcontrollers like the ESP32 and his deep understanding of both hardware and software integration.2. AI as a research companion, not a replacement for coding - Mike uses AI and LLMs primarily as research tools and coding companions rather than letting them write entire applications. He finds them invaluable for getting quick answers to coding problems, analyzing Git repositories, and avoiding the need to search through Stack Overflow, but feels uneasy when AI writes whole functions, preferring to understand and write his own code.3. Blockchain decentralization requires trustless consensus verification - The fundamental difference between blockchain databases and traditional databases lies in the consensus process that data must go through before being recorded. Unlike centralized systems where one entity controls data validation, blockchains require hundreds of nodes to verify each block through trustless consensus mechanisms, ensuring data integrity without relying on any single authority.4. UTXO vs account-based blockchains have fundamentally different architectures - Cardano uses an extended UTXO model (like Bitcoin but with smart contracts) where transactions consume existing UTXOs and create new ones, keeping the ledger lean. 
Ethereum uses account-based ledgers that store persistent state, leading to much larger data requirements over time and making it increasingly difficult for individuals to sync and maintain full nodes independently.5. True interoperability differs fundamentally from bridging - Real blockchain interoperability means being able to send assets directly between different blockchains (like sending ADA to a Bitcoin wallet) without intermediaries. This is possible between UTXO-based chains like Cardano and Bitcoin. Bridges, in contrast, require centralized entities to listen for transactions on one chain and trigger corresponding actions on another, introducing centralization risks.6. Mesh networks need economic incentives for sustainable infrastructure - While technologies like LoRa and Meshtastic enable impressive decentralized communication networks, the challenge lies in incentivizing people to maintain the hardware infrastructure. Mike sees potential in combining blockchain-based rewards (like earning ADA for running mesh network nodes) with existing decentralized communication protocols to create self-sustaining networks.7. Mass adoption comes through abstraction, not education - Rather than trying to educate everyone about blockchain technology, mass adoption will happen when developers can build applications on decentralized infrastructure that users interact with seamlessly, without needing to understand the underlying blockchain mechanics. Users should be able to benefit from decentralization through well-designed interfaces that abstract away the complexity of wallets, addresses, and consensus mechanisms.
In this episode of the Crazy Wisdom Podcast, host Stewart Alsop speaks with Aaron Borger, founder and CEO of Orbital Robotics, about the emerging world of space robotics and satellite capture technology. The conversation covers a fascinating range of topics including Borger's early experience launching AI-controlled robotic arms to space as a student, his work at Blue Origin developing lunar lander software, and how his company is developing robots that can capture other spacecraft for refueling, repair, and debris removal. They discuss the technical challenges of operating in space - from radiation hardening electronics to dealing with tumbling satellites - as well as the broader implications for the space economy, from preventing the Kessler effect to building space-based recycling facilities and mining lunar ice for rocket fuel. You can find more about Aaron Borger’s work at Orbital Robots and follow him on LinkedIn for updates on upcoming missions and demos. Check out this GPT we trained on the conversationTimestamps00:00 Introduction to orbital robotics, satellite capture, and why sensing and perception matter in space 05:00 The Kessler Effect, cascading collisions, and why space debris is an economic problem before it is an existential one 10:00 From debris removal to orbital recycling and the idea of turning junk into infrastructure 15:00 Long-term vision of space factories, lunar ice, and refueling satellites to bootstrap a lunar economy 20:00 Satellite upgrading, servicing live spacecraft, and expanding today’s narrow space economy 25:00 Costs of collision avoidance, ISS maneuvers, and making debris capture economically viable 30:00 Early experiments with AI-controlled robotic arms, suborbital launches, and reinforcement learning in microgravity 35:00 Why deterministic AI and provable safety matter more than LLM hype for spacecraft control 40:00 Radiation, single event upsets, and designing space-safe AI systems with bounded behavior 45:00 AI, physics-based world models, and autonomy as the key to scaling space operations 50:00 Manufacturing constraints, space supply chains, and lessons from rocket engine software 55:00 The future of space startups, geopolitics, deterrence, and keeping space usable for humanityKey Insights1. Space Debris Removal as a Growing Economic Opportunity: Aaron Borger explains that orbital debris is becoming a critical problem with approximately 3,000-4,000 defunct satellites among the 15,000 total satellites in orbit. The company is developing robotic arms and AI-controlled spacecraft to capture other satellites for refueling, repair, debris removal, and even space station assembly. The economic case is compelling - it costs about $1 million for the ISS to maneuver around debris, so if their spacecraft can capture and remove multiple pieces of debris for less than that cost per piece, it becomes financially viable while addressing the growing space junk problem.2. Revolutionary AI Safety Methods Enable Space Robotics: Traditional NASA engineers have been reluctant to use AI for spacecraft control due to safety concerns, but Orbital Robotics has developed breakthrough methods combining reinforcement learning with traditional control systems that can mathematically prove the AI will behave safely. Their approach uses physics-based world models rather than pure data-driven learning, ensuring deterministic behavior and bounded operations. 
This represents a significant advancement over previous AI approaches that couldn't guarantee safe operation in the high-stakes environment of space.3. Vision for Space-Based Manufacturing and Resource Utilization: The long-term vision extends beyond debris removal to creating orbital recycling facilities that can break down captured satellites and rebuild them into new spacecraft using existing materials in orbit. Additionally, the company plans to harvest propellant from lunar ice, splitting it into hydrogen and oxygen for rocket fuel, which could kickstart a lunar economy by providing economic incentives for moon-based operations while supporting the growing satellite constellation infrastructure.4. Unique Space Technology Development Through Student Programs: Borger and his co-founder gained unprecedented experience by launching six AI-controlled robotic arms to space through NASA's student rocket programs while still undergraduates. These missions involved throwing and catching objects in microgravity using deep reinforcement learning trained in simulation and tested on Earth. This hands-on space experience is extremely rare and gave them practical knowledge that informed their current commercial venture.5. Hardware Challenges Require Innovative Engineering Solutions: Space presents unique technical challenges including radiation-induced single event upsets that can reset processors for up to 10 seconds, requiring "passive safe" trajectories that won't cause collisions even during system resets. Unlike traditional space companies that spend $100,000 on radiation-hardened processors, Orbital Robotics uses automotive-grade components made radiation-tolerant through smart software and electrical design, enabling cost-effective operations while maintaining safety.6. Space Manufacturing Supply Chain Constraints: The space industry faces significant manufacturing bottlenecks with 24-week lead times for space-grade components and limited suppliers serving multiple companies simultaneously. This creates challenges for scaling production - Orbital Robotics needs to manufacture 30 robotic arms per year within a few years. They've partnered with manufacturers who previously worked on Blue Origin's rocket engines to address these supply chain limitations and achieve the scale necessary for their ambitious deployment timeline.7. Emerging Space Economy Beyond Communications: While current commercial space activities focus primarily on communications satellites (with SpaceX Starlink holding 60% market share) and Earth observation, new sectors are emerging including AI data centers in space and orbital manufacturing. The convergence of AI, robotics, and space technology is enabling more sophisticated autonomous operations, from predictive maintenance of rocket engines using sensor data to complex orbital maneuvering and satellite servicing that was previously impossible with traditional control methods.
In this episode, Stewart Alsop sits down with Joe Wilkinson of Artisan Growth Strategies to talk through how vibe coding is changing who gets to build software, why functional programming and immutability may be better suited for AI-written code, and how tools like LLMs are reshaping learning, work, and curiosity itself. The conversation ranges from Joe’s experience living in China and his perspective on Chinese AI labs like DeepSeek, Kimi, Minimax, and GLM, to mesh networks, Raspberry Pi–powered infrastructure, decentralization, and what sovereignty might mean in a world where intelligence is increasingly distributed. They also explore hallucinations, AlphaGo’s Move 37, and why creative “wrongness” may be essential for real breakthroughs, along with the tension between centralized power and open access to advanced technology. You can find more about Joe’s work at https://artisangrowthstrategies.com and follow him on X at https://x.com/artisangrowth.Check out this GPT we trained on the conversationTimestamps00:00 – Vibe coding as a new learning unlock, China experience, information overload, and AI-powered ingestion systems05:00 – Learning to code late, Exercism, syntax friction, AI as a real-time coding partner10:00 – Functional programming, Elixir, immutability, and why AI struggles with mutable state15:00 – Coding metaphors, “spooky action at a distance,” and making software AI-readable20:00 – Raspberry Pi, personal servers, mesh networks, and peer-to-peer infrastructure25:00 – Curiosity as activation energy, tech literacy gaps, and AI-enabled problem solving30:00 – Knowledge work superpowers, decentralization, and small groups reshaping systems35:00 – Open source vs open weights, Chinese AI labs, data ingestion, and competitive dynamics40:00 – Power, safety, and why broad access to AI beats centralized control45:00 – Hallucinations, AlphaGo’s Move 37, creativity, and logical consistency in AI50:00 – Provenance, epistemology, ontologies, and risks of closed-loop science55:00 – Centralization vs decentralization, sovereign countries, and post-global-order shifts01:00:00 – U.S.–China dynamics, war skepticism, pragmatism, and cautious optimism about the futureKey InsightsVibe coding fundamentally lowers the barrier to entry for technical creation by shifting the focus from syntax mastery to intent, structure, and iteration. Instead of learning code the traditional way and hitting constant friction, AI lets people learn by doing, correcting mistakes in real time, and gradually building mental models of how systems work, which changes who gets to participate in software creation.Functional programming and immutability may be better aligned with AI-written code than object-oriented paradigms because they reduce hidden state and unintended side effects. By making data flows explicit and preventing “spooky action at a distance,” immutable systems are easier for both humans and AI to reason about, debug, and extend, especially as code becomes increasingly machine-authored.AI is compressing the entire learning stack, from software to physical reality, enabling people to move fluidly between abstract knowledge and hands-on problem solving. Whether fixing hardware, setting up servers, or understanding networks, the combination of curiosity and AI assistance turns complex systems into navigable terrain rather than expert-only domains.Decentralized infrastructure like mesh networks and personal servers becomes viable when cognitive overhead drops. 
What once required extreme dedication or specialist knowledge can now be done by small groups, meaning that relatively few motivated individuals can meaningfully change communication, resilience, and local autonomy without waiting for institutions to act.Chinese AI labs are likely underestimated because they operate with different constraints, incentives, and cultural inputs. Their openness to alternative training methods, massive data ingestion, and open-weight strategies creates competitive pressure that limits monopolistic control by Western labs and gives users real leverage through choice.Hallucinations and “mistakes” are not purely failures but potential sources of creative breakthroughs, similar to AlphaGo’s Move 37. If AI systems are overly constrained to consensus truth or authority-approved outputs, they risk losing the capacity for novel insight, suggesting that future progress depends on balancing correctness with exploratory freedom.The next phase of decentralization may begin with sovereign countries before sovereign individuals, as AI enables smaller nations to reason from first principles in areas like medicine, regulation, and science. Rather than a collapse into chaos, this points toward a more pluralistic world where power, knowledge, and decision-making are distributed across many competing systems instead of centralized authorities.
In this episode of the Crazy Wisdom podcast, host Stewart Alsop talks with Umair Siddiqui about a wide range of interconnected topics spanning plasma physics, aerospace engineering, fusion research, and the philosophy of building complex systems, drawing on Umair’s path from hands-on plasma experiments and nonlinear physics to founding and scaling RF plasma thrusters for small satellites at Phase Four. Along the way they discuss how plasmas behave at material boundaries, why theory often breaks in real-world systems, how autonomous spacecraft propulsion actually works, what space radiation does to electronics and biology, the practical limits and promise of AI in scientific discovery, and why starting with simple, analog approaches before adding automation is critical in both research and manufacturing, grounding big ideas in concrete engineering experience. You can find Umair on LinkedIn.Check out this GPT we trained on the conversationTimestamps00:00 Opening context and plasma rockets, early interests in space, cars, airplanes 05:00 Academic path into space plasmas, mechanical engineering, and hands-on experiments 10:00 Grad school focus on plasma physics, RF helicon sources, and nonlinear theory limits 15:00 Bridging fusion research and space propulsion, Department of Energy funding context 20:00 Spin-out to Phase Four, building CubeSat RF plasma thrusters and real hardware 25:00 Autonomous propulsion systems, embedded controllers, and spacecraft fault handling 30:00 Radiation in space, single-event upsets, redundancy vs rad-hard electronics 35:00 Analog-first philosophy, mechanical thinking, and resisting premature automation 40:00 AI in science, low vs high hanging fruit, automation of experiments and insight 45:00 Manufacturing philosophy, incremental scaling, lessons from Elon Musk and production 50:00 Science vs engineering, concentration of effort, power, and progress in discoveryKey InsightsOne of the central insights of the episode is that plasma physics sits at the intersection of many domains—fusion energy, space environments, and spacecraft propulsion—and progress often comes from working directly at those boundaries. Umair Siddiqui emphasizes that studying how plasmas interact with materials and magnetic fields revealed where theory breaks down, not because the math is sloppy, but because plasmas are deeply nonlinear systems where small changes can produce outsized effects.The conversation highlights how hands-on experimentation is essential to real understanding. Building RF plasma sources, diagnostics, and thrusters forced constant confrontation with reality, showing that models are only approximations. This experimental grounding allowed insights from fusion research to transfer unexpectedly into practical aerospace applications like CubeSat propulsion, bridging fields that rarely talk to each other.A key takeaway is the difference between science and engineering as intent, not method. Science aims to understand, while engineering aims to make something work, but in practice they blur. Developing space hardware required scientific discovery along the way, demonstrating that companies can and often must do real science to achieve ambitious engineering goals.Umair articulates a strong philosophy of analog-first thinking, arguing that keeping systems simple and mechanical for as long as possible preserves clarity. 
Premature digitization or automation can obscure understanding, consume mental bandwidth, and even lock in errors before the system is well understood.The episode offers a grounded view of automation and AI in science, framing it in terms of low- versus high-hanging fruit. AI excels at exploring large parameter spaces and finding optima, but humans are still needed to judge physical plausibility, interpret results, and set meaningful directions.Space engineering reveals harsh realities about radiation, cosmic rays, and electronics, where a single particle can flip a bit or destroy a transistor. This drives design trade-offs between radiation-hardened components and redundant systems, reinforcing how environment fundamentally shapes engineering decisions.Finally, the discussion suggests that scientific and technological progress accelerates with concentrated focus and resources. Whether through governments, institutions, or individuals, periods of rapid advancement tend to follow moments where attention, capital, and intent are sharply aligned rather than diffusely spread.
In this episode of Crazy Wisdom, Stewart Alsop sits down with Javier Villar for a wide-ranging conversation on Argentina, Spain’s political drift, fiat money, the psychology of crowds, Dr. Hawkins’ levels of consciousness, the role of elites and intelligence agencies, spiritual warfare, and whether modern technology accelerates human freedom or deepens control. Javier speaks candidly about symbolism, the erosion of sovereignty, the pandemic as a global turning point, and how spiritual frameworks help make sense of political theater.Check out this GPT we trained on the conversationTimestamps00:00 Stewart and Javier compare Argentina and Spain, touching on cultural similarity, Argentinization, socialism, and the slow collapse of fiat systems.05:00 They explore Brave New World conditioning, narrative control, traditional Catholics, and the psychology of obedience in the pandemic.10:00 Discussion shifts to Milei, political theater, BlackRock, Vanguard, mega-corporations, and the illusion of national sovereignty under a single world system.15:00 Stewart and Javier examine China, communism, spiritual structures, karmic cycles, Kali Yuga, and the idea of governments at war with their own people.20:00 They move into Revelations, Hawkins, calibrations, conspiracy labels, satanic vs luciferic energy, and elites using prophecy as a script.25:00 Conversation deepens into ego vs Satan, entrapment networks, Epstein Island, Crowley, Masonic symbolism, and spiritual corruption.30:00 They question secularism, the state as religion, technology, AI, surveillance, freedom of currency, and the creative potential suppressed by government.35:00 Ending with Bitcoin, stablecoins, network-state ideas, U.S. power, Argentina’s contradictions, and whether optimism is still warranted.Key InsightsArgentina and Spain mirror each other’s decline. Javier argues that despite surface differences, both countries share cultural instincts that make them vulnerable to the same political traps—particularly the expansion of the welfare state, the erosion of sovereignty, and what he calls the “Argentinization” of Spain. This framing turns the episode into a study of how nations repeat each other’s mistakes.Fiat systems create a controlled collapse rather than a dramatic one. Instead of Weimar-style hyperinflation, Javier claims modern monetary structures are engineered to “boil the frog,” preserving the illusion of stability while deepening dependency on the state. This slow-motion decline is portrayed as intentional rather than accidental.Political leaders are actors within a single global architecture of power. Whether discussing Milei, Trump, or European politics, Javier maintains that governments answer to mega-corporations and intelligence networks, not citizens. National politics, in this view, is theater masking a unified global managerial order.Pandemic behavior revealed mass submission to narrative control. Stewart and Javier revisit 2020 as a psychological milestone, arguing that obedience to lockdowns and mandates exposed a widespread inability to question authority. For Javier, this moment clarified who can perceive truth and who collapses under social pressure.Hawkins’ map of consciousness shapes their interpretation of good and evil. They use the 200 threshold to distinguish animal from angelic behavior, exploring whether ego itself is the “Satanic” force. 
Javier suggests Hawkins avoided explicit talk of Satan because most people cannot face metaphysical truth without defensiveness.Elites rely on symbolic power, secrecy, and coercion. References to Epstein Island, Masonic symbolism, and intelligence-agency entrapment support Javier’s view that modern control systems operate through sexual blackmail, ritual imagery, and hidden hierarchies rather than democratic mechanisms.Technology’s promise is strangled by state power. While Stewart sees potential in AI, crypto, and network-state ideas, Javier insists innovation is meaningless without freedom of currency, association, and exchange. Technology is neutral, he argues, but becomes a tool of surveillance and control when monopolized by governments.
In this episode of Crazy Wisdom, I—Stewart Alsop—sit down with Garrett Dailey to explore a wide-ranging conversation that moves from the mechanics of persuasion and why the best pitches work by attraction rather than pressure, to the nature of AI as a pattern tool rather than a mind, to power cycles, meaning-making, and the fracturing of modern culture. Garrett draws on philosophy, psychology, strategy, and his own background in storytelling to unpack ideas around narrative collapse, the chaos–order split in human cognition, the risk of “AI one-shotting,” and how political and technological incentives shape the world we're living through. You can find the tweet Stewart mentions in this episode here. Also, follow Garrett Dailey on Twitter at @GarrettCDailey, or find more of his pitch-related work on LinkedIn.Check out this GPT we trained on the conversationTimestamps00:00 Garrett opens with persuasion by attraction, storytelling, and why pitches fail with force. 05:00 We explore gravity as metaphor, the opposite of force, and the “ring effect” of a compelling idea. 10:00 AI as tool not mind; creativity, pattern prediction, hype cycles, and valuation delusions. 15:00 Limits of LLMs, slopification, recursive language drift, and cultural mimicry. 20:00 One-shotting, psychosis risk, validation-seeking, consciousness vs prediction. 25:00 Order mind vs chaos mind, solipsism, autism–schizophrenia mapping, epistemology. 30:00 Meaning, presence, Zen, cultural fragmentation, shared models breaking down. 35:00 U.S. regional culture, impossibility of national unity, incentives shaping politics. 40:00 Fragmentation vs reconciliation, markets, narratives, multipolarity, Dune archetypes. 45:00 Patchwork age, decentralization myths, political fracturing, libertarian limits. 50:00 Power as zero-sum, tech-right emergence, incentives, Vance, Yarvin, empire vs republic. 55:00 Cycles of power, kyklos, democracy’s decay, design-by-committee, institutional failure.Key InsightsPersuasion works best through attraction, not pressure. Garrett explains that effective pitching isn’t about forcing someone to believe you—it’s about creating a narrative gravity so strong that people move toward the idea on their own. This reframes persuasion from objection-handling into desire-shaping, a shift that echoes through sales, storytelling, and leadership.AI is powerful precisely because it’s not a mind. Garrett rejects the “machine consciousness” framing and instead treats AI as a pattern amplifier—extraordinarily capable when used as a tool, but fundamentally limited in generating novel knowledge. The danger arises when humans project consciousness onto it and let it validate their insecurities.Recursive language drift is reshaping human communication. As people unconsciously mimic LLM-style phrasing, AI-generated patterns feed back into training data, accelerating a cultural “slopification.” This becomes a self-reinforcing loop where originality erodes, and the machine’s voice slowly colonizes the human one.The human psyche operates as a tension between order mind and chaos mind. Garrett’s framework maps autism and schizophrenia as pathological extremes of this duality, showing how prediction and perception interact inside consciousness—and why AI, which only simulates chaos-mind prediction, can never fully replicate human knowing.Meaning arises from presence, not abstraction. 
Instead of obsessing over politics, geopolitics, or distant hypotheticals, Garrett argues for a Zen-like orientation: do what you're doing, avoid what you're not doing. Meaning doesn’t live in narratives about the future—it lives in the task at hand.Power follows predictable cycles—and America is deep in one. Borrowing from the Greek kyklos, Garrett frames the U.S. as moving from aristocracy toward democracy’s late-stage dysfunction: populism, fragmentation, and institutional decay. The question ahead is whether we’re heading toward empire or collapse.Decentralization is entropy, not salvation. Crypto dreams of DAOs and patchwork societies ignore the gravitational pull of power. Systems fragment as they weaken, but eventually a new center of order emerges. The real contest isn’t decentralization vs. centralization—it’s who will have the coherence and narrative strength to recentralize the pieces.
In this episode of Crazy Wisdom, Stewart Alsop talks with Aaron Lowry about the shifting landscape of attention, technology, and meaning—moving through themes like treasure-hunt metaphors for human cognition, relevance realization, the evolution of observational tools, decentralization, blockchain architectures such as Cardano, sovereignty in computation, the tension between scarcity and abundance, bioelectric patterning inspired by Michael Levin’s research, and the broader cultural and theological currents shaping how we interpret reality. You can follow Aaron’s work and ongoing reflections on X at aaron_lowry.Check out this GPT we trained on the conversationTimestamps00:00:00 Stewart and Aaron open with the treasure-hunt metaphor, salience landscapes, and how curiosity shapes perception. 00:05:00 They explore shifting observational tools, Hubble vs James Webb, and how data reframes what we think is real. 00:10:00 The conversation moves to relevance realization, missing “Easter eggs,” and the posture of openness. 00:15:00 Stewart reflects on AI, productivity, and feeling pulled deeper into computers instead of freed from them. 00:20:00 Aaron connects this to monetary policy, scarcity, and technological pressure. 00:25:00 They examine voice interfaces, edge computing, and trust vs convenience. 00:30:00 Stewart shares experiments with Raspberry Pi, self-hosting, and escaping SaaS dependence. 00:35:00 They discuss open-source, China’s strategy, and the economics of free models. 00:40:00 Aaron describes building hardware–software systems and sensor-driven projects. 00:45:00 They turn to blockchain, UTXO vs account-based, node sovereignty, and Cardano. 00:50:00 Discussion of decentralized governance, incentives, and transparency. 00:55:00 Geopolitics enters: BRICS, dollar reserve, private credit, and institutional fragility. 01:00:00 They reflect on the meaning crisis, gnosticism, reductionism, and shattered cohesion. 01:05:00 Michael Levin, bioelectric patterning, and vertical causation open new biological and theological frames. 01:10:00 They explore consciousness as fundamental, Stephen Wolfram, and the limits of engineered solutions. 01:15:00 Closing thoughts on good-faith orientation, societal transformation, and the pull toward wilderness.Key InsightsCuriosity restructures perception. Aaron frames reality as something we navigate more like a treasure hunt than a fixed map. Our “salience landscape” determines what we notice, and curiosity—not rigid frameworks—keeps us open to signals we would otherwise miss. This openness becomes a kind of existential skill, especially in a world where data rarely aligns cleanly with our expectations.Our tools reshape our worldview. Each technological leap—from Hubble to James Webb—doesn’t just increase resolution; it changes what we believe is possible. Old models fail to integrate new observations, revealing how deeply our understanding depends on the precision and scope of our instruments.Technology increases pressure rather than reducing it. Even as AI boosts productivity, Stewart notices it pulling him deeper into computers. Aaron argues this is systemic: productivity gains don’t free us; they raise expectations, driven by monetary policy and a scarcity-based economic frame.Digital sovereignty is becoming essential. The conversation highlights the tension between convenience and vulnerability. 
Cloud-based AI creates exposure vectors into personal life, while running local hardware—Raspberry Pis, custom Linux systems—restores autonomy but requires effort and skill.Blockchain architecture determines decentralization. Aaron emphasizes the distinction between UTXO and account-based systems, arguing that UTXO architectures (Bitcoin, Cardano) support verifiable edge participation, while account-based chains accumulate unwieldy state and centralize validation over time.Institutional trust is eroding globally. From BRICS currency moves to private credit schemes, both note how geopolitical maneuvers signal institutional fragility. The “few men in a room” dynamic persists, but now under greater stress, driving more people toward decentralization and self-reliance.Biology may operate on deeper principles than genes. Michael Levin’s work on bioelectric patterning opens the door to “vertical causation”—higher-level goals shaping lower-level processes. This challenges reductionism and hints at a worldview where consciousness, meaning, and biological organization may be intertwined in ways neither materialism nor traditional theology fully capture.
In this conversation, Stewart Alsop sits down with Ken Lowry to explore a wide sweep of themes running through Christianity, Protestant vs. Catholic vs. Orthodox traditions, the nature of spirits and telos, theosis and enlightenment, information technology, identity, privacy, sexuality, the New Age “Rainbow Bridge,” paganism, Buddhism, Vedanta, and the unfolding meaning crisis; listeners who want to follow more of Ken’s work can find him on his YouTube channel Climbing Mount Sophia and on Twitter under KenLowry8.Check out this GPT we trained on the conversationTimestamps00:00 Christianity’s tangled history surfaces as Stewart Alsop and Ken Lowry unpack Luther, indulgences, mediation, and the printing-press information shift.05:00 Luther’s encounters with the devil lead into talk of perception, hallucination, and spiritual influence on “main-character” lives.10:00 Protestant vs. Catholic vs. Orthodox worship styles highlight telos, Eucharist, liturgy, embodiment, and teaching as information.15:00 The Church as a living spirit emerges, tied to hierarchy, purpose, and Michael Levin’s bioelectric patterns shaping form.20:00 Spirits, goals, Dodgers-as-spirit, and Christ as the highest ordering spirit frame meaning and participation.25:00 Identity, self, soul, privacy, intimacy, and the internet’s collapse of boundaries reshape inner life.30:00 New Age, Rainbow Bridge, Hawkins’ calibration, truth-testing, and spiritual discernment enter the story.35:00 Stewart’s path back to Christianity opens discussion of enlightenment, Protestant legalism, Orthodox theosis, and healing.40:00 Emptiness, relationality, Trinity, and personhood bridge Buddhism and mystical Christianity.45:00 Suffering, desire, higher spirits, and orientation toward the real sharpen the contrast between simulation and reality.50:00 Technology, bodies, AI, and simulated worlds raise questions of telos, meaning, and modern escape.55:00 Neo-paganism, Hindu hierarchy of gods, Vedanta, and the need for a personal God lead toward Jesus as historical revelation.01:00:00 Buddha, enlightenment, theosis, the post-1945 world, Hitler as negative pole, and goodness as purpose close the inquiry.Key InsightsMediation and information shape the Church. Ken Lowry highlights how the printing press didn’t just spread ideas—it restructured Christian life by shifting mediation. Once information became accessible, individuals became the “interface” with Christ, fundamentally changing Protestant, Catholic, and Orthodox trajectories and the modern crisis of religious choice.The Protestant–Catholic–Orthodox split hinges on telos. Protestantism orients the service around teaching and information, while Catholic and Orthodox traditions culminate in the Eucharist, embodiment, and liturgy. This difference expresses two visions of what humans are doing in church: receiving ideas or participating in a transformative ritual that shapes the whole person.Spirits, telos, and hierarchy offer a map of reality. Ken frames spirits as real intelligible goals that pull people into coordinated action—seen as clearly in a baseball team as in a nation. Christ is the highest spirit because aiming toward Him properly orders all lower goals, giving a coherent vertical structure to meaning.Identity, privacy, and intimacy have transformed under the internet. The shift from soul → self → identity tracks changes in information technology. 
The internet collapses boundaries, creating unprecedented exposure while weakening the inherent privacy of intimate realities such as genuine lovemaking, which Ken argues can’t be made public without destroying its nature.New Age influences and Hawkins’ calibration reflect a search for truth. Stewart’s encounters with the Rainbow Bridge world, David Hawkins’ muscle-testing epistemology, and the escape from scientistic secularism reveal a cultural hunger for spiritual discernment in the absence of shared metaphysical grounding.Enlightenment and theosis may be the same mountain. Ken suggests that Buddhist enlightenment and Orthodox theosis aim at the same transformative reality: full communion with what is most real. The difference lies in Jesus as the concrete, personal revelation of God, offering a relational path rather than pure negation or emptiness.Secularism is shaped by powerfully negative telos. Ken argues that the modern world orients itself not toward the Good revealed in Christ but away from the Evil revealed in Hitler. Moving away from evil as a primary aim produces confusion, because only a positive vision of the Good can order desires, technology, suffering, and the overwhelming power of modern simulations.