The Eighth

Author: Avraham Raskin


Description

A future-facing podcast that explores how augmented reality will show up in our everyday lives, off-screen and all around us. Each episode features everyday people (not tech insiders) grappling with how this new layer of reality could reshape the way we shop, move, learn, or just get through the day. These conversations take abstract ideas and ground them in the practical, turning sci-fi into Saturday morning.
70 Episodes
I realised recently that the time I was spending on sharing my ideas was strangling the ideas themselves. In this BrainStream, I open up about why I’m automating my entire production pipeline—from transcripts to clips—so I can keep sharing. I think we’re entering an era where ‘janking’ together AI tools like Claude Code and Remotion isn’t just a hack; it’s the only way to keep up with the ROI of our own thoughts. I wonder if the future of the UI isn’t just talking to our computers, but having them understand the visual and emotional context of our work before we even ask.

Timestamps:
(0:00) The Guatemala Vision Pro Backlog
(2:15) When the ROI of Sharing Doesn’t Make Sense
(4:10) From 2017 Manual Edits to AI Pipelines
(6:20) Automating the ‘Boring Stuff’: Clips & Captions
(8:10) The Post-Siri World: OpenClaw & Agentic Filesystems

Listen On:
If you enjoyed this episode, you can listen to more of my brainstreams on Spotify, YouTube, or Apple Podcasts.
Check out my website for more insights: https://avrahamrask.in/podcast
We’re finally breaking free from the 1980s desktop paradigm and moving into a world where creation isn’t tied to a workstation. I’m diving into how I finally cracked the code on local automation for the BrainStream, bringing us closer to a future of conversational agents that actually understand and build alongside us. From my early work on the Whisper project to the current shift in spatial computing, we’re exploring how we can stop being tethered to a desk and start leveraging high-fidelity AI collaboration wherever we go.

Timestamps:
[0:00] From GUI to Conversational OS
[1:36] The Whisper Vision: Untethered AI
[3:12] The High Cost of Manual Friction
[4:30] Custom Pipelines Over SaaS Bloat
[6:20] Local Video Clipping with Whisper JSON
[8:10] Iterative Media & Generative Assets

Listen On:
If you enjoyed this episode, you can listen to more of my brainstreams on Spotify, YouTube, or Apple Podcasts.
Check out my website for more insights: https://avrahamrask.in/podcast
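For readers curious what “local video clipping with Whisper JSON” can look like in practice, here is a minimal sketch, not the episode’s actual pipeline. It assumes the episode was transcribed with OpenAI’s open-source Whisper CLI (e.g. `whisper episode.mp4 --output_format json`), which emits a `segments` array of start/end times, and that ffmpeg is on the PATH; the filenames and search keyword are hypothetical.

```ts
// Sketch: cut a shareable clip using a Whisper JSON transcript.
// Assumes `whisper episode.mp4 --output_format json` already ran
// and ffmpeg is installed. Filenames and keyword are illustrative.
import {execFileSync} from 'node:child_process';
import {readFileSync} from 'node:fs';

type Segment = {start: number; end: number; text: string};

// Whisper's JSON output has a top-level `segments` array with
// start/end times in seconds and the transcribed text.
const {segments} = JSON.parse(
  readFileSync('episode.json', 'utf8'),
) as {segments: Segment[]};

// Pick the first segment that mentions the topic we want to clip.
const hit = segments.find((s) => /remotion/i.test(s.text));

if (hit) {
  // Stream-copy the clip without re-encoding. Fast, but cuts land
  // on keyframes, so boundaries are approximate.
  execFileSync('ffmpeg', [
    '-ss', String(hit.start),
    '-i', 'episode.mp4',
    '-t', String(hit.end - hit.start),
    '-c', 'copy',
    'clip.mp4',
  ]);
}
```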
The velocity of innovation right now is insane—we’re seeing a year’s worth of progress every week—and the only way to keep up is to automate the friction out of our systems. I’ve finally dialed in a pipeline that handles the heavy lifting of post-production and distribution with a single click, freeing me up to dive deeper into the high-stakes security projects I’ve been teasing. This episode is a direct look at how I’m using AI to reclaim my time, moving away from manual content management and back toward pure insight and execution. If we’re going to build the future, we can’t afford to be slowed down by the logistics of the present.

TIMESTAMPS:
0:00 | Navigating the AI Acceleration Curve
1:45 | Why Content Friction Stalls Big Ideas
3:20 | The Utility of the Spoken Word
5:10 | Solving Local vs. Cloud Friction
6:55 | Video as Code: The Remotion Shift
8:30 | Closing the Agent-Driven Loop

LISTEN ON:
Spotify: https://open.spotify.com/show/0gyQydQ9h5PM2gGUWy1PfC
YouTube: https://youtube.com/playlist?list=PLifoYer83q8ACtYWCtjhoOHGmfslydLM7&si=yKhmWU6J9t6j9USF
Apple Podcasts: https://podcasts.apple.com/us/podcast/id1726877985
Website: https://avrahamrask.in/podcast
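To make “video as code” concrete: Remotion (which the episode names) renders video frames from React components. Below is a minimal sketch of a fade-in caption overlay using Remotion’s standard API (`useCurrentFrame`, `interpolate`); the component and its props are illustrative assumptions, not the author’s actual templates.

```tsx
// Sketch: a Remotion caption that fades in — "video as code".
// useCurrentFrame/interpolate/AbsoluteFill are Remotion's standard
// API; this particular component is an illustrative stand-in.
import React from 'react';
import {AbsoluteFill, interpolate, useCurrentFrame} from 'remotion';

export const Caption: React.FC<{text: string}> = ({text}) => {
  const frame = useCurrentFrame();
  // Fade the caption in over the first 15 frames, then hold.
  const opacity = interpolate(frame, [0, 15], [0, 1], {
    extrapolateRight: 'clamp',
  });
  return (
    <AbsoluteFill style={{justifyContent: 'flex-end', alignItems: 'center'}}>
      <h1 style={{opacity, color: 'white', fontFamily: 'sans-serif'}}>
        {text}
      </h1>
    </AbsoluteFill>
  );
};
```

Because the video is just a component, the same agent that cuts clips can also re-render captions, titles, or branding programmatically—one pipeline, no manual editor.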
In this year-end Brainstream, Avraham revisits a 2018–2019 video exploring how augmented reality and AI could transform everyday problem solving. Using the example of fixing a broken fridge, he unpacks multimodal AI—vision, sound, and conversation working together—and reflects on how concepts from seven years ago are becoming increasingly practical today. This episode also considers the evolving relationship between humans and AI in collaborative problem solving.

TL;DR:
A seven-year-old video predicting AI-assisted, AR-driven home repairs highlights how multimodal AI—integrating vision, sound, and conversation—can empower humans to solve problems collaboratively, without experts or cloud dependency.

🎧 Listen on Spotify | YouTube | Apple Podcasts
Listen to the full episode and explore more Brainstreams → https://avrahamraskin.com/podcast
A quiet rumour about OpenAI tapping into Apple Health sparks a deeper question: what would a truly private, on-device health intelligence actually look like?

Listen on Spotify, YouTube, Apple Podcasts
More episodes → https://avrahamraskin.com/podcast

Timestamps:
00:00 | The Rumour About OpenAI And Apple Health
01:06 | Why Access To Raw Health Data Actually Matters
01:45 | The Privacy Problem With Cloud-Based LLMs
02:26 | The Case For A True On-Device Reasoning Model
03:12 | Hardware Limits And Why Models Must Shrink
04:26 | What A Smarter Siri Should Really Be
05:14 | Health Metrics, Trends, And Real Coaching
06:24 | The Missing Link: Nutrition Data
07:12 | Rumours Of Apple HealthPlus
08:02 | What Real AI Coaching Should Feel Like
09:07 | Integrating Sleep, Training, Journaling, And Goals
09:48 | The Vision: A Private, Personal, On-Device Coach
In this BrainStream, I explore why we’re on the edge of a generational shift in how properties are secured. Not just sensors. Not just cameras. A true, always-awake operator built from VLMs, re-identification, and on-prem AI. I look at Ubiquiti’s position, emerging players, and why the “virtual guard” model is no longer sci-fi but an inevitable next step.

TL;DR:
Security is moving from alarms to always-awake, property-aware AI operators that verify, learn, and predict.

🎧 Listen Now on Spotify, YouTube, or Apple Podcasts
→ More at: https://avrahamraskin.com/podcast

Timestamps:
00:00 | Why I Had to Record One More
00:21 | The Ubiquiti/UniFi Fascination
01:31 | The “Ultimate Operator” Concept
02:20 | Parameters → Description: A Real Turning Point
03:21 | Teaching an AI the Property (The Guard Analogy)
04:41 | From Virtual Guards to Site-Aware Intelligence
05:22 | Alarm Monitoring: The Missing Link
06:13 | Camera Verification and VLM Reasoning
08:48 | Prediction Instead of Reaction
09:51 | Closing the Loop: Where This Is Heading

#Ubiquiti #UniFi #VideoOperator #VLM #ReID #AlarmVerification #PredictiveSecurity #OnPremAI #ComputerVision #Security #Future
A concise, investigative tour of how security video evolved from passive CCTV to intelligent, searchable footage powered by local AI and video-language models. I map the technical lineage—motion sensing, smart detections, face and plate ID, the “AI Key,” and scene-level VLM search—and explain why pattern discovery at scale is the next operational leap for site security and investigations.

“Video that used to be passive now becomes a searchable narrative.”

🎧 Listen on Spotify, YouTube, Apple Podcasts
🔗 More episodes → https://avrahamraskin.com/podcast

TL;DR:
Security cameras have graduated from passive recorders to active, searchable sensors. Video-language models (VLMs) and local LLM-like agents enable natural-language scene search and condensed pattern visualisations—powerful for investigations, but constrained today by compute and edge deployment. The next frontier is real-time, site-wide pattern detection running at the edge.

Timestamps:
00:00 | Introduction and context
00:23 | The evolution: CCTV → motion → smart detections
01:56 | Face detection, license plates, and granular ID
02:19 | The “AI Key”: local LLM-style analytics (what it adds)
03:35 | Video-language models: frame description → search
05:03 | Practical investigative tools and scene search examples
06:03 | Pattern discovery: BriefCam and condensed timelines
07:40 | Limitations today: compute, edge, and the next step
09:56 | Closing thoughts and what’s next
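To make “frame description → search” concrete: the idea is that a VLM captions frames, and those captions become a searchable index. A deliberately tiny sketch follows—hypothetical captions, with naive term-overlap scoring standing in for the vector embeddings a real system would use.

```ts
// Sketch: natural-language search over VLM frame captions.
// The captions and scoring are illustrative only; production
// systems would embed captions and query into vectors.
type FrameCaption = {timeSec: number; caption: string};

const index: FrameCaption[] = [
  {timeSec: 12, caption: 'a person in a red jacket walks past the loading dock'},
  {timeSec: 340, caption: 'a white van parks near the side entrance'},
  {timeSec: 905, caption: 'two people carry boxes toward a white van'},
];

function search(query: string, k = 2): FrameCaption[] {
  const terms = query.toLowerCase().split(/\s+/);
  return [...index]
    .map((f) => ({
      f,
      // Score = how many query terms appear in the caption.
      score: terms.filter((t) => f.caption.includes(t)).length,
    }))
    .filter((x) => x.score > 0)
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map((x) => x.f);
}

// "Show me the white van near the entrance" → timestamps to review.
console.log(search('white van near entrance'));
```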
Most security systems still behave like they did 20 years ago—reactive, limited, and blind to the context hidden inside their own recordings. In this Brainstream, we explore why the real frontier in security isn’t better alerts or higher-resolution cameras, but AI systems that can learn a site’s patterns, behaviours, anomalies, and risks over months of recorded footage. This episode outlines the shift from “review after the incident” to “predict before it happens,” and why the intelligence trapped inside our footage is the most valuable, unused asset in modern security.

TL;DR:
Security cameras shouldn’t just replay the past—they should understand it. When indexed, analysed, and contextualised, months of footage can power predictive, site-specific intelligence far beyond traditional monitoring.

🎧 Listen on Spotify, YouTube, Apple Podcasts
🔗 More episodes → https://avrahamraskin.com/podcast

Timestamps:
00:00 | Opening: Why talk about the future of security
00:05 | Why this topic needs multiple videos
00:08 | A new product direction after years in the field
00:19 | The core problem: cameras are reactive
00:26 | Footage as an investigative tool, not a live one
00:34 | Tools like BriefCam and condensed investigations
00:51 | The inevitability of deep pattern analysis
01:17 | Rethinking what recorded footage really contains
01:26 | On-site storage vs cloud motion clips
01:48 | Why modern systems rarely store “everything”
02:14 | The hidden value inside long-term footage
02:27 | Thought experiment: downloading 6 months of footage into a guard
03:06 | Scale: 25–100 cameras, months of data
03:25 | What context a human misses vs what the data contains
03:58 | Reviewing footage: hours, days, weeks
04:25 | Pattern detection after the fact
04:54 | The industry’s stuck in reactive mode
05:02 | Moving from reactive to predictive
05:17 | Connecting dots before the incident
05:24 | Trends, anomalies, and site-specific patterns
05:34 | What good security guards actually do
06:00 | Knowing who belongs and who doesn’t
06:13 | Cameras should be able to learn the same
06:22 | Context → patterns → prediction
06:34 | Generations of camera evolution
07:00 | Smart detections: person, car, face, plate
07:14 | More granular detection: clothing, colours, models
07:36 | Natural-language retrieval: next-generation search
07:56 | But still mostly reactive
08:03 | True intelligence: learning the site itself
08:12 | Threat assessment powered by context
08:26 | The massive, untapped value in indexed footage
08:51 | Behaviour understanding vs object detection
09:04 | AI as a security operator/assistant
09:14 | Cameras becoming proactive
09:20 | Future episodes: alarms, sensors, monitoring
09:33 | Industry progress & uneven advancement
09:42 | Why pattern understanding changes everything
09:57 | Closing: A new era is coming
What if you could circle the entire Old City of Jerusalem - comfortably, accessibly, and beautifully? In this episode of Brainstreams, we explore a future-facing vision for a dedicated light rail line that loops around the Old City walls. From smart routing and grade calculations to archaeological sensitivity and pedestrian flow, this Brainstream dives deep into a generative design sprint born from both data and lived experience.

TL;DR:
A Jerusalem city loop light rail around all seven gates could transform access, beauty, and spiritual flow - this is the design pitch.

We discuss multi-line integration (Red, Brown, Yellow), how to preserve archaeology while opening new paths, grading challenges, and why a humble footpath + rail combo might be the most transformative layer Jerusalem’s had in decades.

🎧 Listen now on Spotify, YouTube, or Apple Podcasts
→ More at avrahamraskin.com/podcast

Timestamps:
00:00 | Intro: What is the Old City Loop?
00:43 | Green space, walking path, and rail: the triple-ring vision
01:58 | The gaps in Jerusalem’s current light rail access
03:06 | Seven gates, one loop: walking through the plan
05:01 | Grades, curvature, and optimal timing
06:13 | Integrating with existing and future lines (Red, Yellow, Brown)
07:30 | One-way loop + second-direction bus support
09:23 | Accessibility at Shaar Tzion & other grading concerns
10:00 | Park & Ride potential near Damascus and Dung Gates
12:04 | New Gate tradeoffs & archaeological bridge ideas
13:06 | Solving steep terrain with design sensitivity
14:39 | Closing thoughts: Could this really happen?
What if kosher products simply revealed themselves—no guessing, no flipping, no frantic Googling? In this Brainstream, we explore a futuristic vision that’s already partially built: an augmented-reality-based kosher shopping experience. Born from the frustration of finding kosher food in Australia, the idea reimagines the process from the ground up—no lists, no scanning, just visual certainty.

TL;DR:
Imagine pointing your phone at a supermarket shelf and instantly seeing which products are kosher—powered by AR, crowdsourcing, and smart databases. This Brainstream lays out that vision.

We also dive into the pain of local-only kosher databases, the promise of contributor modes, and how we can use computer vision, gamification, and community to create the ultimate global kosher assistant.

🎧 Listen now on Spotify, YouTube, or Apple Podcasts
→ More at avrahamraskin.com/podcast

Timestamps:
00:00 — The AR Kosher Shopping Vision
00:49 — Growing Up Without Easy Kosher Access
02:00 — Kosher Australia App: Helpful But Limited
03:04 — The Chocolate Shelf Incident (and UX friction)
04:11 — Barcode Scanning: A Step Forward, Still Painful
05:25 — Every Country Has Its Own App (if you’re lucky)
06:03 — Why Kosher Data Shouldn’t Be Gatekept
07:00 — Introducing Contributor Mode & Crowdsourced Verification
08:13 — How OCR + Computer Vision Enable Instant Recognition
09:23 — Game Mechanics for Certifying Products
10:22 — Verification Badges & Authority Endorsements
12:01 — The Pay-It-Forward Incentive Loop for Travellers
13:03 — Making Kashrut Fun, Seamless, and Fluid
14:01 — The Auto-Scan Dream and Proximity-Based Info
Apple’s WWDC recap wasn’t just about shiny new models or flashy demos — it revealed a vision of computing where intelligence lives closer to you than ever before. In this Brainstream, I explore how the shift to lightweight, on-device AI isn’t just a technical update — it’s a philosophical shift. We touch on Apple’s missed shortcuts, the layers of Layer[1], and how user experience needs to evolve into something fluid, not fragmented.

TL;DR:
This episode unpacks Apple’s approach to foundation models, live transcription, node-based automation, and the conceptual gaps between their vision and where it could be going — especially in relation to LayerOS.

Let’s talk interface, operating systems, and why the real future is about how intelligence flows across your tools.

Listen to Layer[1] on: Website, Spotify, YouTube, Apple Podcasts

Timestamps:
00:00 – Quick intro: revisiting last year’s predictions
00:12 – Apple’s new live FaceTime translation
00:34 – Transcription gaps & WhatsApp voice-to-text issues
01:54 – What SiriGPT should’ve been
02:20 – Layered apps vs. intelligent flow
03:22 – Widgets, CarPlay & Apple’s game focus
04:04 – Apple’s foundation model + Sam Altman’s logic
05:13 – Shortcut builder dreams vs. Apple’s reality
07:03 – Building layered automation with AI & prompts
08:55 – Smarter Maps: learning user behaviour
09:43 – A wish for transcription + text-to-speech everywhere
10:00 – Wrap-up & future thoughts
What happens when websites evolve into worlds?

In this Brainstream, we explore Apple’s visionOS updates and what they mean for the future of digital interfaces. From spatial browsing to the beginnings of a 3D retail internet, Apple’s slow but deliberate steps hint at a massive shift. I unpack spatial scenes, shared experiences, and how these early tools are shaping tomorrow’s web.

We also look at how your living room might become a showroom, how eye tracking and intent will change UX, and why Gaussian Splatting will play a major role in immersive AR. There’s plenty to critique — and even more to imagine.

TL;DR:
Apple’s visionOS continues to push towards a spatial future. It’s not full AR yet, but the shift to immersive interfaces is undeniable. Spatial browsing, personalised retail, and shared digital experiences are just the beginning.

Timestamps:
00:00 - visionOS and Spatial Computing Overview
00:22 - Input and Control: Eye Tracking, Gestures, and Controllers
01:16 - Spatial Scenes and Apple’s 3D Browsing Effects
02:13 - Gaussian Splatting & Scanning with iPhone LiDAR
03:14 - Apple Personas, Overlays & Presenter UX
04:02 - Eye Tracking, Gesture Control & Intent
04:59 - Widgets as Portals & Spatial Weather Concepts
06:02 - Enterprise Use Cases & Shared Digital Experiences
06:48 - Safari’s Spatial Browsing and the Future of Retail UX
08:28 - Tailored, Transformative Websites
09:40 - Wrap-up: visionOS Today and Tomorrow
The OS of the future won’t rely on mouse clicks and app folders — it will listen, respond, and build with you. In this Brainstream, I explore the next interface evolution: intelligent agents layered over your data, shortcut automation powered by natural language, and why Apple Mail still feels like it’s stuck in 2009.

We dive into macOS and tvOS updates, talk frustrations with email, and uncover where Apple’s getting it right — and where it still falls short.

TL;DR:
Apple’s current updates feel like polishing old tools. We need conversational OS layers, shortcut agents, and a voice-powered rethink of creation — from building slides to managing email.

🎧 Listen to Layer[1]: Website, Spotify, YouTube, Apple Podcasts

Timestamps:
00:00 tvOS redesign & lyric sync wish list
01:36 First thoughts on macOS: design tweaks and frustrations with Mail
02:25 Why Mail needs to evolve into an intelligent assistant
03:24 Tools used to improve communication (spreadsheet, Freeform, etc.)
04:44 Spotlight & app drawer comparisons
05:05 Clipboard history finally arrives
05:35 Vision for OS: executive assistant model
06:04 Shortcut “micro-prompt” ideas and automation possibilities
07:08 Email needs a rework: current apps aren’t cutting it
08:04 Journal Live Actions & iPhone mirroring feedback
08:58 Thoughts on intelligent shortcut actions & Spotlight triggers
09:32 Outro
Apple’s WWDC just dropped a lot of new features—but did they miss the forest for the trees? In this episode, we dive into what was announced, what’s still missing, and where Apple’s headed when it comes to automation, personalised AI, and layered UI. From the unrealised promise of natural-language Shortcuts to the potential of a true AI-powered workout coach, this Brainstream breaks it down.

TL;DR:
Shortcuts still aren’t natural language; watchOS introduces a coach concept, but it’s far from personal; and Apple’s layering between basic and complex apps still needs an overhaul.

👉 Listen to this Brainstream and discover what Apple could do next if they really embraced personalisation and agentic AI.

🎧 Listen to Layer[1] on Website, Spotify, YouTube, Apple Podcasts

Timestamps:
00:00 Missed Stage Manager and Shortcut frustration
00:29 Siri Shortcut limitations & missed opportunities
01:20 Why natural-language Shortcuts matter
02:09 The case for agentic Siri + LLM sidebar
03:10 watchOS hands-free navigation
03:38 Smart layering between Notes and Pages
04:13 Workout Buddy: potential vs reality
05:14 Personalised coaching dream with AI
06:08 Tabata playlists, real-time motivation, and coaching tone
08:00 What could make the “Workout Buddy” truly smart
09:11 Beyond metrics: building a real AI coach
Apple’s WWDC didn’t just show off a new iOS—it hinted at an entirely new direction for how we’ll interact with our devices. From foundation models and voice transcription to a potential LayerOS-style rethink of app flow, this Brainstream dives deep into what’s here, what’s missing, and where Apple could go next.

I reflect on past predictions, critique what Apple got right and wrong, and imagine a more fluid, AI-native interface that understands your workflows natively.

TL;DR:
Apple’s AI features are evolving, but the future is LayerOS—modular, intelligent, and seamless.

Listen to Layer[1] on Website, Spotify, YouTube, Apple Podcasts

Timestamps:
00:00 – Revisiting last year’s predictions
00:34 – Voice transcription still lacks accuracy
01:46 – Natural-language shortcut building is still a dream
02:56 – LayerOS-style unified app flows
03:22 – Apple Silicon, Metal, and gaming push
04:00 – Foundation model access and the “small model” future
05:36 – Shortcuts, node-based automation, and infinite canvases
08:55 – AI-enhanced routing and live translation
09:36 – On-device transcription and APIs
Apple’s WWDC left subtle clues pointing to a major shift: the interface is no longer just flat and functional — it’s becoming physical, emotive, and alive.

From Liquid Glass UI metaphors to camera-based visual intelligence, this Brainstream unpacks what Apple didn’t explicitly say — and what it all means for the future of interaction, accessibility, and space-aware design.

Whether it’s how screenshots might evolve into something more dynamic, or how adaptive widgets in the sky point toward contextual computing, this is a brainstorm that blends critique with wonder.

TL;DR:
Apple’s visual choices hint at a paradigm shift toward physical, expressive, and spatial UIs. Liquid Glass and screenshot-based interaction are more than design fluff — they’re the front line of new spatial thinking.

👇 Tap into this Brainstream and get ready to reimagine interface design.

Listen to Layer[1] on 🌐 Website 🎧 Spotify 📺 YouTube 🍎 Apple Podcasts

Timestamps:
00:00 – Opening thoughts & FaceTime presenter overlay gripes
01:30 – Liquid Glass: physics, aesthetics & UI evolution
03:00 – Screenshots, refractivity & accessibility trade-offs
05:00 – Why Liquid Glass is more than a gimmick
06:00 – Visual intelligence and lock screen widgets
08:00 – Frustrations with MixEmoji and Apple’s image gen gap
09:00 – Final thoughts on vision and what’s missing
We’re two days post-WWDC, and my brain’s still buzzing. This isn’t a keynote recap — it’s a deep dive into why Apple’s “Liquid Glass” aesthetic signals something bigger. Think beyond the frosted UI — think spatial interfaces, contextual computing, and the death of the traditional app.

What if the era of “learning the tool” is over, and we’re entering a time when tools learn us?

I unpack Apple’s slow-burn philosophy, my long-held vision for a post-app world, and why these new design cues are shaping more than just your phone screen — they’re nudging the whole paradigm of computing forward.

👉 If you’re interested in how AI agents, spatial design, and user-centric computing collide — this one’s for you.

TL;DR (Takeaways):
- Apple’s UI shift hints at a post-app world, not just visual polish.
- Liquid Glass = prepping for spatial computing, not skeuomorphic nostalgia.
- Traditional “tools” are giving way to agents that understand and work for us.
- This isn’t just design — it’s the architecture of a new computing mindset.

Timestamps:
00:00 – WWDC: Expectations vs Reality
01:22 – TL;DR reaction to Apple’s approach
02:26 – Why we didn’t get flashy AI features
03:21 – Thoughts on Liquid Glass and accessibility issues
04:24 – The move toward spatial interface norms
06:03 – Rethinking screen limitations in the real world
06:44 – Introducing the post-app world concept
07:49 – Agents vs. tools: A paradigm shift
08:36 – The new master-slave dynamic in computing
09:28 – Liquid Glass: Sci-fi gimmick or visionary leap?
10:00 – Wrap-up & coming brainstorm

Listen to Layer[1] → Website, Spotify, YouTube and Apple Podcasts
We’re reaching the limits of app-based creation—and there’s a new vision in town: LayerOS. What if your tools weren’t chosen before you started working, but emerged during the process, adapting as you moved?

TL;DR:
- Today’s apps are fragmented silos; LayerOS proposes a fluid, evolving interface.
- Tools should appear as you work—not force you to choose them upfront.
- Image and AI workflows are currently too separate; integration is overdue.
- Multimodal creation and predictive interfaces are the next UI frontier.
- LayerOS could function like an LLM OS—predicting interface needs instead of next words.

In this Brainstream, I go deep into the fractured nature of today’s apps and why we need a unified, predictive interface model—where your idea grows and the interface evolves with it. Think of it like an operating system powered by AI prediction, surfacing tools layer by layer. I also touch on image generation, multimodal workspaces, and how we might finally break free from app silos.

Ready to move beyond apps and into layers? Let’s build the future.

🎧 Listen to more Brainstreams of Layer[1] now 👇
Website, Spotify, YouTube, Apple Podcasts

Timestamps:
00:00 – Picking up where we left off
00:20 – Apple’s presenter mode frustration
01:02 – Why apps feel like the wrong tools
02:00 – Sketch vs Photoshop: same base, different goals
03:50 – The friction of switching tools
04:39 – The promise of LayerOS: tool-on-demand
05:33 – Why apps don’t speak well to each other
06:00 – AI nodes, canvases, and image editing gaps
07:03 – Why LayerOS belongs at the OS level
08:00 – My learning process: sharing the build live
08:34 – Multimodal work & future automation
09:10 – How layers could replace apps completely
09:35 – Why choosing the right app is broken
09:55 – The OS as an LLM that predicts interfaces

#LayerOS #FutureOfUI #AugmentedReality #AIInterface #RealityOne #Brainstream #DesignThinking #CreativeTools #SpatialComputing #NoMoreApps
There’s something deeply wrong with how we “work” on computers. Why do we still sit at desks designed for typewriters?

TL;DR:
- Today’s UI is still based on 1980s office metaphors.
- “Work” no longer happens at desks — ideas strike in hammocks, cars, and showers.
- LayerOS = infinite canvas, generative UI, conversational AI, asynchronous agents.
- Paradigm shift: from sandboxed apps to intelligent, context-aware layers.
- This isn’t about software — it’s about rethinking the foundation of computing.

In this raw Brainstream, I unpack the limitations of today’s UI and introduce LayerOS — a reimagined operating system where context flows, interfaces evolve, and work is ambient, not static. From infinite canvases to asynchronous assistants and generative UIs, this isn’t a product pitch — it’s a shift in the mental model of computing itself.

🚀 If you’ve ever wanted your OS to think with you instead of slowing you down, this one’s for you.

👉 Listen in and tell me what you think — where is work heading in your world?

Timestamps:
00:00 – The flaws in today’s UI paradigm
00:21 – Why the desktop metaphor no longer makes sense
00:48 – What is work now? Where is it happening?
01:40 – Computers should serve us — not vice versa
02:33 – Creativity doesn’t happen at the desk
03:44 – Asynchronous work and background processing
04:29 – Intro to LayerOS and its core ingredients
06:05 – Why these ideas keep recurring in Brainstreams
07:25 – Bringing theoretical frameworks into design
08:16 – The app model is outdated
09:33 – Documents, tools, and workflow fluidity
09:56 – Final thoughts on an iterative interface future

🌐 Listen to layer[1] on Website, Spotify, YouTube, Apple Podcasts

#LayerOS #FutureOfWork #GenerativeUI #Brainstream #ConversationalAI #InfiniteCanvas #InterfaceDesign #AIUX #SpatialComputing
What if your computer understood your workflow—before you even told it what to do?

We’re moving into a world where AI doesn’t just automate tasks—it thinks like you do. Instead of manually processing workflows, imagine an operating system that learns, adapts, and automates without effort.

In this Brainstream, we explore:
• 🖥️ Why today’s OS models are outdated
• 🔄 How AI-driven workflows eliminate manual work
• 🎨 The future of creation: no more friction, just intent-based computing

AI isn’t just a tool anymore—it’s a partner in creation. Let’s talk about how we get there. 🎙️👇

TL;DR:
• AI is shifting from automation to true workflow intelligence.
• The future OS won’t just react—it will predict and assist.
• AI will eliminate repetitive tasks, freeing time for deep work.
• Everything will be seamless: no manual processing, just AI-driven intent.
• AI will personalise computing for every user, like a digital assistant.

Timestamps:
00:00 Why every brainstorm sparks more ideas
00:18 The meta-purpose of Brainstreams
00:52 How ideas evolve over time
01:26 The future of AI-powered workflows
02:21 Why AI should automate everything
03:35 The washing machine analogy: AI as the next tool revolution
04:15 How AI will eliminate tedious tasks
05:13 The democratisation of automation
06:41 Future interfaces: shifting from “what” to “how”
08:22 The need for AI-driven creative workflows
09:39 Building the next-gen AI-powered OS

Listen on: Website, Spotify, YouTube, Apple Podcasts

#AI #Automation #FutureOfWork #AIRevolution #Productivity #RealityOne