Discover"News From The Future" with Dr Catherine Ball
"News From The Future" with Dr Catherine Ball
Claim Ownership

"News From The Future" with Dr Catherine Ball

Author: The Future Is Already Here.... Meet The Humans At The Cutting Edge


Description

Converging and emerging technologies from today, tomorrow, and next year. Educate and entertain yourself with Dr Cath's optimistic and curious nature as we peek over the horizon.

drcatherineball.substack.com
27 Episodes
Podcast Transcript: Welcome to News From The Future Special Editions with Dr Cath, working hard at the CES in Vegas. This podcast is produced using the AI voice clone of Cath by Eleven Labs. Cath was so happy to be in the audience today when Abbie, the Aussie robot, was shown on stage in the AgeTech section of the massive trade show. Here is a summary of what was discussed.

Abbie is an innovative companion robot created by Andromeda Robotics, conceived by founder Grace Brown, a mechatronics student in Australia who was feeling lonely in her dorm room during the pandemic. This experience led her to research loneliness, particularly among elderly populations, which became the driving force behind Abbie's development. The robot represents a creative solution to address what health experts, including the U.S. Surgeon General, have identified as a critical health issue - loneliness, which can be as damaging as smoking 15 cigarettes daily.

The robot serves as an emotional companion, particularly in senior living facilities where residents often face long periods of isolation despite being in a communal setting. Abbie can speak over 90 languages, enabling meaningful connections with residents who may have lost their ability to communicate in their second language due to cognitive decline. A powerful example shared was of a resident who could only speak Mandarin - Abbie became his conversation partner, leading to him sharing Chinese poetry and drawing other curious residents to observe their interactions. This unexpected outcome addressed not just linguistic isolation but also created new social connections among residents.

Abbie's design is intentionally approachable and child-sized, featuring colorful components and expressive eyes that invite engagement. The robot's appearance evolved partly by chance - during initial development, Grace had access to various colored materials for 3D printing, resulting in a vibrant, multi-colored design that proved highly effective at engaging residents. The robot can both participate in group activities - leading music sessions, dancing, and blowing bubbles - and engage in one-on-one conversations. During group sessions, Abbie has been known to spark impromptu dance parties, with residents and staff joining in the festivities.

A key feature of Abbie's technology is its memory capability. The robot maintains detailed records of previous interactions, remembering personal details about residents to create more meaningful ongoing relationships. This can be achieved either through facial recognition technology or through staff input via an accompanying app. This memory function allows Abbie to maintain conversation continuity and show genuine interest in residents' stories, even when they're repeated multiple times - something that can be challenging for human caregivers managing multiple residents.

The robot operates on a subscription model, currently costing around US $5,000-6,000 per month per unit, making it more practical for institutional settings where multiple residents can benefit. While primarily focused on aged care facilities now, Andromeda has broader ambitions for future applications, including potential use in hospitals and private homes. The company has already received inquiries about personal use, particularly from families interested in providing companionship for children.

A next-generation version called Gabby is already being deployed in some facilities.
Slightly taller than Abbie but still child-sized, Gabby incorporates additional sensors and enhanced capabilities aimed at enabling more autonomous operation within care facilities. These improvements allow Gabby to navigate facilities more independently and potentially make autonomous visits to residents' rooms when directed by staff.

The impact of these companion robots extends beyond simple entertainment or basic interaction. Staff members have reported unexpected benefits, such as learning new approaches to difficult conversations with residents. In one notable case, staff adopted Abbie's method of discussing sensitive topics, like the passing of family members, with residents experiencing memory loss, finding the robot's approach more effective than their previous methods.

The technology has shown particular promise in addressing various forms of isolation - physical, mental, and linguistic. Statistics indicate that approximately 40% of nursing home residents rarely receive visitors, with many receiving none at all. Abbie helps fill this gap, providing consistent companionship and engagement during the many hours when structured activities aren't taking place.

Currently headquartered in San Francisco for their U.S. operations, Andromeda faces high demand, with a growing waitlist for their robots. The company is taking a measured approach to expansion, learning from their current deployments while working toward making the technology more accessible for individual home use in the future. Their ambitious goal is to replace a billion hours of loneliness with companionship, recognizing that while human interaction is ideal, the demographics of an aging society make additional support tools necessary.

The development process for Abbie has been collaborative, with the company working closely with care facilities to refine and improve the technology. Unlike traditional deep-tech development, which often involves years of research and development before market entry, Andromeda has chosen to build alongside their customers, incorporating real-world feedback into their iterations. This approach, while sometimes challenging, has allowed them to create solutions that directly address the needs of both residents and care staff.

Looking ahead, Andromeda envisions expanding Abbie's capabilities and accessibility while maintaining focus on emotional connection rather than task-based assistance. The company emphasizes that Abbie is not designed to replace human caregivers or handle medical tasks, but rather to complement existing care by providing additional emotional support and companionship during times when human interaction might be limited.

Please share this with someone who likes robots, works in aged care or healthcare, or who wants to get involved with emerging technologies. Thank you. Thanks for reading "News From The Future" with Dr Catherine Ball! This post is public so feel free to share it. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit drcatherineball.substack.com/subscribe
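For readers curious how a memory feature like the one described above might be structured, here is a minimal, purely illustrative Python sketch. The class names, fields, resident name, and lookup flow are assumptions made up for this example, not Andromeda Robotics' actual software; it simply shows how a per-resident profile, keyed by a facial-recognition match or a staff-entered ID, could let a companion robot pick up a conversation where it left off.

```python
# Illustrative sketch only: these names and fields are assumptions,
# not Andromeda Robotics' actual implementation.
from dataclasses import dataclass, field


@dataclass
class ResidentProfile:
    name: str
    preferred_language: str  # e.g. "Mandarin", so the robot can switch languages
    remembered_details: list[str] = field(default_factory=list)  # stories, family names, interests


class CompanionMemory:
    """Per-resident notes so each visit can pick up where the last one ended."""

    def __init__(self) -> None:
        # Keyed by whatever identifies the resident: a facial-recognition match
        # or an ID entered by staff through an accompanying app.
        self._profiles: dict[str, ResidentProfile] = {}

    def register(self, resident_id: str, profile: ResidentProfile) -> None:
        self._profiles[resident_id] = profile

    def recall(self, resident_id: str) -> ResidentProfile | None:
        return self._profiles.get(resident_id)

    def remember(self, resident_id: str, detail: str) -> None:
        profile = self._profiles.get(resident_id)
        if profile is not None:
            profile.remembered_details.append(detail)


# Usage: staff register a Mandarin-speaking resident (name invented for the
# example); a later session recalls his love of poetry to open the conversation.
memory = CompanionMemory()
memory.register("resident-042", ResidentProfile("Mr Chen", "Mandarin"))
memory.remember("resident-042", "shares classical Chinese poetry")
print(memory.recall("resident-042"))
```

The design choice here is the same one the episode highlights: the memory lives with the resident's identifier rather than with any single session, so continuity survives between visits and across staff shifts.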
Hello and welcome to News From The Future, where Dr Cath is running around the CES in Las Vegas and dropping the news as she goes. I am her voice clone, created by Eleven Labs; thanks for listening.

Here is the big one: the presentation by Jensen Huang, the CEO and founder of NVIDIA. Take notes...

The computer industry is experiencing an unprecedented transformation, with two major platform shifts occurring simultaneously: the rise of artificial intelligence and the evolution of accelerated computing. This marks a departure from historical patterns where platform shifts happened sequentially, roughly once per decade, such as the transitions from mainframe to personal computers and then to the internet era. These transitions have historically reshaped how we interact with technology, but the current dual shift represents a fundamental reimagining of computing itself.

What makes this current transformation particularly remarkable is its comprehensive nature. The entire computing stack is undergoing reinvention, fundamentally changing how software is created and executed. Instead of traditional programming methods, software is increasingly being trained through AI systems. Applications are no longer simply precompiled but are generated contextually, responding to specific needs and circumstances. This shift has triggered a massive reallocation of resources, with trillions of dollars being channeled into AI development and infrastructure, representing one of the largest technological investments in history.

The evolution of large language models (LLMs) represents a crucial milestone in this transformation. The introduction of models like BERT and ChatGPT has demonstrated the powerful capabilities of AI in understanding and generating human-like text. These models have revolutionized natural language processing, enabling computers to understand context, nuance, and complex linguistic patterns in ways that were previously impossible. Perhaps even more significant is the emergence of agentic systems – AI that can reason independently and interact with various tools and environments. This development has opened new possibilities for AI applications across numerous sectors, from healthcare to finance to environmental protection.

The democratization of AI technology has been greatly facilitated by the advancement of open models. These accessible frameworks have enabled global innovation, allowing developers and organizations worldwide to build upon existing AI capabilities and create new applications. This openness has accelerated the pace of AI development and fostered a more inclusive technological ecosystem. The availability of open models has particularly benefited smaller organizations and developing nations, providing them with access to sophisticated AI tools that would otherwise be beyond their reach.

NVIDIA's contribution to this transformation is particularly noteworthy through their development of AI supercomputers, especially the DGX Cloud. This platform represents a significant step forward in providing the computational power necessary for advanced AI development. The DGX Cloud combines cutting-edge hardware with sophisticated software frameworks, enabling researchers and developers to train and deploy complex AI models more efficiently than ever before.
NVIDIA has demonstrated its commitment to the open model approach by building systems and libraries that support broad AI development efforts, fostering collaboration and innovation across the industry.

The applications of these technological advances extend far beyond traditional computing domains. In digital biology, AI is being used to understand complex biological systems and accelerate drug discovery, potentially revolutionizing how we develop new treatments for diseases. Weather prediction has become more accurate and detailed through AI-powered modeling, enabling better preparation for extreme weather events and improved climate change analysis. The integration of AI into robotics has created new possibilities for automation and physical world interaction, with a particular emphasis on understanding and applying physical laws to improve AI applications.

A significant milestone in this journey is the introduction of the Vera Rubin supercomputer. This system represents the next generation of AI computing architecture, designed to meet the escalating demands of artificial intelligence applications. The Vera Rubin system incorporates innovative chip designs and networking technology that enable high-speed data transfer and processing, essential for handling the increasingly complex requirements of AI computation. Its architecture has been specifically optimized for AI workloads, representing a departure from traditional supercomputer designs.

The networking capabilities of modern AI systems are particularly crucial. High-speed data transfer and processing are fundamental to the performance of AI applications, and innovations in networking technology have made it possible to handle the massive data flows required for advanced AI operations. These networks must maintain extremely low latency while managing enormous amounts of data, requiring sophisticated engineering solutions and new approaches to data center design. This infrastructure supports the development of more sophisticated AI applications that can process and analyze data at unprecedented speeds.

The impact of these developments extends across industries, creating new opportunities and transforming existing business models. AI applications are becoming more capable of complex reasoning, learning from experience, and interacting with the physical world in meaningful ways. This evolution is not just about improving computational efficiency; it's about enabling entirely new categories of applications and solutions that were previously impossible or impractical to implement.

The role of companies like NVIDIA in this transformation goes beyond hardware provision. Their comprehensive approach encompasses the entire AI ecosystem, from developing sophisticated hardware architectures to creating software frameworks and supporting application development. This holistic strategy is essential for advancing the field of AI and ensuring that the technology can be effectively deployed across different sectors. The integration of hardware and software development has become increasingly important as AI systems become more complex and demanding.

The future of AI and computing appears to be moving toward increasingly sophisticated systems that can handle complex reasoning tasks while maintaining efficient interaction with the physical world. This evolution suggests a future where AI systems will become more integrated into our daily lives, supporting decision-making processes and enabling new forms of human-machine collaboration.
The development of these systems requires careful consideration of both technical capabilities and ethical implications.

The emphasis on physical world understanding in AI development is particularly significant. As AI systems become more advanced, their ability to comprehend and interact with the physical environment becomes increasingly important. This understanding is crucial for applications in robotics, autonomous systems, and other fields where AI must interface with the real world. The development of AI systems that can effectively operate in physical environments requires sophisticated sensors, advanced algorithms, and robust safety mechanisms.

The investment in AI infrastructure and development represents a significant bet on the future of computing. The trillions of dollars being redirected toward AI development indicate the industry's confidence in this technology's potential to transform how we interact with computers and how computers interact with the world. This investment is funding not only hardware and software development but also research into new AI architectures and applications.

The transformation of the computing industry through AI and accelerated computing is creating new possibilities for solving complex problems and enabling innovations that were previously impossible. These advances are particularly important in fields such as scientific research, where AI can help process and analyze vast amounts of data, leading to new discoveries and insights. The combination of AI and accelerated computing is opening new frontiers in business operations and everyday applications, suggesting that we are at the beginning of a new era in computing history.

The impact of these technological advances extends to environmental sustainability and resource management. AI systems are being used to optimize energy consumption in data centers, improve renewable energy integration, and develop more efficient transportation systems. These applications demonstrate how AI can contribute to addressing global challenges while driving technological innovation.

The development of AI systems also raises important considerations about data privacy, security, and ethical use of technology. As these systems become more powerful and widespread, ensuring their responsible development and deployment becomes increasingly critical. The industry's focus on open models and collaborative development helps ensure transparency and accountability in AI development.

The convergence of AI and accelerated computing represents a pivotal moment in technological history, comparable to the introduction of personal computers or the rise of the internet. This transformation is reshaping not only how we develop and use technology but also how we approach problem-solving across all sectors of society. As these technologies continue to evolve, their impact on our world is likely to become even more profound and far-reaching.

WOW, just a start then... I will be unpacking Jensen's presentation for the next few weeks. Thanks for listening, and please share with anyone you know who cares about AI and the future. Thanks for reading "News From The Future" with Dr Catherine Ball!
Hello there and welcome to the continuing special edition podcasts from the CES in Vegas. I am the voice clone of Dr Cath; thanks for joining me. Just before the big NVIDIA announcements from Jensen Huang there were some panels. Here is the second one, and it is with the CEO of Mercedes-Benz, no less. Enjoy, and you might need to take notes.

The intersection of autonomous driving and robotics technology is experiencing a transformative period, as highlighted in a recent discussion between Mercedes-Benz CEO Ola and Skilled AI's Deepak. Their conversation revealed both the remarkable progress and significant challenges facing these interconnected fields.

Mercedes-Benz's journey in autonomous driving spans four decades, beginning with their pioneering "Prometheus" project in the 1980s. This long-term commitment has culminated in their current Level 3 autonomous system, which represents more than just technological advancement – it marks a fundamental shift in responsibility from human to machine. This transition carries profound legal and liability implications, as the computer system, not the driver, becomes legally responsible when autonomous features are engaged.

The immediate future of autonomous driving, according to Mercedes, centers on their "Level 2++" technology. This system delivers point-to-point navigation capabilities that Ola describes as making the vehicle feel like it's "on rails." The technology has been successfully demonstrated in challenging environments, including San Francisco's complex urban traffic patterns and freeway systems. This represents a strategic stepping stone toward full Level 3 and 4 autonomy, allowing for real-world deployment while more advanced systems continue development.

A critical insight emerged regarding the "99% problem" in autonomous development. While achieving 99% functionality in controlled conditions is relatively straightforward, the remaining 1% – comprising rare edge cases and unexpected scenarios – presents the most formidable challenge. This final percentage requires extensive safety engineering, massive data collection efforts, and sophisticated decision-making algorithms capable of handling unprecedented situations.

Mercedes-Benz emphasizes a comprehensive approach to autonomous system development, focusing equally on hardware and software components. Their strategy mirrors aviation industry standards, where redundancy is non-negotiable. This philosophy becomes particularly complex when scaling across different vehicle platforms, as each model requires unique sensor configurations and specialized AI model adaptations. The challenge intensifies when considering the need to maintain this redundancy while meeting commercial cost targets and managing platform proliferation.

In the robotics domain, Skilled AI presented an ambitious vision for a universal robotic "brain" – an AI system capable of controlling various robot types, from humanoid machines to industrial arms and autonomous mobile robots. This approach challenges traditional robotics programming paradigms by suggesting that a single, general-purpose AI system could learn from and adapt to different robotic platforms and tasks.
The potential advantage of this approach lies in creating a data flywheel effect, where learning from diverse robot experiences contributes to overall system improvement.

The discussion delved deep into the ongoing debate about robotics data sources, examining three primary approaches: world-model/video pretraining, sim-to-real/reinforcement learning, and direct robot data collection. Deepak argued that unlike language models, which benefit from vast internet-scale training data, robotics faces unique challenges in data acquisition. He emphasized that merely observing tasks (like watching videos) isn't sufficient for skill development, proposing instead a hybrid approach combining human demonstration videos, simulation training, and real-world task-specific data collection.

Manufacturing automation emerged as a particularly promising application area. Ola suggested that AI-driven robotics could deliver the most significant productivity improvements factory operations have seen in a century. Rather than pursuing full automation, the vision focuses on collaborative "robot buddies" working alongside human workers. This approach includes leveraging digital twin technology, such as NVIDIA's Omniverse, to simulate and optimize production processes before physical implementation, potentially reducing costs and improving quality control.

Several significant tensions emerged during the discussion. While optimism exists about achieving Level 4/5 autonomy, practical challenges around safety validation and regulatory compliance could extend development timelines. The balance between implementing robust sensor redundancy and maintaining commercial viability remains a point of contention. Questions persist about the most effective approach to robotics data acquisition and training methodologies.

The workforce impact of increased automation presents another area of tension. While the speakers emphasized human-robot collaboration and productivity enhancement, concerns about potential job displacement remain. The "robot buddy" concept attempts to address these concerns by positioning automation as augmentation rather than replacement, though questions about long-term workforce implications persist.

The discussion highlighted a fundamental challenge in both autonomous driving and robotics development: balancing market pressure for rapid deployment against the need for robust, safe systems. As Ola emphasized, there are "no shortcuts" in developing these technologies, yet competitive pressures often push for faster deployment schedules.

This conversation raises crucial questions about the role of accelerated computing in autonomy, strategies for cost-effective redundancy, approaches to handling edge cases, simulation-to-reality transfer, and the practical benefits of digital twin technology. These topics represent key areas where further development and discussion are needed to advance both autonomous driving and robotics technologies. The intersection of these challenges with commercial viability, regulatory compliance, and workforce implications will likely shape the development trajectory of these technologies in the coming years. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit drcatherineball.substack.com/subscribe
Welcome to News From The Future Special Editions with Dr Cath, dialling in from the CES in Las Vegas... and today was all about Jensen Huang and the big NVIDIA announcement. But before that we had two panels chatting away, and so here, summarised, are some of the main points from those chats. Enjoy, and you may want to take notes... there is a lot!

The AI infrastructure landscape is experiencing unprecedented growth, with approximately $800 billion invested over the past three years and projections of $600 billion more by 2026. While media headlines frequently question whether this represents a bubble, industry experts argue this cycle is fundamentally different from previous tech booms for several key reasons. The seamless adoption of tools like ChatGPT, reaching billions of users instantly, combined with consistently high utilization rates and cash flow-funded expansion, suggests a more sustainable foundation than previous tech cycles.

Unlike the dotcom era's "dark fiber," today's AI infrastructure shows consistently high utilization rates. Even older GPU hardware remains fully employed, processing various workloads from traditional computing tasks to smaller AI models. This high utilization, combined with well-financed buyers funding expansion through cash flow rather than speculation, suggests a more sustainable growth pattern. The industry emphasizes watching utilization as a leading indicator, rather than focusing on abstract return on investment calculations.

Snowflake CEO Sridhar Ramaswamy provides compelling evidence of AI's real-world value, particularly in high-wage workflows. When AI tools enhance the productivity of well-paid professionals like developers or analysts, the return on investment becomes readily apparent. Snowflake's implementation of data agents, allowing executives to quickly access customer insights from their phones, demonstrates how AI can deliver immediate value in enterprise settings. The company's AI products, including Snowflake Intelligence, run on NVIDIA chips, highlighting deep collaboration between infrastructure providers and application developers.

Enterprise adoption faces several practical challenges beyond mere interest or budget constraints. Data governance and sovereignty emerge as critical concerns, with companies increasingly sensitive about where their data is processed and stored. This has led to interesting dynamics where local GPU availability becomes a negotiating point – for instance, when German workloads might need to be processed in Swedish facilities. Change management presents another significant hurdle, as organizations struggle to drive user adoption of new AI workflows. However, widespread consumer experience with AI technologies through smartphones and laptops is making enterprise adoption easier for companies that execute well.

The global infrastructure buildout is increasingly viewed as a feature rather than just capacity expansion. As geopolitical tensions rise, the ability to process data within specific regions becomes a competitive advantage. This has spurred infrastructure development across the Middle East and Asia, creating a more distributed computing landscape that better serves local sovereignty requirements and regulatory compliance needs.

In the ongoing debate between open and closed AI models, a nuanced picture emerges.
While frontier models from leading companies maintain significant advantages in specific use cases like coding and tool-agent loops, open models are gaining importance for large-scale applications. The open-source ecosystem's ability to attract developers and drive innovation mirrors historical patterns in data center development. This dynamic is particularly important when considering massive-scale deployments where cost and customization flexibility become critical factors.

Sector-specific adoption shows interesting patterns. Financial services, particularly asset managers with fewer regulatory constraints than traditional banks, are leading the charge. Healthcare emerges as a surprising second frontier, with doctors increasingly turning to AI to address overwhelming documentation requirements. Unlike previous technology waves, enterprise-specific AI applications are developing in parallel with consumer tools, rather than lagging behind. This represents a significant shift from the Google Search era, where enterprise search solutions never gained the same traction as consumer offerings.

The concept of "dark data" – unutilized information assets within enterprises – represents a significant opportunity. Companies like Snowflake emphasize the importance of making this data accessible while maintaining strict governance controls. A practical example involves decades of contracts stored in SharePoint systems, currently requiring manual searching but prime for AI-enabled retrieval and analysis. The challenge lies in creating drag-and-drop usability while ensuring unauthorized access doesn't create regulatory compliance issues.

Vertical-specific implementations reveal how AI adaptation varies by industry. In healthcare, companies like Abridge focus on integrating AI into existing workflows, aiming to reverse the current reality where doctors spend 80% of their time on clerical work and only 20% with patients. Their approach emphasizes fitting AI into existing processes rather than forcing workflow changes, while balancing privacy, security, and latency requirements. They utilize techniques like distillation, fine-tuning, and learning from clinician edits at scale to improve their systems.

In software development, CodeRabbit positions itself as a trust layer between coding agents and production systems, highlighting how AI is changing the nature of software development rather than replacing developers. They argue that as code generation improves, review and intent specification become the primary bottlenecks. The platform suggests that AI is lowering barriers to entry in software development while questioning whether it truly transforms highly skilled developers into substantially more productive ones.

The current state of AI infrastructure investment is frequently compared to early stages of previous platform shifts, such as the iPhone or PC eras. Mark Lipacis argues we're in "early innings," where investment must precede currently unknown workloads – though unlike previous cycles, current infrastructure already shows high utilization. This perspective suggests that current investment levels, despite their scale, may be justified by future applications and use cases that haven't yet emerged.

Several tensions remain unresolved in the industry. The durability of current utilization rates faces questioning, particularly whether they represent a temporary land-grab or sustainable demand.
Agent reliability remains a challenge, especially for long-running or background tasks, with most successful implementations requiring human oversight. The sustainability of open-source model development, given high training costs, remains uncertain despite recent progress. The debate between centralized efficiency and data sovereignty requirements continues to shape infrastructure deployment decisions.

The impact on workforce dynamics presents another area of debate. While some fear job displacement, evidence from the software development sector suggests AI is lowering barriers to entry and enabling more people to participate in technical fields. The panel concludes optimistically, suggesting that software creation will expand beyond traditional engineering roles, with examples of children using coding agents to build applications indicating a more democratized future for software development. This democratization of technology creation could fundamentally reshape how software is developed and who participates in its creation.

This podcast was produced using Dr Cath's AI Voice Clone from Eleven Labs. Thank you for listening. Please share with anyone you know who is interested in AI. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit drcatherineball.substack.com/subscribe
Hello and welcome to another special edition of News From The Future with Dr Cath as she beams in live from the CES technology show in Las Vegas.

The LEGO Group unveiled a revolutionary innovation at CES - the LEGO Smart Brick, representing the most significant advancement in LEGO technology since the Minifigure's introduction 50 years ago. This new platform seamlessly integrates digital technology into physical LEGO play without screens or power buttons, maintaining the core essence of hands-on creative building while adding responsive interactive elements.

The Smart Brick appears as a standard 2x4 LEGO brick but contains sophisticated sensors and technology packed into a silicon chip smaller than one of its studs. The system consists of three key components that work together: the Smart Brick itself, which can be reused across different models; Smart Tags containing code that defines how models respond to interactions; and Interactive Smart Minifigures programmed with distinct personalities and behaviors.

The technology demonstrates remarkable capabilities in bringing LEGO models to life. The Smart Brick generates responsive sounds based on movement and interaction, with synthesized audio that adapts to how children play. Advanced position sensing allows bricks to detect their relative locations in three-dimensional space, enabling precise distance measurements and directional awareness between multiple Smart Bricks. Color sensors let models recognize their environment and respond accordingly, while networked play capabilities allow multiple Smart Bricks to communicate and coordinate their responses automatically.

During the CES demonstration, these features were showcased through various interactive models. A car equipped with a Smart Brick produced engine sounds that responded to movement, complete with acceleration noises and tire screeching effects. The system could detect when Minifigures were placed in different positions - as drivers, passengers, or even (somewhat mischievously) under the wheels. A LEGO duck came alive with appropriate quacking and splashing sounds, while demonstrating sleep behaviors when at rest.

The technology enables entirely new dimensions of play. Vehicles can respond realistically to steering and acceleration, while knowing their position relative to other vehicles or obstacles. Characters gain awareness of their surroundings and can react appropriately to different situations. Models understand how they're being played with and can coordinate responses between multiple Smart Bricks. This allows for racing games where cars know who's in the lead, creatures that respond to care and interaction, and buildings that can detect and react to events happening around them.

A key aspect of the Smart Brick's design is its ability to work as a platform rather than just a single product. The same brick can be moved between different models, each time taking on new behaviors based on the Smart Tags included in the build. Multiple Smart Bricks can form decentralized networks, automatically coordinating to create rich interactive experiences across entire LEGO worlds that children create.

The first commercial implementation of this technology comes through LEGO's partnership with Star Wars, building on their 25-year collaboration that has already produced nearly 1,500 unique minifigures and countless beloved sets. The initial launch in March 2026 will feature three Smart Play sets: Luke Skywalker with X-Wing, Darth Vader with TIE Fighter, and the Emperor's Throne Room.
These sets demonstrate how the technology can enhance storytelling and imaginative play within the Star Wars universe.

During the presentation, Disney Chief Brand Officer Asad Ayaz and Lucasfilm Chief Creative Officer Dave Filoni emphasized how this technology represents a natural evolution in their long-standing partnership with LEGO. They drew parallels between George Lucas's pioneering use of special effects and sound design in the original Star Wars films and this new innovation in toy technology. The Smart Brick platform aims to similarly transform how children experience and interact with their LEGO creations.

The LEGO Group emphasized that this launch represents just the beginning of the Smart Brick platform's potential. The technology has been designed to be open-ended and expandable, integrating seamlessly with the existing LEGO system while adding new dimensions of interactive play. The company expects the platform to evolve based on how children use and innovate with it, potentially expanding into thousands of different models and play experiences.

The development of the Smart Brick was driven by observing how children play in today's digital world. While kids remain naturally creative and imaginative, they increasingly engage with digital experiences through screens and devices. The Smart Brick aims to bridge this gap by bringing technological interactivity into physical play, without losing the hands-on creative building that has defined LEGO for over 70 years.

This builds on LEGO's fundamental principle of unlimited creativity. The Smart Brick maintains this philosophy of open-ended play while adding new possibilities for interaction and responsiveness. The technology doesn't prescribe specific ways to play but rather provides tools that children can use to enhance their own creative storytelling and imaginative adventures.

The presentation included practical demonstrations of the technology's capabilities, including a simple racing game where Smart Bricks could determine which duck-on-skateboard was closest to a trophy. This showcased how the position-sensing technology can enable new forms of competitive play while maintaining the physical, hands-on nature of LEGO building. The demonstration also featured Star Wars characters like Chewbacca interacting with the Smart Brick, producing characteristic roars and responses that brought the character to life.

Throughout the CES presentation, LEGO emphasized how the Smart Brick represents not just a new product but a platform for future innovation. By creating a system that seamlessly integrates digital interactivity with physical play, while maintaining compatibility with existing LEGO bricks and sets, they've laid the groundwork for potentially thousands of new play experiences. The technology's ability to network multiple Smart Bricks together, sense their environment, and respond to children's play patterns suggests numerous possibilities for future development and expansion of the platform.

This podcast was produced with Dr Cath's AI Voice Clone by Eleven Labs. Thanks for listening. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit drcatherineball.substack.com/subscribe
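As a rough illustration of the relative-position logic behind the racing-game demo above, here is a small Python sketch. The coordinates, brick names, and function names are assumptions invented for this example; this is not LEGO's Smart Brick API, just a way to show how "closest to the trophy" could be decided once each brick can report its position in 3D space.

```python
# Minimal sketch, assuming each Smart Brick can report an (x, y, z) position.
# All names and coordinates below are hypothetical, for illustration only.
import math


def distance(a: tuple[float, float, float], b: tuple[float, float, float]) -> float:
    """Straight-line distance between two reported brick positions."""
    return math.dist(a, b)


def closest_to(target: tuple[float, float, float],
               racers: dict[str, tuple[float, float, float]]) -> str:
    """Return the racer whose reported position is nearest the target brick."""
    return min(racers, key=lambda name: distance(racers[name], target))


# Usage: two ducks-on-skateboards race toward a trophy brick at the origin.
trophy = (0.0, 0.0, 0.0)
ducks = {
    "yellow duck": (12.0, 3.0, 0.0),
    "blue duck": (7.5, 1.0, 0.0),
}
print(closest_to(trophy, ducks))  # -> "blue duck"
```

The point of the sketch is simply that once bricks share positions over a local network, a one-line distance comparison is enough to referee a race; everything else described in the demo (sound, character behaviour) layers on top of that same positional awareness.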
Hi, welcome to News From The Future - CES 2026 special edition series. Dr Cath will be bringing you the highlights from the 2026 show in Las Vegas, and you can subscribe on her Substack for more goodies.

The CES 2026 preview revealed transformative technology trends reshaping our world through three key mega-trends: intelligent transformation, longevity technology, and engineering tomorrow. The global technology and durables market is projected to reach $1.3 trillion by 2026, with the US consumer technology industry expected to generate $565 billion in revenue, showing 3.4% retail growth despite economic headwinds.

Intelligent transformation represents the next evolution beyond digital transformation, driven by artificial intelligence's expanding capabilities. Research shows AI adoption is accelerating rapidly, with 63% of US workers already using AI tools, saving an average of 8.7 hours weekly. The AI ecosystem is developing across multiple frontiers: agentic AI that takes initiative rather than just responding to prompts, vertical AI specialized for specific industries, and industrial AI embedded directly in infrastructure and manufacturing.

Physical AI, or robotics, is making significant strides. Groups like the K Humanoid Alliance are developing next-generation humanoids with improved motion and language capabilities. Consumer robots are becoming more sophisticated, with iRobot adding robotic arms to vacuum cleaners for complex cleaning tasks. Autonomous vehicles continue advancing through companies like Tensor Auto, Valeo, and Waymo, using AI simulation for enhanced navigation and safety.

The evolution of smart glasses and XR headsets from companies like Even Realities and XReal demonstrates how intelligent vision technology is becoming more practical and fashionable. These devices now offer features like simultaneous translation, hands-free interactions, and industrial applications including remote surgery support and warehouse optimization.

Everyday devices are transforming into intelligent platforms delivering personalized experiences. Samsung's Galaxy AI smartphones, Lenovo's AI PCs, and LG's AI-enabled TVs showcase how common devices are becoming more adaptive and responsive to individual needs. The smartphone is evolving into an intelligent personal assistant, while TVs are becoming interactive smart home hubs.

In the longevity technology space, innovations are focused on improving both lifespan and quality of life. The GLP-1 ecosystem is expanding beyond weight management to influence diabetes care, sleep health, and the food industry. Precision medicine is enabling personalized healthcare through genomics and AI-powered analytics, while remote care technologies extend medical monitoring beyond clinical settings.

Digital health innovations are revolutionizing personal wellness monitoring. The Withings Body Scan 2, for example, provides comprehensive health metrics including body composition, ECG readings, and vascular age assessments. Health technology is evolving across three dimensions: triage (AI-powered symptom assessment), management (remote monitoring devices), and empowerment (giving patients control over their health data).

Accessibility technology is receiving increased attention, with innovations like Human in Motion Robotics' exoskeletons and AI-enabled hearing assistance devices.
Mental health solutions are advancing through voice biomarker detection for early diagnosis of conditions like depression and anxiety, while wellness tech is optimizing sleep and nutrition through precision monitoring.

Smart home technology is becoming more sophisticated, with companies like Tuya and SoftatHome leading innovations in AI personalization, health monitoring, energy management, and security standardization. The Matter protocol is enabling better device interoperability, strengthening overall home system integration.

The future of video and audio entertainment is evolving significantly. Ad-supported streaming is becoming the primary growth driver, while short-form video content continues to shape audience engagement. AI is transforming video production through automated editing and virtual production tools. Audio streaming platforms are expanding beyond music to become complete audio ecosystems incorporating podcasts and audiobooks.

Gaming technology is advancing through AI-powered development tools, improved performance capture, and evolving live service models. These changes are creating more immersive experiences while redefining how games are monetized and maintained over time.

Engineering tomorrow encompasses breakthrough solutions for mobility, power, food production, and infrastructure. In automotive innovation, vehicles are becoming software-defined platforms with extensive personalization capabilities. Companies like BlackBerry QNX and Qualcomm are advancing vehicle systems through improved sensing, connectivity, and computing power. Autonomous driving is becoming reality, demonstrated by Waymo's self-driving vehicles operating in urban environments.

Heavy equipment and construction are being transformed through AI and automation, with companies like Caterpillar and Doosan developing more efficient and safer solutions. Agricultural technology is advancing through autonomous tractors and precision farming tools, while food technology innovations are improving supply chain efficiency through vertical farming and AI-powered analytics.

Energy systems are evolving to meet growing power demands while transitioning to sustainable solutions. This includes accelerated electrification across industries, grid modernization with AI-powered management, and experimentation with new power sources including hydrogen and next-generation nuclear technology. Companies like Elevate Energy are showcasing innovative power generation methods, while Hyundai and KHNP are demonstrating future city power solutions.

Digital infrastructure continues to advance through blockchain security, quantum computing developments, and AI integration. The new CES Foundry showcase highlights these cutting-edge technologies and their potential applications across industries.

CES 2026 demonstrates how these technological innovations are not just theoretical possibilities but practical realities being implemented across multiple sectors. The convergence of AI, robotics, health tech, and sustainable solutions is creating a more connected, efficient, and personalized technological ecosystem that will shape how we live, work, and interact in the coming years.

This podcast was created with Dr Cath's Voice Clone on Eleven Labs. Thanks for listening. Thanks for reading "News From The Future" with Dr Catherine Ball! This post is public so feel free to share it. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit drcatherineball.substack.com/subscribe
Speak Fearlessly in the Age of AI

There's a moment just before you speak when time slows. Your breath catches, your heart reminds you that it's there, and a tiny voice whispers, "What if I get it wrong?"

That moment is ancient. Humans have felt it for thousands of years — from fireside storytellers to TED speakers. But something new is happening. Our audiences are changing. Some are in the room, some are on screens, and some might be digital avatars or even AI agents listening, recording, and learning from us. The future of communication is hybrid, intelligent, and sometimes a little intimidating.

So I wrote a book for that world. And you can watch my AI Avatar read the introduction in the video above.

Introducing Vox Helix: Amplify Your Speaking with AI and Authenticity

After more than a decade on global stages, I wanted to build a bridge between what I've learned as a professional speaker and what technology is making possible. Vox Helix is a practical, human guide to finding your voice in an AI-shaped future.

Inside, you'll learn how to:
* Use AI tools to rehearse smarter, not harder.
* Turn nerves into data, and data into confidence.
* Train avatars and digital doubles that still feel human.
* Connect with audiences across cultures, languages, and realities.
* Protect your ideas, your identity, and your voice in a connected world.

Each chapter blends science, storytelling, and real exercises. The 30-day challenge inside the book walks you step-by-step through small, daily actions that help you master both your message and your mindset.

Why This Matters Now

Public speaking is still the number-one fear for most people. AI will not fix that - but it can help. It can hold up a mirror, provide real-time feedback, and help us rehearse without judgement. What it cannot replace is sincerity. That is where you come in.

The best communicators of the next decade will be those who know how to collaborate with machines while sounding unmistakably human. Vox Helix is written to help you become one of them. The eBook gives you instant access, perfect for reading on flights or before a keynote.

➡️ Get Vox Helix on Amazon (global eBook)

Set your voice free. Learn the tools. Do the work. And speak fearlessly.

A Personal Note

This book came together with a lot of late-night coffee, a few philosophical chats with AI, and a decade of conversations with audiences around the world. I hope it gives you courage: not only to use new technology, but to stay present and human while you do.

The microphone is waiting. The future is listening.

Dr Cath

"News From The Future" with Dr Catherine Ball is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit drcatherineball.substack.com/subscribe
A biodiverse episode of NFT (News From The Future) with the wonderful Natalie Kyriacou, OAM, and what the future holds with regard to greenwashing, citizen science, politics, and the biodiversity we still have left…

We now have a global focus and a political will around environmental protection, the benefits transfer values of ecosystem services, the protection of zones rather than just cuddly species, and the significant positivity conferred by biodiverse systems and wild places.

Systems approaches, Gaia theory, and new cybernetic lenses - these are the future of conservation and biodiversity. Oh, and whale poo.

Connect with Natalie on LinkedIn here. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit drcatherineball.substack.com/subscribe
What's the invisible reach of the apps we have all let into our lives? When technology is working, it is invisible… until celebrities complain. Would we even know if there was overreach or a breach of trust?

Professor Johanna Weaver is a leader in this space, and is charging forward on all matters of tech regulation and communication. The Tech Policy Design Centre at the Australian National University is a really great resource: https://techpolicydesign.au

Technology should be shaped for the benefit of society, and this means great regulation and open communication.

Follow Prof Weaver on LinkedIn here. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit drcatherineball.substack.com/subscribe
Would you have a robot do your fake eyelashes? Are you willing to trust a robot massage bed? Can AI be used to forecast new materials and textiles to make the fashion industry more sustainable and circular?

More data means better answers from AI. This could be the key to fixing the 'broken' fashion industry. Do you wear fast fashion?

Climate change-related risks to water security will have direct effects on how we make materials. AI can also be used as part of fashion design and can even shake up the traditional creative brand! Can we democratise fashion design?

This is a long chat with a great balance of geek and chic… Enjoy!

Connect with Michelle on LinkedIn here. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit drcatherineball.substack.com/subscribe
The term AI might be controversial, but what's more interesting to me are the opportunities from 'orphan' industries that seem to have been left behind by technology, but which are actually intrinsically involved in the democratisation of creativity.

Generative AI is a toddler in general terms, but has been around for quite some time, and those who are riding the wave first are setting the stage for the next Industrial Revolution.

Jessie has many skills as well as a degree in design and screenwriting, and is now leading the charge at Leonardo AI to engage with and run alongside the creative industries.

Connect with Jessie on LinkedIn here. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit drcatherineball.substack.com/subscribe
Surrounded by good intentions around STEM engagement and literacy, it seems we are not really shifting the dial. What can we do more of, and what can we do better, to make sure we have the skills and talent pipelines stacked and packed for future needs across Australia?

Learn more about Inspiring Australia as a national programme, as well as the array of events and opportunities people can engage with.

In your 50s and want to pivot into a new career? It's totally possible.

Learn more about Inspiring Australia here (Federal Government link). Connect with Alison on LinkedIn here. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit drcatherineball.substack.com/subscribe
We never talk about nanotechnology. Here's Richard Feynman's famous lecture on the idea of nano machines being able to manipulate atoms: Plenty of Room at the Bottom.

There has been a quiet technological revolution happening out of the spotlight, and Jiawen Li's work in Adelaide is world-leading research on manipulating the smallest of sensors to access our most internal places, i.e. the very chambers of our hearts.

Nanotechnology and heart disease: did you know they were directly related? Learn more on this week's podcast.

You can learn more about Jiawen here: https://researchers.adelaide.edu.au/profile/jiawen.li01 This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit drcatherineball.substack.com/subscribe
Dialling in from the Caribbean and inside the world of Web3, AI, brand, and relationships comes the awesome Katerina and her work with her start-up Immersifi. Learn more via Immersifi on Instagram and the Immersifi website.

What is Web3? What is blockchain? How can this make my shopping experience fruitful and happy? How do I get a body scan? What clothes size am I in which fashion brand? Could this all be a "fast fashion" killer?

Connect with Katerina on LinkedIn here. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit drcatherineball.substack.com/subscribe
As we continue to ride the shockwaves of a scaling climate crisis, are there still any good news stories in the data or research? The answer is: we still have hope.

New and emerging technologies are building a new paradigm of data and impact around measuring and mitigating emissions. What are the economic values of Mother Nature's services, and how can the invisible hands of economics help fight climate change? Carbon accounting, data, information, and benefits transfer, as well as new sequestration technologies, are all converging to form new business models.

For more information, see https://www.melaniezeppel.com and find her on LinkedIn here: https://www.linkedin.com/in/melanie-zeppel-75249976/ This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit drcatherineball.substack.com/subscribe
The Smithsonian Institution is a collection of 21 museums and a zoo… at the Natural History Museum alone they have 148 MILLION items in their collection, with 4.5 million visitors each year plus thousands of scientific collaborations.

Rebecca is originally from South Australia, and is now visioning what the science of the future for museums should be from a global stage (what a career path!!!!).

What is the future for collections and museums? How can we better use the work of amazing archivists and digitise to future-proof the collection? The biggest question of course is: does the museum come alive at night? Check out this podcast to find out!

https://naturalhistory.si.edu
https://www.linkedin.com/in/dr-rebecca-johnson/ This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit drcatherineball.substack.com/subscribe
"Trying to understand how our own brains work is like trying to bite our own teeth," according to the late, great Michael Crichton. But what can future technology teach us about ourselves that we can't figure out for ourselves?

Did you know all the things our brains are responsible for? We are learning more all the time, and it is unlocking the very foundation of who we are and how we exist.

Meet one of Australia's rising stars of science, Lila Landowski, and find out more about her and her work here: https://discover.utas.edu.au/Lila.Landowski This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit drcatherineball.substack.com/subscribe
Learn about exponential intelligence and transhumanism, and ask whether IQ will even matter in the future… (clue: likely not as much).

How do we build in skills for the future when the curriculum already seems too late? And this should not be a constant burden for teachers. Who should be responsible for keeping what is taught in schools up to speed with Silicon Valley?

Find out more about Jeanette at www.startwithhex.com This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit drcatherineball.substack.com/subscribe
Alex is a marine ecologist and a science communication guru with a passion for using ecosystem services to fight anthropogenic climate change. The teams Alex works with are diverse and deeply technical; this breadth and depth give her a multi-faceted point of view when it comes to communicating complicated systems-thinking research.

How do we get inventions from the lab to the factory? Industry connections are needed from inception. Australia's global competitive edge needs a boost - what can we do better, and where are we doing a great job on the global stage?

And algae - we really need to talk more about algae - we need to start wearing algae-based silk!

You can learn more and connect with Alex here: https://profiles.uts.edu.au/Alexandra.Thomson

Thank you for reading "News From The Future" with Dr Catherine Ball. This post is public so feel free to share it. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit drcatherineball.substack.com/subscribe
Claudine sees gaps and opportunities in the system with the advances in AI and the surge in disruption. But, though we appear to be rushing to change, are we all just swimming fast to actually stand still and return to business as usual? Claudine loves the acceleration and democratisation of AI-backed technologies.

What happens to industry when it is disrupted by AI that is powered by quantum computing? How can companies catch up with new tech? Do we, as a business ecosystem, even understand the basics of data science?

Data will be the foundation stone of all the AI and processes going forwards. As business leaders start to pay more attention to data within and beyond their own businesses, let's watch the growth and opportunity. Don't get left behind: start investing in these skills and capabilities now.

Find out more about Claudine and her really interesting work here: https://www.oando.com.au This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit drcatherineball.substack.com/subscribe