Mindful AI
10 Episodes
Our guest, Anuraj Gambhir, is an internationally recognized Strategic Business/Startup Advisor, Technology Visionary, Exponential Thought Leader and multi-award-winning Innovator. He has over 30 years’ global experience across 5 continents and is a trans-disciplinary expert. His practical knowledge spans executive management, innovation, entrepreneurship, conscious leadership, exponential technologies, design thinking and holistic wellbeing.

In this episode, Anuraj envisions AI being used to elevate humanity by enhancing wellbeing, intelligence and longevity, while harmonizing technology and spirituality. The right mindset and intentions are crucial before applying any technological tool. At the same time, we need to make time for digital detoxes, mindfulness and connection with nature to tap into our true selves. The key is finding equilibrium between using AI consciously as a tool and retaining our humanity. The technology itself is neutral - it depends on how we choose to apply it, whether for personal development or societal betterment.
Mindful AI’s guest, Ruth Marshall, works on real-world solutions for data privacy, Privacy Enhancing Technologies, and frameworks and methodologies for the responsible use of data. Ruth has spent the past 25 years moving between collaborative research and corporate communities, with a background in software and Artificial Intelligence. In the earlier half of her career Ruth was responsible for product development and R&D at five companies, including Accenture, Harvey Norman, and Novartis Pharmaceuticals. She is now co-founder of a data literacy and ethics education initiative at Hocone, where she works with organisations to develop frameworks and education programs for the responsible use of data. She was also engaged by the NSW Government to outline an approach and framework for responsible data use across the organisation.

In Episode 9, we chat about:
- Ruth’s main concerns around privacy and legitimacy (05:18): "I don't think people are even making assumptions right now about …whether the AI application that they're building, they have any legitimacy to do that. Are they the right person to do it?"
- Why co-creation and input/feedback from affected groups are important for establishing legitimacy and trust, and why constant feedback loops are needed to flag issues.
- An example of a legitimacy concern: a water charity using AI to understand water access in African communities without considering whether they are the right people to be telling those communities how to organize their lives (07:45).
- How Indigenous data sovereignty groups have long considered legitimacy an important concept regarding data and AI, stemming from the illegitimate reorganization of their lives by European settlers.
- The need for more data literacy and ethics around data collection, preparation and provenance, and issues around representation, privacy and legitimacy of use.
- The proliferation of AI tools and models with little quality control (20:38), and the lack of professionalisation and standards in AI/software engineering - no curriculum requirements or assurance of baseline knowledge. Ruth suggests we need to move towards treating it as a profession with standards (21:46).
- The need to balance quality control and frameworks against creating monopolies or barriers to entry, favouring education over stringent restrictions.
- Measuring outcomes: refer back to original goals, but also monitor for unintended consequences using lived experience, borrowing from practices like post-market monitoring of drugs.
- Models become outdated as the world changes - we need ongoing external validation of algorithms, data and real-world interactions. Issues arise from changing context, not just the AI itself.
- The overall importance of trust, transparency, co-creation with affected groups, adapting models to a changing world, and ongoing review of intended and unintended outcomes.

In regard to AI competence vs performance, Ruth would like to credit Rodney Brooks for the ideas she referenced - see Brooks’ article: https://spectrum.ieee.org/gpt-4-calm-down
Chris Yeh (https://chrisyeh.com/) is the co-founder of the Blitzscaling Academy, which teaches individuals and organizations how to plan for and execute on hypergrowth. He’s also co-founder of Blitzscaling Ventures, which invests in the world's fastest-growing startups. Chris has founded, advised, or invested in over 100 high-tech startups since 1995, including companies like Ustream and UserTesting.com. He is the co-author, with Reid Hoffman, of Blitzscaling: The Lightning-Fast Path to Building Massively Valuable Companies, and the co-author, with Reid Hoffman and Ben Casnocha, of the New York Times bestseller The Alliance: Managing Talent in the Networked Age. Chris earned two degrees from Stanford University and an MBA from Harvard Business School, where he was named a Baker Scholar. Chris has practical experience working with AI, including co-authoring the book Impromptu with GPT-4 and Reid Hoffman in early 2023. As an investor, he is interested in AI companies automating tedious and routine tasks, not just the "sexiest" ideas.

In this episode:
- Self-awareness is critical for founders who want to build positive-impact companies; understanding your own impact allows you to build sustainable products (04:43).
- At the highest level, we need to track whether AI actually creates greater value than the status quo, because at the end of the day, all of the civilization around us is the result of surplus added value (11:10).
- Mindfulness is lacking from current AI. AI should be more aware of human emotions and more mindful, like Inflection AI's "Pi" model (15:38).
- If AI were built with compassion and mindfulness, it could provide a tremendous benefit to humanity and help people going through difficult situations (16:35).
- Overall, AI needs to be designed thoughtfully with human values like mindfulness and compassion in mind, not just pure productivity.

Listen to other episodes: https://zentermeditation.com/mindful-ai
Nathan Kinch is an active angel investor with 76 impact-focused investments and an advisor to major corporations and government departments around the world on topics like generative AI ethics, consumer data sharing, food system transformation and trust in modern information technologies. He built his first startup, a predictive analytics company that sought to help reduce certain types of injuries in elite sports. Since then he’s been an Entrepreneur in Residence and an applied researcher with collaborators like CSIRO, Northwestern University and The Consumer Policy Research Centre.
Jada Andersen is the co-founder and Chief of Product at Xylo Systems, a biodiversity intelligence platform helping businesses measure, manage and report on their impact on nature. She is an ecologist, mathematician and passionate environmentalist. She’s skilled in data science, product development and ecological dynamics, and is interested in building technology that supports the regeneration of nature through artificial intelligence, data and analytics.

Key points:
- Machine learning is being used to process large amounts of unstructured data, like images and video from camera traps, in conservation projects. This allows for much faster processing than manual review and enables predictive analytics to quantify risks like bushfires to endangered species.
- Generative AI has potential for translating complex ecological concepts and communicating insights from biodiversity data more accessibly.
- Assessing high-quality biodiversity data remains a key challenge in developing useful AI models.
- The training data used in AI models needs to be carefully considered, as any biases present will be magnified. Diverse, high-quality training data is important.
- There should be guidelines and processes in place to check AI systems for biases before deployment, to avoid magnifying existing societal biases. Businesses have a responsibility here.
- Consider what kind of world we want to live in, and how AI could help shape it positively while avoiding potential harms from bias. Examine the ethics.
- Look at where AI can augment human capabilities rather than replace them, enhancing critical thinking and creativity. Some tasks still benefit from human involvement.
- Be mindful of which processes we optimize with AI versus where we keep a human role. Not everything needs to be automated.
- The ability to trace sources and legitimize information from AI is currently lacking. Regulation could help here by requiring transparency.
- Social pressure can help encourage responsibility in developing ethical, unbiased AI aligned with human values. Collaboration is important.
- The key is being proactive about minimizing harm from the start, through ethical AI development centered on human needs and values. Ongoing mindfulness and responsibility are needed.
- Review whether AI outputs and projections align with intended goals and uses. This "AI alignment" work helps identify misalignment and unintended harms.

More episodes at https://zentermeditation.com/mindful-ai
Our guest is Carl Hayden Smith, Associate Professor in Media. Carl is the Founder and Director of the Museum of Consciousness at Oxford University and co-founder of the Cyberdelics Society. His work and research focus primarily on the relationship between technology and the human condition, and on how to counter the Transhumanist agenda with Hyperhumanism. Carl is interested in the philosophy and social impact of AI technologies. In this episode he talks about how we should view AI as "an extra pair of hands" and focus on human-AI collaboration, and how we can use AI to enhance rather than replace human creativity. He emphasises a synergistic human-AI approach that harnesses the strengths of both.

Key points:
- Carl advocates for "hyperhumanism" - using technology as a scaffold to enhance innate human abilities, without becoming wholly dependent on technology (05:29).
- Think carefully about what human data and behaviors we feed into AI systems - our existing biases and limitations may be perpetuated. We need more diverse cultural perspectives (09:44).
- Be cautious about outsourcing too much ideation and imagination to machines - this can atrophy our own humanity. Maintain human channels for imagination and creativity (11:18).
- Think more about context engineering than just generating more content - how can AI help us perceive reality in new ways? (14:42)
- Use AI for real-time feedback to improve creative work, but humans should provide the final judgment. Remember that humans provide the artistic vision and intuition, not AI (21:18).
- Ensure we don't atrophy human imagination - maintain time for daydreaming and a slow creative process (22:28).
- Be wary of using AI just to rapidly increase productivity at the cost of creative quality (22:43). Maintain time for slow, creative thinking rather than simply maximising productivity.

https://www.zentermeditation.com/mindful-ai for more episodes
Lorenn Ruster is from the ANU School of Cybernetics and a responsible technology collaborator with the Centre for Public Impact.

In this episode Lorenn emphasises recognising that there are many possible futures, not just dystopian or utopian ones. Different people experience the present and the future differently, and focusing on one 'terrifying' future risks creating helplessness, when we can in fact shape the future. As entrepreneurs aim to scale technology, it's important they consider what future they're working toward and its implications. How we think and talk about the future, in society and in organizations, shapes what happens.

Some of Lorenn's concerns include the 'mindlessness' of some tech development, the impact on marginalised groups neglected in design, and threats to human dignity from AI systems determining futures in ways affected groups don't agree with.

Lorenn's advice for responsible, dignity-centered AI:
1. Start with understanding what you're actually building - this simple first step is essential for responsible development.
2. Create reflection spaces - slow down, consider impacts and stakeholders, and address biases. Make these conversations valued, incentivised and team-building.
3. Bring teams together in these reflection spaces - to understand themselves, each other, and collectively what they're building. This is key for responsibility and mindfulness of consequences.

Lorenn sees changing system dynamics as key to shifting responsible AI use and development. Leverage points include incentives, information flows and mindsets. Progress toward responsible AI can be indicated through dignity-lens questions, case studies, and more qualitative metrics around proximity to users and the use of both cognitive and intuitive intelligence.

Find more episodes of Mindful AI - https://www.zentermeditation.com/mindful-ai
Dr Michael Kollo is a domain expert in financial services and the use of AI systems across a range of industries. He is the CEO of Evolved Reasoning, a company that helps organisations de-risk AI adoption. Dr Kollo received his PhD in Finance from the London School of Economics and is a frequent speaker on the impact of AI.

Key points:
- Technology replacing human jobs has happened before; we just have to accept it. The "Terminator fear" of superintelligent beings wiping us out is unlikely.
- In developing AI, we must decide whether we want to give it human-like traits or build specific functional agents. Functional AI should focus on complex tasks humans struggle with, not on imitating humans. As long as AI does not directly harm, imitate or commit crimes, and is used to improve lives, it should be okay.
- Companies should be transparent in using AI, identify AI components, and consider the customer experience. Guidelines will likely emerge naturally.
- There are trade-offs between censorship and free speech with AI. Different groups may develop AI according to their values.
- Users should be held accountable for responsible use. Education and enabling informed use are more effective than top-down regulation, which can likely be worked around.
- AI systems are relatively new and optimize for engagement, not truth. The most engaging system may dominate.
- The conversation around AI bias and ethics has evolved. It started as avoiding unintended consequences in data and algorithms, then became about using AI to solve social issues and represent marginalized groups. With large language models, the issues are more nuanced - e.g. should chatbots show emotion or morality?
- AI will likely be diverse, distributed and multi-purpose rather than a single system like "Skynet".
- The future is hard to determine, but we can build the world we want to see.

https://www.zentermeditation.com/mindful-ai for more episodes
This episode is a recording from Spark Festival's AI&U panel on the topic of AI for Humans: Responsible, Safe and Ethical Artificial Intelligence. AI&U was a two-week exploration of all things AI held in May 2023 in locations across Sydney's Tech Central, supported by the City of Sydney.

The panel was facilitated by Matthew Newman of TechInnocens, which specialises in the adoption and implementation of emerging technologies. The panellists were:
- Bec Johnson, AI Ethics Doctoral Researcher at the University of Sydney
- Lorenn Ruster of the ANU School of Cybernetics and the Centre for Public Impact
- Paul Conyngham of Core Intelligence, and director of the Data Science and AI Association of Australia
- Didar Zowghi of CSIRO's Data61: Diversity & Inclusion in Artificial Intelligence
In this episode, we chat with Ed Cooke, Grand Master of Memory, entrepreneur, author, and co-founder of Memrise, Sparkleverse and Sonic Sphere, about his insights on consciousness and how it relates to the development of AI, and how we can create AI systems that are more mindful, empathetic, and human-centered. We also touch on his experiences as an entrepreneur, his journey in tech and startups, and how AI is influencing the way he creates.













