Pondering AI

Author: Kimberly Nevala, Strategic Advisor - SAS


Description

How is the use of artificial intelligence (AI) shaping our human experience?

Kimberly Nevala ponders the reality of AI with a diverse group of innovators, advocates and data scientists. Ethics and uncertainty. Automation and art. Work, politics and culture. In real life and online. Contemplate AI’s impact, for better and worse.

All presentations represent the opinions of the presenter and do not represent the position or the opinion of SAS.
28 Episodes
Ilke Demir depicts the state of generative AI, deepfakes for good, the emotional shelf life of synthesized media, and methods to identify AI-generated content.

Ilke provides a primer on traditional generative models and generative AI. Outlining the fast-evolving capabilities of generative AI, she also notes their current lack of controls and transparency. Ilke then clarifies the term deepfake and highlights applications of ‘deepfakes for good.’

Ilke and Kimberly discuss whether the explosion of generated imagery creates an un-reality that sets ‘perfectly imperfect’ humans up for failure. An effervescent optimist, Ilke makes a compelling case that the true value of photos and art comes from our experiences and memories. She then provides a fascinating tour of emerging techniques to detect and indelibly identify generated media. Last but not least, Ilke affirms the need for greater public literacy and accountability by design.

Ilke Demir is a Sr. Research Scientist at Intel. Her research team focuses on generative models for digitizing the real world, deepfake detection and generation techniques.

A transcript of this episode is here.
Professor J Mark Bishop reflects on the trickiness of language, how LLMs work, why ChatGPT can’t understand, the nature of AI and emerging theories of mind.

Mark explains what large language models (LLMs) do and provides a quasi-technical overview of how they work. He also exposes the complications inherent in comprehending language. Mark calls for more philosophical analysis of how systems such as GPT-3 and ChatGPT replicate human knowledge. Yet understand nothing. Noting the astonishing outputs resulting from more or less auto-completing large blocks of text, Mark cautions against being taken in by an LLM’s disarming façade.

Mark then explains the basis of the Chinese Room thought experiment and the hotly debated conclusion that computation does not lead to semantic understanding. Kimberly and Mark discuss the nature of learning through the eyes of a child and whether computational systems can ever be conscious. Mark describes the phenomenal experience of understanding (aka what it feels like). And how non-computational theories of mind may influence AI development. Finally, Mark reflects on whether AI will be good for the few or the many.

Professor J Mark Bishop is the Professor of Cognitive Computing (Emeritus) at Goldsmiths College, University of London and Scientific Advisor to FACT360.

A transcript of this episode is here.
Chris McClean reflects on ethics vs. risk, ethically positive outcomes, the nature of trust, looking beyond ourselves, and privacy at work and in the metaverse.

Chris outlines the key differences between digital ethics and risk management. He emphasizes the discovery of positive outcomes as well as harms and where a data-driven approach can fall short. From there, Chris outlines a comprehensive digital ethics framework and why starting with impact is key. He then describes a pragmatic approach for making ethics accessible without sacrificing rigor.

Kimberly and Chris discuss the definition of trust, the myriad reasons we might trust someone or something, and why trust isn’t set-it-and-forget-it. From your smart doorbell to self-driving cars and social services, Chris argues persuasively for looking beyond ‘how does this affect me.’ Highlighting Eunice Kyereme’s work on digital makers and takers, Chris describes the role we each play – however unwittingly – in creating the digital ecosystem. We then discuss surveillance vs. monitoring in the workplace and the potential for great good and abuse inherent in the metaverse. Finally, Chris stresses that ethically positive outcomes go beyond ‘tech for good’ and that ethics is good business.

Chris McClean is the Global Head of Digital Ethics at Avanade and a PhD candidate in Applied Ethics at the University of Leeds.

A transcript of this episode is here.
Henrik Skaug Sætra contends humans aren’t mere machines, assesses AI through a sustainable development lens, and weighs the effects of political imbalances and ESG.

Henrik embraces human complexity. He advises against applying AI to naturally messy problems or to influence populations least able to resist. Henrik outlines how the UN Sustainable Development Goals (SDGs) can identify beneficial and marketable avenues for AI. He also describes the SDGs’ usefulness in ethical impact assessment. Championing affordable and equitable access to technology, Henrik shows how disparate impacts occur between individuals, groups and society.

Along the way, Kimberly and Henrik discuss political imbalances, the technocratic nature of emerging regulations and why we shouldn’t expect corporations to be broadly ethical of their own accord. Outlining his AI ESG protocol, Henrik surmises that qualitative rigor can address the gaps left by quantitative analysis alone. Finally, Henrik encourages the proactive use of SDGs and ESG to drive innovation and opportunity.

Henrik is Head of the Digital Society and an Associate Professor at Østfold University College. He is a political theorist focusing on the political, ethical, and social implications of technology.

A transcript of this episode can be found here.
Dr. Mark Coeckelbergh is a Professor of Philosophy of Media and Technology, a member of the High-Level Expert Group on Artificial Intelligence (EC) and the Austrian Council on Robotics and AI.

In this insightful discussion, Mark explains why AI systems are not merely tools or strictly rational endeavors. He describes the challenges created when AI systems imitate human capabilities and how human sciences help address the messy realities of AI. Mark also demonstrates how political philosophy makes conversations about multidimensional topics such as bias, fairness and freedom more productive.

Kimberly and Mark discuss the difficulty with global governance, the role of scientific expertise and technology in society, and the need for political imagination to govern emerging technologies such as AI. Along the way, Mark illustrates the debate about how AI systems could vs. should be used through the lens of gun control and climate change. Finally, Mark sounds a cautionary note about the potential for AI to undermine our fragile democratic institutions.

A transcript of this episode can be found here.
Patrick Hall is the Principal Scientist at bnh.ai.

Patrick artfully illustrates how data science has become divorced from scientific rigor. At least, that is, in popular conceptions of the practice. Kimberly and Patrick discuss the pernicious influence of the McNamara Fallacy, applying the scientific method to algorithmic development and keeping an open mind without sacrificing concept validity.

Patrick addresses the recent hubbub around AI sentience, cautions against using AI in social contexts and identifies the problems AI algorithms are best suited to solve. Noting AI is no different than any other mission-critical software, he outlines the investment and oversight required for AI programs to deliver value. Patrick promotes managing AI systems like products and makes the case for why performance in the lab should not be the first priority.

A transcript of this episode can be found here.
Fernando Lucini is the Global Data Science & ML Engineering Lead (aka Chief Data Scientist) at Accenture.

Fernando outlines common uses for AI-generated synthetic data. He emphasizes that synthetic data is a facsimile – close, but not quite real – and debunks the notion that it is inherently private. Kimberly and Fernando discuss the potential pitfalls in synthetic data sets, the emergent need for standard controls, and why ensuring quality – much less fairness – is not simple. Fernando assesses the current state of the synthetic data market and the work still to be done to enable broad-scale adoption. Tipping his hat to fabulous achievements such as GPT-3 and DALL-E, Fernando identifies multiple ways synthetic data can be used for good works and creative endeavors.

A transcript of this episode can be found here.
Roger Spitz is the CEO of Techistential and Chairman of the Disruptive Futures Institute.

In this thought-provoking discussion, Roger discusses why neither humans nor AI systems are great at decision-making in complex environments. But why humans should be. Roger unveils the insidious influence of AI systems on human decisions and why uncertainty is a prerequisite for human choice, freedom, and agency.

Kimberly and Roger discuss the implications of complexity, the rising cost of poor assumptions, and the dangerous allure of delegating too many decisions to AI-enabled machines. Outlining the AAA (antifragile, anticipatory, agile) model for decision-making in the face of deep uncertainty, Roger differentiates foresight from strategic planning and anticipatory agility from ‘move fast and break things.’ Last but not least, Roger argues that current educational incentives run counter to nurturing the mindset and skills needed to thrive in our increasingly complex, emergent world.

A transcript of this episode can be found here.
Dr. Dorothea Baur is an ethicist and independent consultant on the topics of ethics, responsibility and sustainability in tech and finance.

Dorothea debunks common ethical misconceptions and explores the novel issues that arise when applying ethics to technology. Kimberly and Dorothea discuss the risks posed by risk management-based approaches to tech ethics. As well as the “unholy collision” between the pursuit of scale and universal generalization. Dorothea reluctantly gives a nod to Milton Friedman when linking ethics to material business outcomes.

Along the way, Dorothea illustrates how stakeholder engagement is evolving and the power of the employee. Noting that algorithms do not have agency and will never be ethical, Dorothea persuasively articulates our moral responsibility to retain responsibility for our AI creations.

A transcript of this episode can be found here.
Marisa Tschopp is a Human-AI interaction researcher at scip AG and Co-Chair of the IEEE Agency and Trust in AI Systems Committee.

Marisa answers the question ‘what is trust?’ and compares trust between humans to trust in a machine. Differentiating trust from trustworthiness, Marisa emphasizes the importance of considering the context and motivation behind AI systems. Kimberly and Marisa discuss the pros and cons of endowing AI systems with human characteristics (aka anthropomorphizing) and why ‘do you trust AI?’ is the wrong question.

Debunking the concept of ‘The AI’, Marisa outlines practices for calibrating trust in AI systems. A self-described skeptical optimist, Marisa also shares her research into how people perceive their relationships with AI-enabled machines and how these patterns may change over time.

A transcript of this episode can be found here.
Dr. Erica Thompson is a Senior Policy Fellow in Ethics of Modelling and Simulation at the LSE Data Science Institute.

Using the trusty-ish weather forecast as a starting point, Erica highlights the gaps to be minded when applying models in real life. Kimberly and Erica discuss the role of expert judgement and intuition, the orthodoxy of data-driven cultures, models as engines not cameras, and why exposing uncertainty improves decision-making.

Erica illustrates why it is so easy to become overconfident in models. She shows how value judgements are embedded in every step of model development (and hidden in math), why chameleons and accountability don’t mix, and considerations for using model outputs to think or decide effectively. Looking forward, Erica foresees a future in which values rather than data drive decision-making.

A transcript of this episode can be found here.
Sheryl Cababa is the Chief Design Officer at Substantial where she conducts research, develops design strategies and advocates for human-centric outcomes.

From the infinite scroll to Twitter edits, Sheryl illustrates how current design practices unwittingly undermine human agency. Often while delivering exactly what a user wants. She refutes the need to categorically eliminate the term ‘users’ while showing how a singular user focus has led us astray. Sheryl then outlines how systems thinking can reorient existing design practices toward human-centric outcomes.

Along the way, Kimberly and Sheryl discuss the limits of empathy, the evolving ethos of unintended consequences and embracing nuance. While acknowledging the challenges ahead, Sheryl remains optimistic about our ability to design for human well-being, not just expediency or profit.

A transcript of this episode can be found here.

Our next episode explores the limits of model land with Dr. Erica Thompson. Subscribe now so you don’t miss it.
Kate O’Neill is an executive strategist, the Founder and CEO of KO Insights, and an author dedicated to improving the human experience at scale.

In this paradigm-shifting discussion, Kate traces her roots from a childhood thinking heady thoughts about language and meaning to her current mission as ‘The Tech Humanist’. Following this thread, Kate illustrates why meaning is the core of what makes us human. She urges us to champion meaningful innovation and reject the notion that we are victims of a predetermined future.

Challenging simplistic analysis, Kate advocates for applying multiple lenses to every situation: the individual and the collective, uses and abuses, insight and foresight, wild success and abject failure. Kimberly and Kate acknowledge but emphatically disavow current norms that reject nuanced discourse or conflate it with ‘both-sides-ism’. Emphasizing that everything is connected, Kate shows how to close the gap between human-centricity and business goals. She provides a concrete example of how innovation and impact depend on identifying what is going to matter, not just what matters now. Ending on a strategically optimistic note, Kate urges us to anchor on human values and relationships, habituate to change and actively architect our best human experience – now and in the future.

A transcript of this episode can be found here.

Thank you for joining us for Season 2 of Pondering AI. Join us next season as we ponder the ways in which AI continues to elevate and challenge our humanity. Subscribe to Pondering AI now so you don’t miss it.
Giselle Mota is a Principal Consultant for the Future of Work at ADP where she advises organizations on human agency, diversity and learning in the age of AI.

In this energetic discussion, Giselle shares how navigating dyslexia spawned a passion for technology and enabling learning at work. Giselle stresses that human agency and automation are only mutually exclusive when AI is employed with the wrong end in mind. Prioritizing human experience over ‘doing more with less’, Giselle explores the impact – good and bad – of AI systems on humans at work today.

While ruminating on the future happening now, Giselle puts the onus on organizations to ensure no employee is left behind. From the warehouse floor to HR, the importance of diverse perspectives, rigorous due diligence and critical thinking when deploying AI systems is underscored. Along the way, Kimberly and Giselle dissect what AI algorithms can and cannot reasonably predict. Giselle then defines the leadership mindsets and talent needed to bring AI to work appropriately. With infectious optimism, she imposes a reality check on our innate desire to “just do cool things”. Finally, in a rousing call to action, Giselle makes a robust argument for accountability and for making ethics endemic to every human endeavor, including AI.

A transcript of this episode can be found here.

Our final episode of Season 2 features Kate O’Neill. A tech humanist and author of ‘A Future So Bright’, Kate will discuss how we can architect the future of AI with strategic optimism. Subscribe to Pondering AI now so you don’t miss it.
Baroness Beeban Kidron is an award-winning filmmaker, a Crossbench Peer in the UK House of Lords and the Founder and Chair of the 5Rights Foundation.

In this eye-opening discussion, Beeban vividly describes how the seed for 5Rights was planted while getting up close and personal with teenagers navigating the physical and digital realms ‘In Real Life’. Beeban sounds a resounding alarm about why treating all humans as equal on the internet is regressive. As well as how existing business models have created a perfect societal storm, especially for children.

Intertwining the voices of these underserved and underrepresented stakeholders with some shocking facts, Beeban illustrates the true impact of the current digital experiment on young people. In that vein, Kimberly and Beeban examine behaviors we implicitly condone and, in fact, promote in the digital realm that would never pass muster in so-called real life. Speaking to the brilliantly terrifying Twisted Toys campaign, Beeban shows how storytelling can make these critical yet oft-sensitive topics accessible. Finally, Beeban speaks about critical breakthroughs such as the Age-Appropriate Design Code, positive action being taken by digital platforms in response and the long road still ahead.

A transcript of this episode can be found here.

Our next episode features Giselle Mota. Giselle is a Principal Consultant for the Future of Work at ADP where she advises organizations on human agency, diversity and learning in the age of AI. Subscribe to Pondering AI now so you don’t miss it.
Vincent de Montalivet is the Global AI Sustainability Leader at Capgemini where he develops strategies to use AI to combat climate change and drive corporate net-zero initiatives.

In this forthright discussion, Vincent charts his path from supply chain engineering to his current position at the crossroads of data, IT and sustainability. Vincent stresses this is the ‘decade of action’ and highlights cutting-edge AI applications enabling the turn from simulation to accountability in real time. Addressing fears about AI, Vincent shows how it enables rather than replaces human expertise.

In that vein, Kimberly and Vincent have a frank discussion about whether AI for environmental good balances AI’s own appetite for energy. Vincent examines different aspects of the argument and shares recent research, facts and figures to shed light on the debate. He describes why AI is not a silver bullet, why AI is not always required and emerging research into making AI itself green. Vincent then provides a 3-step roadmap for corporate sustainability initiatives. Discussing emerging innovations, Vincent pragmatically points out that we are only addressing 3% of the green use cases that can be addressed with AI today. He rightfully suggests focusing there.

A transcript of this episode can be found here.

Our next episode features Baroness Beeban Kidron. She is the Founder and Chair of the 5Rights Foundation which is leading the fight to protect children’s rights and well-being in the digital realm. Subscribe to Pondering AI now so you don’t miss it.
David Ryan Polgar is the Founder of All Tech is Human. He is a leading tech ethicist, an advocate for human-centric technology, and an advisor on improving social media and crafting a better digital future.

In this timely discussion, David traces his not-so-unlikely path from practicing law to being a standard bearer for the responsible technology movement. He artfully illustrates the many ways technology is altering the human experience and makes the case for “no application without representation”.

Arguing that many of AI’s misguided foibles stem from a lack of imagination, David shows how all paths to responsible AI start with diversity. Kimberly and David debunk the myth of the ethical superhero but agree there may be a need for ethical unicorns. David expounds on the need for expansive education, why non-traditional career paths will become traditional and the benefits of thinking differently. Acknowledging the complex, nuanced problems ahead, David advocates for space to air constructive, critical, and, yes, contrarian points of view. While disavowing 80s sitcoms, David celebrates youth intuition, bemoans the blame game, prioritizes progress over problem statements, and leans into our inevitable mistakes. Finally, David invokes a future in which responsible tech is so in vogue it becomes altogether unremarkable.

A transcript of this episode can be found here.

Our next episode features Vincent de Montalivet, leader of Capgemini’s global AI Sustainability program. Vincent will help us explore the yin and yang of AI’s relationship with the environment. Subscribe now to Pondering AI so you don’t miss it.
Dr. Valérie Morignat PhD is the CEO of Intelligent Story and a leading advisor on the creative economy. She is a true polymath working at the intersection of art, culture, and technology.

In this perceptive discussion, Valérie illustrates how cultural legacies inform technology and innovation today. Tracing a path from storytelling in caves to modern Sci-Fi, she proves that everything new takes (a lot of) time. Far from theoretical, Valérie shows how this philosophical understanding helps business innovators navigate the current AI landscape.

Discussing the evolution of VR/AR, Valérie highlights the existential quandary created by our increasingly fragmented digital identities. Kimberly and Valérie discuss the pillars of responsible innovation and the amplification challenges AI creates. Valérie shares the power of AI to teach us about ourselves and increase human learning, creativity, and autonomy. Assuming, of course, we don’t encode ancient, spurious classification schemes or aggravate negative behaviors. She also describes our quest for authenticity and flipping the script to search for the real in the virtual.

Finally, Valérie sketches a roadmap for success including executive education and incremental adoption to create trust and change our embedded mental models.

A transcript of this episode can be found here.

Our next episode features David Ryan Polgar, founder of All Tech is Human. David is a leading tech ethicist and responsible technology advocate who is well-known for his work on improving social media. Subscribe now so you don’t miss it.
Yonah Welker is a technology innovator, influencer, and advocate for diversity and zero exclusion in AI. They are at the forefront of policies and applications for adaptive, assistive, and social AI.

In this illuminating discussion, Yonah traces their personal journey from isolation to advocacy through technology. They are passionate about the future of AI-enabled education, healthcare, and civics. Yet caution that our current approach to inclusion is not, in fact, inclusive. While evaluating mechanisms for accountability, Yonah shares lessons learned from the European Commission’s diverse approach to technology evaluation.

Yonah has an expansive view of how AI can “change everything” for those who experience life differently – whether they are autistic, neurodiverse, disabled or dyslexic. Kimberly and Yonah discuss how AI is expanding the borders of the classroom and workplace today. And how these solutions can inadvertently reinforce existing barriers if not mindfully applied. This leads naturally to the need for broad community collaboration and human involvement beyond traditional corporate boundaries. Yonah highlights our responsibilities as digital citizens and the critical debate over digital ownership. Finally, Yonah emphasizes that we are all, at our core, activists who can influence the trajectory of AI.

A transcript of this episode can be found here.

Our next episode features Dr. Valérie Morignat PhD. Valérie is the CEO of Intelligent Story and a leading advisor on the creative economy who works at the intersection of art and AI. Subscribe now so you don’t miss it.
Dr. Eric Perakslis, PhD is the Chief Science and Digital Officer at the Duke Clinical Research Institute.

In this incisive discussion, Eric exposes the curious nature of healthcare data. He proposes treating data like a digital specimen: one that requires clear consent and protection against misuse. Expanding our view beyond the doctor’s office, Eric shows why adverse effects from data misuse can be much harder to cure than a rash. As well as our innate human tendency to focus on technology’s potential while overlooking patient vulnerabilities. While discussing current data protections, Eric lays the foundation for a shift from privacy toward non-discrimination.

Along the way, Kimberly and Eric discuss the many ways anonymous data can compromise patient privacy and the research it underpins. In doing so, a critical loophole in existing institutional review board (IRB) and regulatory safeguards is exposed. An avid data advocate, Eric adroitly argues that proper patient and data protection will accelerate innovation and life-saving research. Finally, Eric makes a case for doing the hard things first and why the greatest research opportunities are rooted in equity.

A transcript of this episode can be found here.

Our next episode features Yonah Welker. They are a ‘tech explorer’ and leading voice regarding the need for diversity and zero exclusion in AI as well as the role of social AI. Subscribe now so you don’t miss it.