Pondering AI


Author: Kimberly Nevala, Strategic Advisor - SAS


Description

How is the use of artificial intelligence (AI) shaping our human experience?

Kimberly Nevala ponders the reality of AI with a diverse group of innovators, advocates and data scientists. Ethics and uncertainty. Automation and art. Work, politics and culture. In real life and online. Contemplate AI’s impact, for better and worse.

All presentations represent the opinions of the presenter and do not represent the position or the opinion of SAS.
49 Episodes
Sarah Gibbons and Kate Moran riff on the experience of using current AI tools, how AI systems may change our behavior and the application of AI to human-centered design. Sarah and Kate share their non-linear paths to becoming leading user experience (UX) designers. Defining the human-centric mindset, Sarah stresses that intent is design and we are all designers. Kate and Sarah then challenge teams to resist short-term problem hunting for AI alone. This leads to an energized and frank debate about the tensions created by the broad availability of AI tools with “shitty” user interfaces, why conversational interfaces aren’t the be-all and end-all, and whether calls for more discernment and critical thinking are reasonable or even new. Kate and Sarah then discuss their research into our nascent AI mental models and emergent impacts on user behavior. Kate discusses how AI can be used for UX design along with some far-fetched claims. Finally, both Kate and Sarah share exciting areas of ongoing research. Sarah Gibbons and Kate Moran are Vice Presidents at Nielsen Norman Group, where they lead strategy, research, and design in the areas of human-centered design and user experience (UX). A transcript of this episode is here.
Simon Johnson takes on techno-optimism, the link between technology and human well-being, the law of intended consequences, the modern union remit and political will. In this sobering tour through time, Simon proves that widespread human flourishing is not intrinsic to tech innovation. He challenges the ‘productivity bandwagon’ (an economic maxim so pervasive it did not have a name) and shows that productivity and market polarization often go hand-in-hand. Simon also views big tech’s persuasive powers through the lens of OpenAI’s board debacle. Kimberly and Simon discuss the heyday of shared worker value, the commercial logic of automation and augmenting human work with technology. Simon highlights stakeholder capitalism’s current view of labor as a cost rather than people as a resource. He underscores the need for active attention to task creation, strong labor movements and participatory political action (shouting and all). Simon believes that shared prosperity is possible. Make no mistake, however: achieving it requires wisdom and hard work. Simon Johnson is the Head of the Economics and Management group at MIT’s Sloan School of Management. Simon co-authored the stellar book “Power and Progress: Our 1,000-Year Struggle Over Technology and Prosperity” with Daron Acemoglu. A transcript of this episode is here.
Professor Rose Luckin provides an engaging tutorial on the opportunities, risks, and challenges of AI in education and why AI raises the bar for human learning. Acknowledging AI’s real and present risks, Rose is optimistic about the power of AI to transform education and meet the needs of diverse student populations. From adaptive learning platforms to assistive tools, Rose highlights opportunities for AI to make us smarter, supercharge learner-educator engagement and level the educational playing field. Along the way, she confronts overconfidence in AI, the temptation to offload challenging cognitive workloads and the risk of constraining a learner’s choices prematurely. Rose also adroitly addresses conflicting visions of human quantification as the holy grail and the seeds of our demise. She asserts that AI ups the ante on education: how else can we deploy AI wisely? Rising to the challenge requires the hard work of tailoring strategies for specific learning communities and broad education about AI itself. Rose Luckin is a Professor of Learner Centered Design at the UCL Knowledge Lab and Founder of EDUCATE Ventures Research Ltd., a London hub for educational technology start-ups, researchers and educators involved in evidence-based educational technology and leveraging data and AI for educational benefit. Explore Rose’s 2018 book Machine Learning and Human Intelligence (free after creating an account) and the EDUCATE Ventures newsletter The Skinny. A transcript of this episode is here.
Katrina Ingram addresses AI power dynamics, regulatory floors and ethical ceilings, inevitability narratives, self-limiting predictions, and public AI education. Katrina traces her career from communications to her current pursuits in applied AI ethics. Showcasing her way with words, Katrina dissects popular AI narratives. While contemplating AI FOMO, she cautions against an engineering mentality and champions the power to say ‘no.’ Katrina contrasts buying groceries with buying AI solutions and describes regulations as the floor and ethics as the ceiling for responsible AI. Katrina then considers the sublimation of AI ethics into AI safety and risk management, whether Sci-Fi has led us astray and who decides what. We also discuss the law of diminishing returns, the inevitability narrative around AI, and how predictions based on the past can narrow future possibilities. Katrina commiserates with consumers but cautions against throwing privacy to the wind. Finally, she highlights the gap in funding for public education and literacy. Katrina Ingram is the Founder & CEO of Ethically Aligned AI, a Canadian consultancy enabling organizations to practically apply ethics in their AI pursuits. A transcript of this episode is here.
Paulo Carvão discusses AI’s impact on the public interest, emerging regulatory schemes, progress over perfection, and education as the lynchpin for ethical tech. In this thoughtful discussion, Paulo outlines the cultural, ideological and business factors underpinning the current data economy: an economy in which the conversion of personal data into private corporate assets is foundational. Opting for optimism over cynicism, Paulo advocates for a first-principles approach to the ethical development of AI and emerging tech. He argues that regulation creates a positive tension that enables innovation. Paulo examines the emerging regulatory regimes of the EU, the US and China. Preferring progress over perfection, he describes why regulating technology for technology’s sake is fraught. Acknowledging the challenge facing existing school systems, Paulo articulates the foundational elements required of a ‘bilingual’ education to enable future generations to “do the right things.” Paulo Carvão is a Senior Fellow at the Harvard Advanced Leadership Initiative, a global tech executive and investor. Follow his writings and subscribe to his newsletter on the Tech and Democracy substack. A transcript of this episode is here.
Dr. Christina Jayne Colclough reflects on AI Regulations at Work. In this capsule series, prior guests share their insights on current happenings in AI and intuitions about what to expect next.
Giselle Mota reflects on Inclusion at Work in the age of AI. In this capsule series, prior guests share their insights on current happenings in AI and intuitions about what to expect next.
Ganes Kesari reflects on generative AI (GAI) in the Enterprise. In this capsule series, prior guests share their insights on current happenings in AI and intuitions about what to expect next.
Chris McClean reflects on Digital Ethics and Regulation in AI today. In this capsule series, prior guests share their insights on current happenings in AI and intuitions about what to expect next.
Dr. Erica Thompson reflects on Making Model Decisions about and with AI. In this capsule series, prior guests share their insights on current happenings in AI and intuitions about what to expect next. To learn more, check out Erica’s book Escape from Model Land: How Mathematical Models Can Lead Us Astray and What We Can Do About It.
Roger Spitz reflects on Upskilling Human Decision Making in the age of AI. In this capsule series, prior guests share their insights on current happenings in AI and intuitions about what to expect next. To learn more, check out Roger’s book series The Definitive Guide to Thriving on Disruption.
Sheryl Cababa reflects on Systems Thinking in AI design. In this capsule series, prior guests share their insights on current happenings in AI and intuitions about what to expect next. To learn more, check out Sheryl’s book Closing the Loop: Systems Thinking for Designers.
Ilke Demir reflects on Generative AI (GAI) Detection and Protection. In this capsule series, prior guests share their insights on current happenings in AI and intuitions about what to expect next.
Professor J Mark Bishop reflects on large language models (LLM) and beyond. In this capsule series, prior guests share their insights on current happenings in AI and intuitions about what to expect next.
Henrik Skaug Sætra reflects on Environmental and Social Sustainability with AI. In this capsule series, prior guests share their insights on current happenings in AI and intuitions about what to expect next. To learn more, check out Henrik’s latest book: Technology and Sustainable Development: The Promise and Pitfalls of Techno-Solutionism.
Yonah Welker reflects on Policymaking, Inclusion and Accessibility in AI today. In this capsule series, prior guests share their insights on current happenings in AI and intuitions about what to expect next.
Marisa Tschopp reflects on Human-AI interactions in AI. In this capsule series, prior guests share their insights on current happenings in AI and intuitions about what to expect next.
Patrick Hall drops in to provide a current take on risk, reward and regulation in AI today. In this bonus episode, Patrick reflects on the evolving state of play in AI regulations, consumer awareness and education.
Ganes Kesari confronts AI hype and calls for balance, reskilling, data literacy, decision intelligence and data storytelling to adopt AI productively. Ganes reveals the reality of AI and analytics adoption in the enterprise today. Highlighting extreme divides in understanding and expectations, Ganes provides a grounded point of view on delivering sustained business value. Cautioning against a technocentric approach, Ganes discusses the role of data literacy and data translators in enabling AI adoption. Turning to common barriers to change, Kimberly and Ganes discuss growing resistance from technologists, not just end users. Ganes muses about the impact of AI on creative tasks and his own experiences with generative AI. Ganes also underscores the need to address workforce reskilling yet remains optimistic about the future of human endeavor. While discussing the need for improved decision-making, Ganes identifies decision intelligence as a critical new business competency. Finally, Ganes strongly advocates for taking a business-first approach and using data storytelling as part of the responsible AI and analytics toolkit. Ganes Kesari is the co-founder and Chief Decision Scientist of Gramener and Innovation Titan. A transcript of this episode is here.
Dr. Christina Colclough addresses tech determinism, the value of human labor, managerial fuzz, collective will, digital rights, and participatory AI deployment. Christina traces the path of digital transformation and the self-sustaining narrative of tech determinism. She also examines how the perceptions of the public, the C-Suite and workers (aka wage earners) diverge, highlighting the urgent need for robust public dialogue, education and collective action. Championing constructive debate, Christina decries ‘for-it-or-against-it’ views on AI and embraces the Luddite label. Kimberly and Christina discuss the value of human work, we vs. they work cultures, the divisiveness of digital platforms, and sustainable governance. Christina questions why emerging AI regulations give workers short shrift and whether regulation is being privatized. She underscores the dangers of stupid algorithms and the quantification of humans, but notes that knowledge is key to tapping into AI’s benefits while avoiding harm. Christina ends with a persuasive call for responsible regulation, radical transparency and widespread communication to combat collective ignorance. Dr. Christina Jayne Colclough is the founder of The Why Not Lab, where she fiercely advocates for worker rights and dignity for all in the digital age. A transcript of this episode is here.