Artificial Intelligence and You
This and all episodes at: https://aiandyou.net/ .
The fear, uncertainty, and doubt around the future of work have reached epidemic proportions. We'll attempt to shed some light and provide some relief with data, questions, and informed speculation on the topic. How is the generation of content going to evolve, and what's awaiting us in robotics? Who will make the decisions about the future workplace, and what will happen to creatives in marketing and other fields?
All this plus our usual look at today's AI headlines!
Transcript and URLs referenced at HumanCusp Blog.
This and all episodes at: https://aiandyou.net/ .
I am talking with José Antonio Bowen and C. Edward Watson, authors of the new book Teaching with AI: A Practical Guide to a New Era of Human Learning, about AI in postsecondary education. José leads the Bowen Innovation Group, consulting on innovation in higher education, and was the 11th president of Goucher College. He has held leadership roles at Stanford, the University of Southampton, Georgetown, Miami University, and Southern Methodist University, and his book Teaching Naked reshaped conversations about technology and pedagogy. He is an international jazz pianist and edited the Cambridge Companion to Conducting.
Eddie Watson is Vice President for Digital Innovation at the American Association of Colleges and Universities and is the Founding Director of their Institute on AI, Pedagogy, and the Curriculum. He directed the Center for Teaching and Learning at the University of Georgia, and is a Fellow of the Louise McBee Institute of Higher Education.
In our conclusion, we talk about the future of textbooks, José and Eddie’s meta-analysis of AI literacy frameworks and standardizing AI literacy training, the evolution of teaching models and practices like lectures, and the future of degrees themselves.
All this plus our usual look at today's AI headlines!
Transcript and URLs referenced at HumanCusp Blog.
This and all episodes at: https://aiandyou.net/ .
After last week’s exploration of AI in secondary education it’s time to look at how it’s landing in the universities, and so I am talking with José Antonio Bowen and C. Edward Watson, authors of the brand new book Teaching with AI: A Practical Guide to a New Era of Human Learning. José leads the Bowen Innovation Group, consulting on innovation in higher education and was the 11th president of Goucher College. He has held leadership roles at Stanford, the University of Southampton, Georgetown, Miami University, and Southern Methodist University, and his influential book Teaching Naked reshaped conversations about technology and pedagogy. He edited the Cambridge Companion to Conducting, and is an international jazz pianist.
C. Edward Watson - Eddie on our show - is Vice President for Digital Innovation at the American Association of Colleges and Universities and is the Founding Director of their Institute on AI, Pedagogy, and the Curriculum. He directed the Center for Teaching and Learning at the University of Georgia, and is a Fellow of the Louise McBee Institute of Higher Education.
We talk about how students and teachers are reacting to AI, threats to jobs – particularly teaching jobs – and changes to how we work, what really matters in the practice of teaching in an AI world, cheating, changes to relationships between teachers and students and the importance of caring.
All this plus our usual look at today's AI headlines!
Transcript and URLs referenced at HumanCusp Blog.
This and all episodes at: https://aiandyou.net/ .
What’s going on with getting AI education into America’s classrooms? We're finding out from Jeff Riley, former Commissioner of Education for the state of Massachusetts and founder of a new organization – Day of AI, started by MIT’s Responsible AI for Social Empowerment and Education institute. They are mounting a campaign called Responsible AI for America’s Youth, which is now running across all 50 states and will host an event called America’s Youth AI Festival in July 2026 in Boston.
Jeff holds master’s degrees from Johns Hopkins and Harvard. He was a Boston school principal and, as commissioner, successfully navigated Massachusetts schools through Covid and other crises.
In our conclusion, we talk about how Jeff sees the AI education of teachers evolving, responsible use of AI by students, differentiated learning, making AI in classrooms safe for teachers and students, the impact of AI on educational inequalities, the future of educational reform, and how you can get involved in AI in schools.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
This and all episodes at: https://aiandyou.net/ .
What’s going on with getting AI education into America’s classrooms? We’re going to find out from Jeff Riley, former Commissioner of Education for the state of Massachusetts and founder of a new organization – Day of AI, started by MIT’s Responsible AI for Social Empowerment and Education institute. They are mounting a campaign called Responsible AI for America’s Youth, which is now running across all 50 states and will host an event called America’s Youth AI Festival in July 2026 in Boston.
Jeff holds master’s degrees from Johns Hopkins and Harvard. He was a Boston school principal and, as commissioner, successfully navigated Massachusetts schools through Covid and other crises.
We talk about what the campaign is doing and how teachers are responding to it, risks of AI and social media to kids, what to do about cheating and AI detectors, and much more!
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
This and all episodes at: https://aiandyou.net/ .
It's time to visit the - ghosts? - of AI past, present, and future in our traditional retrospective/predictions episode. Forming the panel are Dan Turchin, CEO of PeopleReign, the AI platform automating HR, and host of the “AI and the Future of Work” podcast; and Richard Foster-Fletcher, Founder and Executive Chair of MKAI, the inclusive Artificial Intelligence Community, leader of the MKAI Centre for Digital Trust, and host of the Boundless podcast. Both have been on the show before, as guest experts and year-end panelists, and both are good friends.
We'll talk about surprises from 2025, ways companies have used AI productively and ways it has been oversold, whether an AI bubble may pop next year, what'll happen next with AI slop, how much AI may advance human progress next year, and the emergence of AI nation states.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
This and all episodes at: https://aiandyou.net/ .
What is consciousness? That’s a profound question that many would say is unanswerable. And how could you make artificial consciousness? That’s a more profound question that many would say is impossible. We are talking with Suzanne Gildert, founder of Nirvanic, a Quantum-AI research company, who is not just talking about consciousness but doing something about it. She is a prolific inventor with more than 60 US patents in quantum computing, humanoid robotics, and artificial intelligence. Suzanne has a PhD in experimental quantum physics and founded two robotics companies, Kindred AI and Sanctuary AI. At Nirvanic she seeks to understand consciousness and create conscious AI using quantum computing.
In the conclusion of our interview, we talk about how robots will use quantum computing, world models and what robots really can and can’t do right now, form factors for robots, and the connection between robot consciousness and finding our purpose in the world.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
This and all episodes at: https://aiandyou.net/ .
What is consciousness? That’s a profound question that many would say is unanswerable. And how could you make artificial consciousness? That’s a more profound question that many would say is impossible. Tackling both of those head-on is Suzanne Gildert, founder of Nirvanic, a Quantum-AI research company. She’s not just talking about consciousness; she’s doing something about it. She is a prolific inventor with more than 60 US patents in quantum computing, humanoid robotics, and artificial intelligence. Suzanne has a PhD in experimental quantum physics - I mean, how cool is that - and founded two robotics companies, Kindred AI and Sanctuary AI. At Nirvanic she seeks to understand consciousness and create conscious AI using quantum computing.
We talk about quantum computing and consciousness, the nature of reality and its connection to quantum physics, the Observer Effect and Schrödinger’s Box, panpsychism, the state of the art of quantum computing, quantum supremacy, the present and future of general purpose robotics, and the connection between reward functions and the consciousness of the universe.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
This and all episodes at: https://aiandyou.net/ .
What if artificial superintelligence - ASI - could be made both safer and more profitable? I'm talking with Craig Kaplan, who runs the website superintelligence.com, about his concept of "democratic AI." Craig is CEO and founder of iQ Company, focused on AGI and ASI. He also founded and ran PredictWallStreet, a financial services firm that used AI to power a top hedge fund.
Craig is a former visiting professor in computer science at the University of California, and earned master’s and doctoral degrees from famed robotics hub Carnegie Mellon University, where he co-authored research with the Nobel-Prize-winning economist and AI pioneer Dr. Herbert A. Simon.
In part 2, we talk about rights of AIs, safe superintelligence, where AI gets its values, and how model vendors might be incentivized to put their products into the collective AI intelligence.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
This and all episodes at: https://aiandyou.net/ .
What if artificial superintelligence - ASI - could be made both safer and more profitable? Returning to the show after a year is Craig Kaplan, talking about how "democratic AI" can do that. Craig, who runs the website superintelligence.com, is CEO and founder of iQ Company, focused on AGI and ASI. He also founded and ran PredictWallStreet, a financial services firm that used AI to power a top hedge fund.
Craig is a former visiting professor in computer science at the University of California, and earned master’s and doctoral degrees from famed robotics hub Carnegie Mellon University, where he co-authored research with the Nobel-Prize-winning economist and AI pioneer Dr. Herbert A. Simon.
We talk about democratic AI - a kind of hive mind of AIs that combine to work together safely - how the AIs talk to each other, what they are made up of, and systems for solving ethical problems.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
This and all episodes at: https://aiandyou.net/ .
How should AI change democracy? That’s the topic of Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship, and I am continuing my talk with its authors. Bruce Schneier is an internationally renowned security technologist and the bestselling author of fourteen books, including Data and Goliath and A Hacker’s Mind. Nathan Sanders is a data scientist who has held fellowships with the Massachusetts legislature and the Berkman Klein Center at Harvard. He writes in The New York Times and The Atlantic.
We talk about whether wealthy entities might subvert the use of AI in democracy, how smaller countries are engaging with AI in government, the utility of open weight and open source models, digital twins in government, the future of surveillance, and what makes Bruce and Nathan optimistic about the future.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
This and all episodes at: https://aiandyou.net/ .
How should AI change democracy? That’s the topic of Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship, and I am talking today with its authors. Bruce Schneier is an internationally renowned security technologist and the bestselling author of fourteen books, including Data and Goliath and A Hacker’s Mind. He is a lecturer at the Harvard Kennedy School, a board member of the Electronic Frontier Foundation, and Chief of Security Architecture at Inrupt. Nathan Sanders is a data scientist researching machine learning, astrophysics, public health, environmental justice, and more. He has held fellowships with the Massachusetts legislature and the Berkman Klein Center at Harvard. He writes on AI and democracy in The New York Times and The Atlantic.
We talk about this fascinating and scary intersection of AI and government, of AI being used in making legislation, the concept of democracy as an information system, ways AI can transform how citizens engage their governments, regulatory responses to AI from the US and around the world, and how the judicial branch can use AI.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
This and all episodes at: https://aiandyou.net/ .
What's really going on in classrooms with AI right now? I'm talking with Gerry White, a teacher, technologist, writer, and lifelong learner who has spent two decades at the forefront of education and technology integration. Gerry is the Dean of Academic Technology at ECPI University in Virginia and the founder of MyTutorPlus, an AI-powered tutoring platform designed to personalize education for learners of all ages. From building over 70 apps to creating immersive AR and VR experiences, his work bridges the gap between the humanities and technology. His Substack articles and books unpack the ethical, emotional, and societal consequences of AI.
We talk about how cultural bias in GenAI affects the classroom, what school leadership should be doing, AI in group work, assessment, how AI might accelerate learning and augment the human experience, interactions with parents, and kids’ social uses of AI, all in the context of real experiences in school.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
This and all episodes at: https://aiandyou.net/ .
We're continuing to focus on AI in education because it's so pivotal to the future of the human race. What's really going on in classrooms with AI right now? We are learning that from Gerry White, a teacher, technologist, writer, and lifelong learner who has spent two decades at the forefront of education and technology integration. Gerry is the Dean of Academic Technology at ECPI University in Virginia and the founder of MyTutorPlus, an AI-powered tutoring platform designed to personalize education for learners of all ages. From building over 70 apps to creating immersive AR and VR experiences, his work bridges the gap between the humanities and technology. His Substack articles and books unpack the ethical, emotional, and societal consequences of AI.
We’re going to talk about how generative AI first showed up in Gerry’s classrooms, the importance of preserving students’ voices, confronting the cheating and plagiarism problems, optimal ways of engaging students’ use of AI, and finding the unique value of humans in the workplace, all in the context of real experiences in real classes.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
This and all episodes at: https://aiandyou.net/ .
Students using AI to cheat on homework - or being inaccurately flagged as cheating - falls under the heading of 'academic integrity,' so I am talking with Alyson King, Professor in Political Science at Ontario Tech University in Canada, and editor of the new book, “Artificial Intelligence, Pedagogy and Academic Integrity,” containing 12 contributors’ thoughts and research on the problem of maintaining academic integrity in a world where AI can complete virtually any school assignment at a passing grade or higher.
Alyson earned her PhD in the History of Education at the University of Toronto and currently she engages in research intended to better understand student experiences and academic integrity. In her teaching, she includes topics related to Indigenous experiences and worldviews, such as Residential Schools, and has designed a course about the politics of Indigenous Rights.
We’re going to talk about teachers getting to know their students’ voices, AI detectors, and the place of AI in education.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
This and all episodes at: https://aiandyou.net/ .
Students using AI to cheat on homework - or being inaccurately flagged as cheating - falls under the heading of 'academic integrity,' so I am talking with Alyson King, Professor in Political Science at Ontario Tech University in Canada, and editor of the new book, “Artificial Intelligence, Pedagogy and Academic Integrity,” containing 12 contributors’ thoughts and research on the problem of maintaining academic integrity in a world where AI can complete virtually any school assignment at a passing grade or higher.
Alyson earned her PhD in the History of Education at the University of Toronto and currently she engages in research intended to better understand student experiences and academic integrity. In her teaching, she includes topics related to Indigenous experiences and worldviews, such as Residential Schools, and has designed a course about the politics of Indigenous Rights.
We’re going to talk about plagiarism, AI-proofing assignments, motivating students, threats to critical thinking, and much more.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
This and all episodes at: https://aiandyou.net/ .
We are again focusing on AI in education, because that is really where the rubber meets the road for nearly every issue in AI, and where we need to get it right: it's where we're training the generation that will save the world. You could be very pessimistic about that, or very optimistic, and one person who is optimistic is Becky Keene, an educator, author, and speaker focused on innovative teaching and learning, and author of the new book, AI Optimism, about all the good possibilities of AI in education. She specializes in instructional coaching, game-based learning, and integrating AI into education to empower students as creators.
We talk about the conflict between fear and hope about AI in education, changing our focus from product to process, how to reshape education to leverage AI, what role school leadership should play, and much more.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
This and all episodes at: https://aiandyou.net/ .
As we use AI more and more as a critical assistant, what might that be doing to our critical thinking? Professor Michael Gerlich has published his research in the paper “AI Tools In Society: Impacts On Cognitive Offloading And The Future Of Critical Thinking” in the journal Societies. He showed that younger participants “exhibited higher dependence on AI tools and lower critical thinking scores compared to older participants.” That’s the sort of result that demands we pay attention at a time when AI is being increasingly used by schools and students.
Michael is the Head of the Center for Strategic Corporate Foresight and Sustainability at SBS Swiss Business School. His research and publications largely focus on the societal impact of Artificial Intelligence, which has made him in demand as a speaker around the world. He has taught at the London School of Economics and Political Science, Cambridge, and other institutions, and has been an adviser to the President and the Prime Minister of Kyrgyzstan, the Uzbekistan Cabinet, and Ministers of economic affairs in Azerbaijan.
In part 2, we talk about whether or how we can tell that our cognition has been impaired, how the future of work will change with cognitive offloading, and what employers need to beware of and leverage.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
This and all episodes at: https://aiandyou.net/ .
As we use AI more and more as a critical assistant, what might that be doing to our critical thinking? Professor Michael Gerlich has published his research in the paper “AI Tools In Society: Impacts On Cognitive Offloading And The Future Of Critical Thinking” in the journal Societies. He showed that younger participants “exhibited higher dependence on AI tools and lower critical thinking scores compared to older participants.” That’s the sort of result that demands we pay attention at a time when AI is being increasingly used by schools and students.
Michael is the Head of the Center for Strategic Corporate Foresight and Sustainability at SBS Swiss Business School. His research and publications largely focus on the societal impact of Artificial Intelligence, which has made him in demand as a speaker around the world. He has taught at the London School of Economics and Political Science, Cambridge, and other institutions, and has been an adviser to the President and the Prime Minister of Kyrgyzstan, the Uzbekistan Cabinet, and Ministers of economic affairs in Azerbaijan.
We talk about “cognitive offloading” and the use of GenAI. Why is it different from using calculators, which were widely forecast to cause math skills to atrophy and were banned from schools before we learned better? Michael looks at how AI - like the big agents that might come with workplace IT systems - helps or hinders knowledge work, and at the consequences for on-the-job training.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
This and all episodes at: https://aiandyou.net/ .
"The book seems to be more timely than originally anticipated." I'm talking with Carl Benedikt Frey about his new book, How Progress Ends: Technology, Innovation, and the Fate of Nations, and its exploration of the political and economic effects of policies like tariffs and university defunding comes at a very critical time. AI is projected to have enormous economic and social impacts that call for the biggest of big picture thinking, and Frey is the co-author of the 2013 study The Future of Employment: How Susceptible Are Jobs to Computerization, which has received over 12,000 citations.
He is Associate Professor of AI and Work at the Oxford Internet Institute and Director and Founder of the Future of Work Programme at the Oxford Martin School, both at the University of Oxford. His 2019 book, The Technology Trap: Capital, Labor, and Power in the Age of Automation, was selected as a Financial Times Best Book of the Year and awarded Princeton University’s Richard A. Lester Prize.
In the conclusion, we talk about the links between innovation and industry productivity, why AI hasn’t yet delivered broad gains, automation’s uneven effects on workers, the role of antitrust in sustaining competition, and the need for institutions like Oxford to adapt.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.



