AI & The Future of Humanity: Artificial Intelligence, Technology, VR, Algorithm, Automation, ChatGPT, Robotics, Augmented Reality, Big Data, IoT, Social Media, CGI, Generative-AI, Innovation, Nanotechnology, Science, Quantum Computing: The Creative Process Interviews


Author: The Creative Process Original Series: Artificial Intelligence, Technology, Innovation, Engineering, Robotics & Internet of Things


Description

What are the dangers, risks, and opportunities of AI? What role can we play in designing the future we want to live in? With the rise of automation, what is the future of work? We talk to experts about the roles government, organizations, and individuals can play to make sure powerful technologies truly make the world a better place–for everyone.


Conversations with futurists, philosophers, AI experts, scientists, humanists, activists, technologists, policymakers, engineers, science fiction authors, lawyers, designers, artists, among others.


The interviews are hosted by founder and creative educator Mia Funk with the participation of students, universities, and collaborators from around the world.


67 Episodes
How is being an artist different from a machine that is programmed to perform a set of actions? How can we stop thinking about artworks as objects, and start thinking about them as triggers for experiences? In this conversation with Max Cooper, we discuss the beauty and chaos of nature and the exploration of technology, music, and consciousness.

Max Cooper is a musician with a PhD in computational biology. He integrates electronic music with immersive video projections inspired by scientific exploration. His latest project, Seme, commissioned by the Salzburg Easter Festival, merges Italian musical heritage with contemporary techniques and was also performed at the Barbican in London. He supplied music for a video narrated by Greta Thunberg and Pope Francis for COP26. In 2016, Cooper founded Mesh, a platform to explore the intersection of music, science, and art. His Observatory art-house installation is on display at Kings Cross until May 1st.

“As technology becomes more dominant, the arts become ever more important for us to stay in touch with the things that the sciences can't tackle. What is it actually like to be a person? What's actually important? We can have this endless progress inside this capitalist machine for greater wealth and longer life and more happiness, according to some metric. Or we can try and quantify society and push it forward. Ultimately, we all have to decide what's important to us as humans, and we need the arts to help with that. So, I think what's important really is just exposing ourselves to as many different ideas as we can, being open-minded, and trying to learn about all facets of life so that we can understand each other as well. And the arts is an essential part of that.”

The music featured on this episode was Palestrina Sicut, Cardano Circles, Fibonacci Sequence, and Scarlatti K141. Music is from Seme and is courtesy of Max Cooper.

https://maxcooper.net
https://osterfestspiele.at/en/programme/2024/electro-2024
https://meshmeshmesh.net
www.kingscross.co.uk/event/the-observatory
www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast
“The wealth of learning that can come from our collective awareness — that essentially AI is a fancy-sounding way of saying computers can learn from the collective wisdom that exists throughout the Internet. And if we can empower the local stewards of biodiversity, local landowners, farmers, and Indigenous populations with all of that wealth of information in a smart way, it can be incredibly empowering to many rural communities. AI might also open up an opportunity for us to rethink what life is about.”

Although they comprise less than 5% of the world's population, Indigenous peoples protect 80% of the Earth's biodiversity. How can we support farmers, reverse biodiversity loss, and restore our ecosystems?

Thomas Crowther is an ecologist studying the connections between biodiversity and climate change. He is a professor in the Department of Environmental Systems Science at ETH Zurich, chair of the advisory council for the United Nations Decade on Ecosystem Restoration, and founder of Restor, an online platform for the global restoration movement, which was a finalist for the Royal Foundation's Earthshot Prize. In 2021, the World Economic Forum named him a Young Global Leader for his work on the protection and restoration of biodiversity. Crowther's post-doctoral research transformed the understanding of the world's tree cover, and the study also inspired the World Economic Forum to announce its Trillion Trees initiative, which aims to conserve and restore one trillion trees globally within the decade.

https://crowtherlab.com/about-tom-crowther
https://restor.eco/?lat=26&lng=14.23&zoom=3
www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast
"We are at a crossroads, a paradigm shift with the emergence of artificial intelligence, which is going to transform our planet and mankind in ways not even anticipated by the people who created AI technology. There are some signs that AI appears to be sentient, and soon it will surpass human brain and mind capacity. So, if you want, we are the creators of a new species. AI is based on silicon and not carbon like we humans are. This is a very interesting aspect. It is a new life form. And you can look at the definition of what it means to have consciousness or something similar. We are very fragile as a species. Could it be that the silicon-based life form is actually something more advanced than the biological carbon-based life form? Could it be that we are at the point where we are creating a life form that may be - by blending biological with this cybernetic entity that we're creating now - creating a post-human? Almost a new form of life that blends biological with machines and silicon technologies and gives us two things? One, infinite intelligence that will be exponentially much more powerful in terms of our capacity to communicate, interact, and access information. And two, something that will give us immortality? It's just like I take my car to the garage and change parts when they break down, and I can drive this car for unlimited time, as long as I keep changing the parts and servicing it. The same could be a life form that is not entirely based on carbon, but is some kind of blended machine, biological, post-human type of entity. I see this as a natural evolution because it will make us stronger. If we can preserve all the qualities that we experience and enjoy in our life today, but we remake them by merging ourselves with this new thing that we're creating, it could make us a more advanced form of life, if you want.

So we are the creators, but this is a process of evolution as well. We are evolving to something much more advanced through our own creation. So, there is a circle that feeds into its creation and evolution. They feed into itself, and they are part of the same supply chain circle, if you want. So it's interesting, and I believe that both are true and both are working hand in hand. To produce what we see around us, the entire universe and life forms and everything, there is some kind of interesting way of creation followed by evolution, and they feed into each other. We are there at that point now in our human history. We're creating a new life form. This AI will change the world as we know it in ways that are not even anticipated, but we can't stop it because it's a natural evolution of humans to something more powerful than biological life."

Dr. Melvin M. Vopson is Associate Professor of Physics at the University of Portsmouth, Fellow of the Higher Education Academy, Chartered Physicist, and Fellow of the Institute of Physics. He is the co-founder and CEO of the Information Physics Institute and editor-in-chief of the IPI Letters and the Emerging Minds Journal for Student Research. He is the author of Reality Reloaded: The Scientific Case for a Simulated Universe. Dr. Vopson has wide-ranging, internationally recognized scientific expertise in experimental, applied, and theoretical physics. He has published over 100 research articles, achieving over 2500 citations.

https://www.port.ac.uk/about-us/structure-and-governance/our-people/our-staff/melvin-vopson
https://ipipublishing.org/index.php/ipil/RR
www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast
Are we living in a Simulated Universe? How will AI impact the future of work, society, and education?

Dr. Melvin M. Vopson is Associate Professor of Physics at the University of Portsmouth, Fellow of the Higher Education Academy, Chartered Physicist, and Fellow of the Institute of Physics. He is the co-founder and CEO of the Information Physics Institute and editor-in-chief of the IPI Letters and the Emerging Minds Journal for Student Research. He is the author of Reality Reloaded: The Scientific Case for a Simulated Universe. Dr. Vopson has wide-ranging, internationally recognized scientific expertise in experimental, applied, and theoretical physics. He has published over 100 research articles, achieving over 2500 citations.

"We are at a crossroads, a paradigm shift with the emergence of artificial intelligence, which is going to transform our planet and mankind in ways not even anticipated by the people who created AI technology. There are some signs that AI appears to be sentient, and soon it will surpass human brain and mind capacity. So, if you want, we are the creators of a new species. AI is based on silicon and not carbon like we humans are. This is a very interesting aspect. It is a new life form. And you can look at the definition of what it means to have consciousness or something similar. We are very fragile as a species. Could it be that the silicon-based life form is actually something more advanced than the biological carbon-based life form? Could it be that we are at the point where we are creating a life form that may be - by blending biological with this cybernetic entity that we're creating now - creating a post-human? Almost a new form of life that blends biological with machines and silicon technologies and gives us two things? One, infinite intelligence that will be exponentially much more powerful in terms of our capacity to communicate, interact, and access information. And two, something that will give us immortality? It's just like I take my car to the garage and change parts when they break down, and I can drive this car for unlimited time, as long as I keep changing the parts and servicing it. The same could be a life form that is not entirely based on carbon, but is some kind of blended machine, biological, post-human type of entity. I see this as a natural evolution because it will make us stronger. If we can preserve all the qualities that we experience and enjoy in our life today, but we remake them by merging ourselves with this new thing that we're creating, it could make us a more advanced form of life, if you want.

So we are the creators, but this is a process of evolution as well. We are evolving to something much more advanced through our own creation. So, there is a circle that feeds into its creation and evolution. They feed into itself, and they are part of the same supply chain circle, if you want. So it's interesting, and I believe that both are true and both are working hand in hand. To produce what we see around us, the entire universe and life forms and everything, there is some kind of interesting way of creation followed by evolution, and they feed into each other. We are there at that point now in our human history. We're creating a new life form. This AI will change the world as we know it in ways that are not even anticipated, but we can't stop it because it's a natural evolution of humans to something more powerful than biological life."

https://www.port.ac.uk/about-us/structure-and-governance/our-people/our-staff/melvin-vopson
https://ipipublishing.org/index.php/ipil/RR
www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast
"Meta reaches between three and four billion people every day through their platforms, right? That's way more people than any government legitimately can claim to govern. And yet this one company with four major platforms that many of us use is able to reach so many people and make decisions about content and access that have real consequences. It's been shown they fueled genocide in multiple places, like Ethiopia and Myanmar. And I think that's exactly why human rights matter, because human rights are obligations that states have signed on for, and they're supposed to protect human values. And I think from a human rights perspective, it's important to argue that we shouldn't be collecting certain types of data because it's excessive. It's violating autonomy. It starts violating dignity. And when you start violating autonomy and dignity through the collection of data, you can't just go back and fix that by making it private."

Does privacy exist anymore? Or are humans just sets of data to be traded and sold?

Wendy H. Wong is Professor of Political Science and Principal's Research Chair at the University of British Columbia, Okanagan. She is the author of two award-winning books: Internal Affairs: How the Structure of NGOs Transforms Human Rights and (with Sarah S. Stroup) The Authority Trap: Strategic Choices of International NGOs. Her latest book is We, the Data: Human Rights in the Digital Age.

www.wendyhwong.com
https://mitpress.mit.edu/author/wendy-h-wong-38397
www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast
How we think, feel, and experience the world is a mystery. What distinguishes our consciousness from AI and machine learning?

Liad Mudrik studies high-level cognition and its neural substrates, focusing on conscious experience. She teaches at the School of Psychological Sciences at Tel Aviv University. At her research lab, her team is currently investigating the functionality of consciousness, trying to unravel the depth and limits of unconscious processing, and also researching the ways semantic relations between concepts and objects are formed and detected.

"Even when I send a query to ChatGPT, I always say, 'Hi, can I please ask you something?' And when it replies, I say, 'Thank you.' As if I am kind of treating it as a person who cares about whether I say hi or thank you, although I don't think that it does. I had the privilege to be a part of this group, an interdisciplinary group of philosophers, neuroscientists, and computer scientists. It was led by Patrick Butlin and Robert Long, and we met and discussed and corresponded over the possibility of consciousness in AI. We drew on the field of consciousness studies, relying on theories of consciousness and asking, in humans, what are the critical functions that have been ascribed by these theories to conscious processing. So now we can say: give me an AI system, and let me check if it has the indicators that we, in this case our group, have put together as critical for consciousness. If it does have all these factors, all these indicators, I would say that there is at least a good chance that it is either conscious or can develop consciousness. And with that exercise, current AI systems might have one, two, three indicators out of the fourteen that we came up with, but not all of them. It doesn't mean that they cannot have all of them. We didn't find any substantial barrier to coming up with such systems, but currently, they don't.

And so I think that although it's very tempting to think about GPT as conscious, because it sometimes sounds like a human being, I think that it doesn't have the ability to experience. It can do amazing things, but is there anyone home, so to speak? Is anyone experiencing, or, qualitatively, again for lack of a better word, experiencing the world? I don't think so. I don't think we have any indication of that."

https://people.socsci.tau.ac.il/mu/mudriklab
https://people.socsci.tau.ac.il/mu/mudriklab/people
www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast
What is the purpose of education? How are we educating students for the future? What is the importance of the humanities in this age of AI and the rapidly changing workplace?

Michael S. Roth is President of Wesleyan University. His books include Beyond the University: Why Liberal Education Matters and Safe Enough Spaces: A Pragmatist's Approach to Inclusion, Free Speech, and Political Correctness on College Campuses. He has been a Professor of History and the Humanities since 1983, was the Founding Director of the Scripps College Humanities Institute, and was the Associate Director of the Getty Research Institute. His scholarly interests center on how people make sense of the past, and he has authored eight books around this topic, including his latest, The Student: A Short History.

https://www.wesleyan.edu/academics/faculty/mroth/profile.html
https://yalebooks.yale.edu/book/9780300250039/the-student/
www.wesleyan.edu
https://twitter.com/mroth78
www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast
"I'm really very glad. I was happy to see within my lifetime that the prospects of not just Mars, but in fact interstellar space, are being taken seriously. I've been at two conferences where we were talking about building the first starship within this century. One of my later books, Arkwright, is about such a project. I saw that Elon Musk is building Starship One, and I wish him all the best. And I envy anybody who goes.

I wish I were a younger person and in better health. Somebody asked me some time ago, would you go to Mars? And I said, 'I can't do it now. I've got a bum pancreas, and I'm 65 years old, and I'm not exactly the prime prospect for doing this. If you asked me 40 years ago would I go, I would have said: in a heartbeat!' I would gladly leave behind almost everything. I don't think I'd be glad about leaving my wife and family behind, but I'd be glad to go live on another planet, perhaps for the rest of my life, just for the chance to explore a new world, to be one of the settlers in a new world.

And I think this is something that's being taken seriously. It is very possible. We've got to be careful about how we do this. And we've got to be careful, particularly, about the rationale of the people who are doing this. It bothers me that Elon Musk has lately taken a shift to the Far Right. I don't know why that is. But I'd love to be able to sit down and talk with him about these things and try to understand why he has done such a right thing, but for what seems to be wrong reasons."

What does the future of space exploration look like? How can we unlock the opportunities of outer space without repeating the mistakes of colonization and exploitation committed on Earth? How can we ensure AI and new technologies reflect our values and the world we want to live in?

Allen Steele is a science fiction author and journalist. He has written novels, short stories, and essays and has been awarded a number of Hugo, Asimov's Readers', and Locus Awards. He's known for his Coyote Trilogy and Arkwright. He is a former member of the Board of Directors and Board of Advisors for the Science Fiction and Fantasy Writers of America. He has also served as an advisor for the Space Frontier Foundation. In 2001, he testified before the Subcommittee on Space and Aeronautics of the U.S. House of Representatives in hearings regarding space exploration in the 21st century.

www.allensteele.com
www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

Photo from a field trip to Pease Air Force Base in Portsmouth, NH, now closed. Photo credit: Chuck Peterson
What does the future of space exploration look like? How can we unlock the opportunities of outer space without repeating the mistakes of colonization and exploitation committed on Earth? How can we ensure AI and new technologies reflect our values and the world we want to live in?

Allen Steele is a science fiction author and journalist. He has written novels, short stories, and essays and has been awarded a number of Hugo, Asimov's Readers', and Locus Awards. He's known for his Coyote Trilogy and Arkwright. He is a former member of the Board of Directors and Board of Advisors for the Science Fiction and Fantasy Writers of America. He has also served as an advisor for the Space Frontier Foundation. In 2001, he testified before the Subcommittee on Space and Aeronautics of the U.S. House of Representatives in hearings regarding space exploration in the 21st century.

"I'm really very glad. I was happy to see within my lifetime that the prospects of not just Mars, but in fact interstellar space, are being taken seriously. I've been at two conferences where we were talking about building the first starship within this century. One of my later books, Arkwright, is about such a project. I saw that Elon Musk is building Starship One, and I wish him all the best. And I envy anybody who goes.

I wish I were a younger person and in better health. Somebody asked me some time ago, would you go to Mars? And I said, 'I can't do it now. I've got a bum pancreas, and I'm 65 years old, and I'm not exactly the prime prospect for doing this. If you asked me 40 years ago would I go, I would have said: in a heartbeat!' I would gladly leave behind almost everything. I don't think I'd be glad about leaving my wife and family behind, but I'd be glad to go live on another planet, perhaps for the rest of my life, just for the chance to explore a new world, to be one of the settlers in a new world.

And I think this is something that's being taken seriously. It is very possible. We've got to be careful about how we do this. And we've got to be careful, particularly, about the rationale of the people who are doing this. It bothers me that Elon Musk has lately taken a shift to the Far Right. I don't know why that is. But I'd love to be able to sit down and talk with him about these things and try to understand why he has done such a right thing, but for what seems to be wrong reasons."

www.allensteele.com
www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast
"Technology has very much changed the way we read and take in information and shortened it into quick bursts and attention spans. We're living in a new world, for sure. And how do we communicate in this new world? Not just in a way that gets the reach, because there are whole industries aimed at what do I do to get the most likes or the most attention, and all of that, which I don't think is very fulfilling as artists.

It's sort of a diminishing of our art form to try and play the game, because then we're getting the attention and getting the hits, as opposed to: what do I really want to create? How do I really want to create it? How do I want to display this? And can I do it in a way that breaks through, so that if I do it my way and it still gets the attention, great. But if it doesn't, can I be cool with that? And can I be okay creating what I want to create, knowing that that's what it's about? It's about sharing in an honest, authentic way what I want to express without letting the tentacles of social media drip into my brain and take over why I'm literally doing the things that I'm doing."

Max Stossel is an award-winning poet, filmmaker, and speaker, named by Forbes as one of the best storytellers of the year. His stand-up poetry special Words That Move takes the audience through a variety of different perspectives, inviting us to see the world through different eyes together. Taking on topics like heartbreak, consciousness, social media, politics, the emotional state of our world, and even how dogs probably (most certainly) talk, Max uses rhyme and rhythm to make these topics digestible and playful. Words That Move articulates the deep-seated kernels of truth that we so often struggle to find words for ourselves. Max has performed on five continents, from Lincoln Center in NY to the Hordern Pavilion in Sydney. He is also the Youth & Education Advisor for the Center for Humane Technology, an organization of former tech insiders dedicated to realigning technology with humanity's best interests.

www.wordsthatmove.com
www.instagram.com/maxstossel
www.humanetech.com
https://vimeo.com/690354718/54614a2318
www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast
Max Stossel is an award-winning poet, filmmaker, and speaker, named by Forbes as one of the best storytellers of the year. His stand-up poetry special Words That Move takes the audience through a variety of different perspectives, inviting us to see the world through different eyes together. Taking on topics like heartbreak, consciousness, social media, politics, the emotional state of our world, and even how dogs probably (most certainly) talk, Max uses rhyme and rhythm to make these topics digestible and playful. Words That Move articulates the deep-seated kernels of truth that we so often struggle to find words for ourselves. Max has performed on five continents, from Lincoln Center in NY to the Hordern Pavilion in Sydney. He is also the Youth & Education Advisor for the Center for Humane Technology, an organization of former tech insiders dedicated to realigning technology with humanity's best interests.

"Technology has very much changed the way we read and take in information and shortened it into quick bursts and attention spans. We're living in a new world, for sure. And how do we communicate in this new world? Not just in a way that gets the reach, because there are whole industries aimed at what do I do to get the most likes or the most attention, and all of that, which I don't think is very fulfilling as artists.

It's sort of a diminishing of our art form to try and play the game, because then we're getting the attention and getting the hits, as opposed to: what do I really want to create? How do I really want to create it? How do I want to display this? And can I do it in a way that breaks through, so that if I do it my way and it still gets the attention, great. But if it doesn't, can I be cool with that? And can I be okay creating what I want to create, knowing that that's what it's about? It's about sharing in an honest, authentic way what I want to express without letting the tentacles of social media drip into my brain and take over why I'm literally doing the things that I'm doing."

www.wordsthatmove.com
www.instagram.com/maxstossel
www.humanetech.com
https://vimeo.com/690354718/54614a2318
www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast
"So there are different parts of the brain responsible for liking and wanting. Wanting is unbelievably robust in the brain. In other words, the neural connections are very robust, and wanting is what drives most addictive behavior. It's when you really want something, like you want a cigarette, you want alcohol, a drug, whatever it is, that's your poison. And actually, screens for some people as well. As for the liking part: when you ask people what it means to be addicted to something, a lot of people say, 'You really like it so much that you just keep going back to it.'

It's actually not about liking. What actually happens is that, in the beginning, liking and wanting go together. So let's pick something like a cigarette. If you start smoking, in the beginning you like the experience of smoking, and you also really want the nicotine. You want the cigarette. They go hand in hand, but eventually what happens is the liking is much more fragile, and it decays. And what's left is the wanting, often in the absence of liking. It's kind of like a bad relationship. If you're in a bad romantic relationship, it starts out being about wanting and liking, but then the liking goes away, and you just kind of want to be with the person, even though you know it's undermining your welfare. That's effectively addiction. The real skill today is figuring out how to create space between you and your tech devices."

Adam Alter is a Professor of Marketing at NYU's Stern School of Business and the Robert Stansky Teaching Excellence Faculty Fellow. Adam is the New York Times bestselling author of Irresistible: The Rise of Addictive Technology and the Business of Keeping Us Hooked, and Drunk Tank Pink, which investigates how hidden forces in the world around us shape our thoughts, feelings, and behaviors. He has written for the New York Times, New Yorker, The Atlantic, and Washington Post, and has appeared on a host of TV and radio programs and in other publications. His next book, Anatomy of a Breakthrough, will be published in 2023.

https://adamalterauthor.com
www.penguin.co.uk/books/431386/irresistible-by-adam-alter/9781784701659
www.simonandschuster.com/books/Anatomy-of-a-Breakthrough/Adam-Alter/9781982182960
www.stern.nyu.edu/faculty/bio/adam-alter
www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast
“I'd like to focus more on the immediate harms that the kinds of AI technologies we have today might pose. With language models, the kind of technology that powers ChatGPT and other chatbots, there are harms that might result from regular use of these systems, and then there are harms that might result from malicious use. Regular use would be how you and I might use ChatGPT and other chatbots to do ordinary things. There is a concern that these systems might reproduce and amplify, for example, racist or sexist biases, or spread misinformation. These systems are known to, as researchers put it, 'hallucinate' in some cases, making up facts or false citations.

And then there are the harms from malicious use, which might result from some bad actors using the systems for nefarious purposes. That would include disinformation on a mass scale. You could imagine a bad actor using language models to automate the creation of fake news and propaganda to try to manipulate voters, for example. And this takes us into the medium-term future, because we're not quite there, but another concern would be language models providing dangerous, potentially illegal information that is not readily available on the internet for anyone to access. As they get better over time, there is a concern that in the wrong hands, these systems might become quite powerful weapons, at least indirectly, and so people have been trying to mitigate these potential harms.”

Dr. Raphaël Millière is Assistant Professor in Philosophy of AI at Macquarie University in Sydney, Australia. His research primarily explores the theoretical foundations and inner workings of AI systems based on deep learning, such as large language models. He investigates whether these systems can exhibit human-like cognitive capacities, drawing on theories and methods from cognitive science. He is also interested in how insights from studying AI might shed new light on human cognition. Ultimately, his work aims to advance our understanding of both artificial and natural intelligence.

https://raphaelmilliere.com
https://researchers.mq.edu.au/en/persons/raphael-milliere
www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast
How can we ensure that AI is aligned with human values? What can AI teach us about human cognition and creativity?Dr. Raphaël Millière is Assistant Professor in Philosophy of AI at Macquarie University in Sydney, Australia. His research primarily explores the theoretical foundations and inner workings of AI systems based on deep learning, such as large language models. He investigates whether these systems can exhibit human-like cognitive capacities, drawing on theories and methods from cognitive science. He is also interested in how insights from studying AI might shed new light on human cognition. Ultimately, his work aims to advance our understanding of both artificial and natural intelligence.“I'd like to focus more on the immediate harms that the kinds of AI technologies we have today might pose. With language models, the kind of technology that powers ChatGPT and other chatbots, there are harms that might result from regular use of these systems, and then there are harms that might result from malicious use. Regular use would be how you and I might use ChatGPT and other chatbots to do ordinary things. There is a concern that these systems might reproduce and amplify, for example, racist or sexist biases, or spread misinformation. These systems are known to, as researchers put it, “hallucinate” in some cases, making up facts or false citations. And then there are the harms from malicious use, which might result from some bad actors using the systems for nefarious purposes. That would include disinformation on a mass scale. You could imagine a bad actor using language models to automate the creation of fake news and propaganda to try to manipulate voters, for example. And this takes us into the medium term future, because we're not quite there, but another concern would be language models providing dangerous, potentially illegal information that is not readily available on the internet for anyone to access. 
As they get better over time, there is a concern that in the wrong hands, these systems might become quite powerful weapons, at least indirectly, and so people have been trying to mitigate these potential harms.”https://raphaelmilliere.comhttps://researchers.mq.edu.au/en/persons/raphael-milliere“I'd like to focus more on the immediate harms that the kinds of AI technologies we have today might pose. With language models, the kind of technology that powers ChatGPT and other chatbots, there are harms that might result from regular use of these systems, and then there are harms that might result from malicious use. Regular use would be how you and I might use ChatGPT and other chatbots to do ordinary things. There is a concern that these systems might reproduce and amplify, for example, racist or sexist biases, or spread misinformation. These systems are known to, as researchers put it, “hallucinate” in some cases, making up facts or false citations. And then there are the harms from malicious use, which might result from some bad actors using the systems for nefarious purposes. That would include disinformation on a mass scale. You could imagine a bad actor using language models to automate the creation of fake news and propaganda to try to manipulate voters, for example. And this takes us into the medium term future, because we're not quite there, but another concern would be language models providing dangerous, potentially illegal information that is not readily available on the internet for anyone to access. As they get better over time, there is a concern that in the wrong hands, these systems might become quite powerful weapons, at least indirectly, and so people have been trying to mitigate these potential harms.”www.creativeprocess.infowww.oneplanetpodcast.orgIG www.instagram.com/creativeprocesspodcast
How can physics help solve messy, real-world problems? How can we embrace the possibilities of AI while limiting existential risk and abuse by bad actors?

Neil Johnson is a physics professor at George Washington University. His new initiative in Complexity and Data Science at the Dynamic Online Networks Lab combines cross-disciplinary fundamental research with data science to attack complex real-world problems. His research interests lie in the broad area of Complex Systems and ‘many-body’ out-of-equilibrium systems of collections of objects, ranging from crowds of particles to crowds of people, and from environments as distinct as quantum information processing in nanostructures to the online world of collective behavior on social media.

“It gets back to this core question. I just wish I was a young scientist going into this because that's the question to answer: why AI comes out with what it does. That's the burning question. It's bigger than the origin of the universe to me as a scientist, and here's the reason why. The origin of the universe, it happened. That's why we're here. It's almost like a historical question asking why it happened. The AI future is not a historical question. It's a now and future question.

I'm a huge optimist for AI, actually. I see it as part of that process of climbing its own mountain. It could do wonders for so many areas of science and medicine. When the car came out, the car initially was a disaster. But you fast forward, and it was the key to so many advances in society. I think it's exactly the same with AI. The big challenge is to understand why it works. AI existed for years, but it was useless. Nothing useful, nothing useful, nothing useful. And then maybe last year or something, now it's really useful. There seemed to be some kind of jump in its ability, almost like a shock wave. We're trying to develop an understanding of how AI operates in terms of these shockwave jumps. Revealing how AI works will help society understand what it can and can't do and therefore remove some of this dark fear of being taken over. If you don't understand how AI works, how can you govern it? To get effective governance, you need to understand how AI works because otherwise you don't know what you're going to regulate.”

https://physics.columbian.gwu.edu/neil-johnson
https://donlab.columbian.gwu.edu
www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast
How does the brain process emotions? How are emotional memories formed and stored in the brain, and how do they influence behavior, perception, and decision-making? How does music help us understand our emotions, memories, and the nature of consciousness?

Joseph LeDoux is a Professor of Neural Science at New York University (NYU) and was Director of the Emotional Brain Institute. His research primarily focuses on survival circuits, including their impacts on emotions, such as fear and anxiety. He has written a number of books in this field, including The Four Realms of Existence: A New Theory of Being Human, The Emotional Brain, Synaptic Self, Anxious, and The Deep History of Ourselves. LeDoux is also the lead singer and songwriter of the band The Amygdaloids.

“We've got four billion years of biological accidents that created all of the intricate aspects of everything about life, including consciousness. And it's about what's going on in each of those cells at the time that allows it to be connected to everything else and for the information to be understood as it's being exchanged between those things with their multifaceted, deep, complex processing.”

www.joseph-ledoux.com
www.cns.nyu.edu/ebi
https://amygdaloids.net
www.hup.harvard.edu/books/9780674261259
www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

Music courtesy of Joseph LeDoux
How can we enhance our emotional intelligence and avoid burnout in a changing world? How can we regain focus and perform in an optimal state? What do we mean by ecological intelligence?

Daniel Goleman is an American psychologist, author, and science journalist. Before becoming an author, Goleman was a science reporter for The New York Times for 12 years, covering psychology and the human brain. In 1995, he published Emotional Intelligence, a New York Times bestseller. In his newly published book Optimal, Goleman discusses how people can enter an optimal state of high performance without facing symptoms of burnout in the workplace.

“AI is brilliant at cognitive empathy. However, the next kind is emotional empathy. Emotional empathy means: I know what you feel because I'm feeling it too. And this has to do with circuitry in the fore part of the brain, which creates a brain-to-brain circuit that's automatic, unconscious, and instantaneous. And emotions pass very well across that. I think AI might flunk here because it has no emotion. It can mimic empathy, but it doesn't really feel empathy. The third kind is empathic concern. Technically, it means caring. It's the basis of love. It's the same circuitry as a parent's love for a child, actually. But I think that leaders need this very much. AI has no emotion, so it doesn't have emotional self-awareness. It can't tune in. I don't think it can be empathic because AI is a set of codes, basically. It doesn't have the ability to manage emotion because it doesn't have emotion. It's interesting. I was just talking to a group at Microsoft, which is one of the leading developers of AI, and one of the people there was talking about inculcating love into AI, or caring into AI, as maybe an antidote to the negative potential of AI for humanity. But I think there will always be room for the human, for a leader. I don't think that people will find that they can trust AI the same way they can trust a leader who cares.”

www.danielgoleman.info
www.harpercollins.com/products/optimal-daniel-golemancary-cherniss?variant=41046795288610
www.penguinrandomhouse.com/books/69105/emotional-intelligence-by-daniel-goleman/
www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast