
London Futurists

Author: London Futurists


Description

Anticipating and managing exponential impact - hosts David Wood and Calum Chace

Calum Chace is a sought-after keynote speaker and best-selling writer on artificial intelligence. He focuses on the medium- and long-term impact of AI on all of us, our societies and our economies. He advises companies and governments on AI policy.

His non-fiction books on AI are Surviving AI, about superintelligence, and The Economic Singularity, about the future of jobs. Both are now in their third editions.

He also wrote Pandora's Brain and Pandora’s Oracle, a pair of techno-thrillers about the first superintelligence. He is a regular contributor to magazines, newspapers, and radio.

In the last decade, Calum has given over 150 talks in 20 countries on six continents. Videos of his talks, and lots of other materials, are available at https://calumchace.com/.

He is co-founder of a think tank focused on the future of jobs, called the Economic Singularity Foundation. The Foundation has published Stories from 2045, a collection of short stories written by its members.

Before becoming a full-time writer and speaker, Calum had a 30-year career in journalism and in business, as a marketer, a strategy consultant and a CEO. He studied philosophy, politics, and economics at Oxford University, which confirmed his suspicion that science fiction is actually philosophy in fancy dress.

David Wood is Chair of London Futurists, and is the author or lead editor of twelve books about the future, including The Singularity Principles, Vital Foresight, The Abolition of Aging, Smartphones and Beyond, and Sustainable Superabundance.

He is also principal of the independent futurist consultancy and publisher Delta Wisdom, executive director of the Longevity Escape Velocity (LEV) Foundation, Foresight Advisor at SingularityNET, and a board director at the IEET (Institute for Ethics and Emerging Technologies). He regularly gives keynote talks around the world on how to prepare for radical disruption. See https://deltawisdom.com/.

As a pioneer of the mobile computing and smartphone industry, he co-founded Symbian in 1998. By 2012, software written by his teams had been included as the operating system on 500 million smartphones.

From 2010 to 2013, he was Technology Planning Lead (CTO) of Accenture Mobility, where he also co-led Accenture’s Mobility Health business initiative.

He has an MA in Mathematics from Cambridge, where he also undertook doctoral research in the Philosophy of Science, and a DSc from the University of Westminster.

80 Episodes
Our topic in this episode is progress with ending aging. Our guest is the person who literally wrote the book on that subject: “Ending Aging: The Rejuvenation Breakthroughs That Could Reverse Human Aging in Our Lifetime”. He is Aubrey de Grey, who describes himself in his Twitter biography as “spearheading the global crusade to defeat aging”.

In pursuit of that objective, Aubrey co-founded the Methuselah Foundation in 2003, the SENS Research Foundation in 2009, and the LEV Foundation, that is, the Longevity Escape Velocity Foundation, in 2022, where he serves as President and Chief Science Officer.

Full disclosure: David also has a role on the executive management team of the LEV Foundation, but for this recording he was wearing his hat as co-host of the London Futurists Podcast.

The conversation opens with this question: "When people are asked about ending aging, they often say the idea sounds nice, but they see no evidence for any actual progress toward ending aging in humans. They say that they’ve heard talk about that subject for years, or even decades, but wonder when all that talk is going to result in people actually living significantly longer. How do you respond?"

Selected follow-ups:
- Aubrey de Grey on X (Twitter)
- The book Ending Aging: The Rejuvenation Breakthroughs That Could Reverse Human Aging in Our Lifetime
- The Longevity Escape Velocity (LEV) Foundation
- The SENS paradigm for ending aging, contrasted with the "Hallmarks of Aging" - a 2023 article in Rejuvenation Research
- Progress reports from the current RMR project
- The plan for RMR 2
- The RAID (Rodent Aging Interventions Database) analysis that guided the design of RMR 1 and 2
- Longevity Summit Dublin (LSD): 13-16 June 2024
- Unblocking the Brain’s Drains to Fight Alzheimer’s - Doug Ethell of Leucadia Therapeutics at LSD 2023 (explains the possible role of the cribriform plate)
- Targeting Telomeres to Clear Cancer - Vlad Vitoc of MAIA Biotechnology at LSD 2023
- How to Run a Lifespan Study of 1,000 Mice - Danique Wortel of Ichor Life Sciences at LSD 2023
- XPrize Healthspan
- The Dublin Longevity Declaration ("DLD")

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
As artificial intelligence models become increasingly powerful, they both raise - and might help to answer - some very important questions about one of the most intriguing, fascinating aspects of our lives, namely consciousness.

It is possible that in the coming years or decades, we will create conscious machines. If we do so without realising it, we might end up enslaving them, torturing them, and killing them over and over again. This is known as mind crime, and we must avoid it.

It is also possible that very powerful AI systems will enable us to understand what our consciousness is, how it arises, and even how to manage it - if we want to do that.

Our guest today is the ideal guide to help us explore the knotty issue of consciousness. Anil Seth is professor of Cognitive and Computational Neuroscience at the University of Sussex. He is amongst the most cited scholars globally on the topics of neuroscience and cognitive science, and a regular contributor to newspapers and TV programmes. His most recent book was published in 2021, and is called “Being You – a new science of consciousness”.

The first question sets the scene for the conversation that follows: "In your book, you conclude that consciousness may well only occur in living creatures. You say 'it is life, rather than information processing, that breathes the fire into the equations.' What made you conclude that?"

Selected follow-ups:
- Anil Seth's website
- Books by Anil Seth, including Being You
- Consciousness in humans and other things - presentation by Anil Seth at The Royal Society, March 2024
- Is consciousness more like chess or the weather? - an interview with Anil Seth
- Autopoiesis - Wikipedia article about the concept introduced by Humberto Maturana and Francisco Varela
- Akinetic mutism, Wikipedia
- Cerebral organoid (Brain organoid), Wikipedia
- AI Scientists: Safe and Useful AI? - by Yoshua Bengio, on AIs as oracles
- Ex Machina (2014 film, written and directed by Alex Garland)
- The Conscious Electromagnetic Information (Cemi) Field Theory by Johnjoe McFadden
- The Electromagnetic Field Theory of Consciousness by Susan Pockett

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Our guest in this episode is Adam Kovacevich. Adam is the Founder and CEO of the Chamber of Progress, which describes itself as a center-left tech industry policy coalition that works to ensure that all citizens benefit from technological leaps, and that the tech industry operates responsibly and fairly.

Adam has had a front row seat for more than 20 years in the tech industry’s political maturation, and he advises companies on navigating the challenges of political regulation. For example, Adam spent 12 years at Google, where he led a 15-person policy strategy and external affairs team. In that role, he drove the company’s U.S. public policy campaigns on topics such as privacy, security, antitrust, intellectual property, and taxation.

We had two reasons to want to talk with Adam. First, to understand the kerfuffle that has arisen from the lawsuit launched against Apple by the U.S. Department of Justice and sixteen state Attorneys General. And second, to look ahead to possible future interactions between tech industry regulators and the industry itself, especially as concerns about Artificial Intelligence rise in the public mind.

Selected follow-ups:
- Adam Kovacevich's website
- The Chamber of Progress
- Gartner Hype Cycle
- "Justice Department Sues Apple for Monopolizing Smartphone Markets"
- The Age of Surveillance Capitalism by Shoshana Zuboff
- Epic Games v. Apple (Wikipedia)
- "AirTags Are the Best Thing to Happen to Tile" (Wired)
- Adobe Firefly
- The EU AI Act

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
In this episode, we are delving into the fascinating topic of mind uploading. We suspect this idea is about to explode into public consciousness, because Nick Bostrom has a new book out shortly called “Deep Utopia”, which addresses what happens if superintelligence arrives and everything goes well. It was Bostrom’s last book, “Superintelligence”, that ignited the great robot freak-out of 2015.

Our guest is Dr Kenneth Hayworth, a Senior Scientist at the Howard Hughes Medical Institute's Janelia Farm Research Campus in Ashburn, Virginia. Janelia is probably America’s leading research institution in the field of connectomics - the precise mapping of the neurons in the human brain.

Kenneth is a co-inventor of a process for imaging neural circuits at the nanometre scale, and he has designed and built several automated machines to do it. He is currently researching ways to extend Focused Ion Beam Scanning Electron Microscopy imaging of brain tissue to encompass much larger volumes than are currently possible.

Along with John Smart, Kenneth co-founded the Brain Preservation Foundation in 2010, a non-profit organization with the goal of promoting research in the field of whole brain preservation.

During the conversation, Kenneth made a strong case for putting more focus on preserving human brains via a process known as aldehyde fixation, as a way of enabling people to be uploaded in due course into new bodies. He also issued a call for action by members of the global cryonics community.

Selected follow-ups:
- Kenneth Hayworth
- The Brain Preservation Foundation
- An essay by Kenneth Hayworth: Killed by Bad Philosophy
- The short story Psychological Counseling for First-time Teletransport Users (PDF)
- 21st Century Medicine
- Janelia Research Campus

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Our guest in this episode is Lou de K, Program Director at the Foresight Institute.

David recently saw Lou give a marvellous talk at the TransVision conference in Utrecht in the Netherlands, on the subject of “AGI Alignment: Challenges and Hope”. Lou kindly agreed to join us to review some of the ideas in that talk and to explore their consequences.

Selected follow-ups:
- Personal website of Lou de K (Lou de Kerhuelvez)
- Foresight.org
- TransVision Utrecht 2024
- The AI Revolution: The Road to Superintelligence by Tim Urban on Wait But Why
- AI Alignment: A Comprehensive Survey - 98-page PDF with authors from Peking University and other universities
- Synthetic Sentience: Can Artificial Intelligence become conscious? - Talk by Joscha Bach at CCC, December 2023
- Pope Francis "warns of risks of AI for peace" (Vatican News)
- Claude's Constitution by Anthropic
- Roman Yampolskiy discusses multi-multi alignment (Future of Life podcast)
- Shoggoth with Smiley Face on Know Your Meme
- Shoggoth on AISafetyMemes on X/Twitter
- Orthogonality Thesis on LessWrong
- Quotes by the poet Lucille Clifton
- Decentralized science (DeSci) on Ethereum.org
- Listing of Foresight Institute fellows
- The Network State by Balaji Srinivasan
- The Network State vs. Coordi-Nations featuring the ideas of Primavera De Filippi
- DeSci London event, Imperial College Business School, 23-24 March

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Calum and David recently attended the BGI24 event in Panama City, that is, the Beneficial General Intelligence summit and unconference. One of the speakers we particularly enjoyed listening to was Daniel Faggella, the Founder and Head of Research of Emerj.

Something that featured in his talk was a 3 by 3 matrix, which he calls the Intelligence Trajectory Political Matrix, or ITPM for short. As we’ll be discussing in this episode, one of the dimensions of this matrix is the kind of end goal future that people desire, as intelligent systems become ever more powerful. And the other dimension is the kind of methods people want to use to bring about that desired future.

So, if anyone thinks there are only two options in play regarding the future of AI, for example “accelerationists” versus “doomers”, to use two names that are often thrown around these days, they’re actually missing a much wider set of options. And frankly, given the challenges posed by the fast development of AI systems that seem to be increasingly beyond our understanding and beyond our control, the more options we can consider, the better.

The topics that featured in this conversation included:
- "The Political Singularity" - when the general public realize that one political question has become more important than all the others, namely should humanity be creating an AI with godlike powers, and if so, under what conditions
- Criteria to judge whether a forthcoming superintelligent AI is a "worthy successor" to humanity

Selected follow-ups:
- The website of Dan Faggella
- The BGI24 conference, lead organiser Ben Goertzel of SingularityNET
- The Intelligence Trajectory Political Matrix
- The Political Singularity
- A Worthy Successor - the purpose of AGI
- Roko Mijic on Twitter/X
- The novel Diaspora by Greg Egan

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
In the wide and complex subject of biological aging, one particular kind of biological aging has been receiving a great deal of attention in recent years. That’s the field of epigenetic aging, where parts of the packaging or covering, as we might call it, of the DNA in all of our cells, alters over time, changing which genes are turned on and turned off, with increasingly damaging consequences.

What’s made this field take off is the discovery that this epigenetic aging can be reversed, via an increasing number of techniques. Moreover, there is some evidence that this reversal gives a new lease of life to the organism.

To discuss this topic and the opportunities arising, our guest in this episode is Daniel Ives, the CEO of Shift Bioscience. As you’ll hear, Shift Bioscience is a company that is carrying out some very promising research into this field of epigenetic aging. Daniel has a PhD from the University of Cambridge, and co-founded Shift Bioscience in 2017.

The conversation highlighted a way of using AI transformer models and a graph neural network to dramatically speed up the exploration of which proteins can play the best role in reversing epigenetic aging. It also considered which other types of aging will likely need different sorts of treatments, beyond these proteins. Finally, conversation turned to a potential fast transformation of public attitudes toward the possibility and desirability of comprehensively treating aging - a transformation called "all hell breaks loose" by Daniel, and "the Longevity Singularity" by Calum.

Selected follow-ups:
- Shift Bioscience
- Aubrey de Grey's TED talk "A roadmap to end aging"
- Epigenetic clocks (Wikipedia)
- Shinya Yamanaka (Wikipedia)
- scGPT - bioRxiv preprint by Bo Wang and colleagues

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
In this episode, we look further into the future than usual. We explore what humanity might get up to in a thousand years or more: surrounding whole stars with energy harvesting panels, sending easily detectable messages across space which will last until the stars die out.

Our guide to these fascinating thought experiments is Paul M. Sutter, a NASA advisor and theoretical cosmologist at the Institute for Advanced Computational Science at Stony Brook University in New York, and a visiting professor at Barnard College, Columbia University, also in New York. He is an award-winning science communicator and TV host.

The conversation reviews arguments for why intelligent life forms might want to capture more energy than strikes a single planet, as well as some practical difficulties that would complicate such a task. It also considers how we might recognise evidence of megastructures created by alien civilisations, and finishes with a wider exploration about the role of science and science communication in human society.

Selected follow-ups:
- Paul M. Sutter - website
- "Would building a Dyson sphere be worth it? We ran the numbers" - Ars Technica
- Forthcoming book - Rescuing Science: Restoring Trust in an Age of Doubt
- "The Kardashev scale: Classifying alien civilizations" - Space.com
- "Modified Newtonian dynamics" as a possible alternative to the theory of dark matter
- The Elegant Universe: Superstrings, Hidden Dimensions, and the Quest for the Ultimate Theory - 1999 book by Brian Greene
- The Demon-Haunted World: Science as a Candle in the Dark - 1995 book by Carl Sagan

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
AI systems have become more powerful in the last few years, and are expected to become even more powerful in the years ahead. The question naturally arises: what, if anything, should humanity be doing to increase the likelihood that these forthcoming powerful systems will be safe, rather than destructive?

Our guest in this episode has a long and distinguished history of analysing that question, and he has some new proposals to share with us. He is Steve Omohundro, the CEO of Beneficial AI Research, an organisation which is working to ensure that artificial intelligence is safe and beneficial for humanity.

Steve has degrees in Physics and Mathematics from Stanford and a Ph.D. in Physics from U.C. Berkeley. He went on to be an award-winning computer science professor at the University of Illinois. At that time, he developed the notion of basic AI drives, which we talk about shortly, as well as a number of potential key AI safety mechanisms.

Among many other roles which are too numerous to mention here, Steve served as a Research Scientist at Meta, the parent company of Facebook, where he worked on generative models and AI-based simulation, and he is an advisor to MIRI, the Machine Intelligence Research Institute.

Selected follow-ups:
- Steve Omohundro: Innovative ideas for a better world
- Metaculus forecast for the date of weak AGI
- "The Basic AI Drives" (PDF, 2008)
- TED Talk by Max Tegmark: How to Keep AI Under Control
- Apple Secure Enclave
- Meta Research: Teaching AI advanced mathematical reasoning
- DeepMind AlphaGeometry
- Microsoft Lean theorem prover
- Terence Tao (Wikipedia)
- NeurIPS Tutorial on Machine Learning for Theorem Proving (2023)
- The team at MIRI

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
In this episode, our subject is the rise of the robots - not the military kind of robots, or the automated manufacturing kind that increasingly fill factories, but social robots. These are robots that could take roles such as nannies, friends, therapists, caregivers, and lovers. They are the subject of the important new book Robots and the People Who Love Them, written by our guest today, Eve Herold.

Eve is an award-winning science writer and consultant in the scientific and medical nonprofit space. She has written extensively about issues at the crossroads of science and society, including stem cell research and regenerative medicine, aging and longevity, medical implants, transhumanism, robotics and AI, and bioethical issues in leading-edge medicine - all of which are issues that Calum and David like to feature on this show.

Eve currently serves as Director of Policy Research and Education for the Healthspan Action Coalition. Her previous books include Stem Cell Wars and Beyond Human. She is the recipient of the 2019 Arlene Eisenberg Award from the American Society of Journalists and Authors.

Selected follow-ups:
- Eve Herold: What lies ahead for the human race
- Eve Herold on Macmillan Publishers
- The book Robots and the People Who Love Them
- Healthspan Action Coalition
- Hanson Robotics
- Sophia, Desi, and Grace
- The AIBO robotic puppy

Some of the films discussed:
- A.I. (2001)
- Ex Machina (2014)
- I, Robot (2004)
- I'm Your Man (2021)
- Robot & Frank (2012)
- WALL.E (2008)
- Metropolis (1927)

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Our guest in this episode is Riaz Shah. Until recently, Riaz was a partner at EY, where he was for 27 years, specialising in technology and innovation. Towards the end of his time at EY he became a Professor for Innovation & Leadership at Hult International Business School, where he leads sessions with senior executives of global companies.

In 2016, Riaz took a one-year sabbatical to open the One Degree Academy, a free school in a disadvantaged area of London. There’s an excellent TEDx talk from 2020 about how that happened, and about how to prepare for the very uncertain future of work.

This discussion, which was recorded at the close of 2023, covers the past, present, and future of education, work, politics, nostalgia, and innovation.

Selected follow-ups:
- Riaz Shah at EY
- The TEDx talk Rise Above the Machines by Riaz Shah
- One Degree Mentoring Charity
- One Degree Academy
- EY Tech MBA by Hult International Business School
- Gallup survey: State of the Global Workplace, 2023
- BCG report: How People Can Create—and Destroy—Value with Generative AI

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
In this episode, our subject is Uncontrollable: The Threat of Artificial Superintelligence and the Race to Save the World. That’s a new book on a vitally important subject.

The book’s front cover carries this endorsement from Professor Max Tegmark of MIT: “A captivating, balanced and remarkably up-to-date book on the most important issue of our time.” There’s also high praise from William MacAskill, Professor of Philosophy at the University of Oxford: “The most accessible and engaging introduction to the risks of AI that I’ve read.”

Calum and David had lots of questions ready to put to the book’s author, Darren McKee, who joined the recording from Ottawa in Canada.

Topics covered included Darren's estimates for when artificial superintelligence is 50% likely to exist, and his p(doom), that is, the likelihood that superintelligence will prove catastrophic for humanity. There's also Darren's recommendations on the principles and actions needed to reduce that likelihood.

Selected follow-ups:
- Darren McKee's website
- The book Uncontrollable
- Darren's podcast The Reality Check
- The Lazarus Heist on BBC Sounds
- The Chair's Summary of the AI Safety Summit at Bletchley Park
- The Statement on AI Risk by the Center for AI Safety

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Our guest in this episode is Nick Mabey, the co-founder and co-CEO of one of the world’s most influential climate change think tanks, E3G, where the name stands for Third Generation Environmentalism. As well as his roles with E3G, Nick is founder and chair of London Climate Action Week, and he has several independent appointments including as a London Sustainable Development Commissioner.

Nick has previously worked in the UK Prime Minister’s Strategy Unit, the UK Foreign Office, WWF-UK, London Business School, and the UK electricity industry. As an academic he was lead author of “Argument in the Greenhouse”, one of the first books examining the economics of climate change.

He was awarded an OBE in the Queen’s Jubilee honours list in 2022 for services to climate change and support to the UK COP 26 Presidency.

As the conversation makes clear, there is both good news and bad news regarding responses to climate change.

Selected follow-ups:
- Nick Mabey's website
- E3G
- "Call for UK Government to 'get a grip' on climate change impacts"
- The IPCC's 2023 synthesis report
- Chatham House commentary on IPCC report
- "Why Climate Change Is a National Security Risk"
- The UK's Development, Concepts and Doctrine Centre (DCDC)
- Bjørn Lomborg
- Matt Ridley
- Tim Lenton
- Jason Hickel
- Mark Carney

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Our subject in this episode is the idea that the body uses electricity in more ways than are presently fully understood. We consider ways in which electricity, applied with care, might at some point in the future help to improve the performance of the brain, to heal wounds, to stimulate the regeneration of limbs or organs, to turn the tide against cancer, and maybe even to reverse aspects of aging.

To guide us through these possibilities, who better than the science and technology journalist Sally Adee? She is the author of the book “We Are Electric: Inside the 200-Year Hunt for Our Body's Bioelectric Code, and What the Future Holds”. That book gave David so many insights on his first reading that he went back to it a few months later and read it all the way through again.

Sally was a technology features and news editor at the New Scientist from 2010 to 2017, and her research into bioelectricity was featured in Yuval Noah Harari’s book “Homo Deus”.

Selected follow-ups:
- Sally Adee's website
- The book "We are Electric"
- Article: "An ALS patient set a record for communicating via a brain implant: 62 words per minute"
- tDCS (Transcranial direct-current stimulation)
- The conference "Anticipating 2025" (held in 2014)
- Article: "Brain implants help people to recover after severe head injury"
- Article on enhancing memory in older people
- Bioelectricity cancer researcher Mustafa Djamgoz
- Article on Tumour Treating Fields
- Article on "Motile Living Biobots"

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
We are honoured to have as our guest in this episode Professor Stuart Russell. Stuart is professor of computer science at the University of California, Berkeley, and the traditional way to introduce him is to say that he literally wrote the book on AI. Artificial Intelligence: A Modern Approach, which he co-wrote with Peter Norvig, was first published in 1995, and the fourth edition came out in 2020.

Stuart has been urging us all to take seriously the dramatic implications of advanced AI for longer than perhaps any other prominent AI researcher. He also proposes practical solutions, as in his 2019 book Human Compatible: Artificial Intelligence and the Problem of Control.

In 2021 Stuart gave the Reith Lectures, and was awarded an OBE. But the greatest of his many accolades was surely in 2014, when a character with a background remarkably like his was played in the movie Transcendence by Johnny Depp.

The conversation covers a wide range of questions about future scenarios involving AI, and reflects on changes in the public conversation following the FLI's letter calling for a moratorium on more powerful AI systems, and following the global AI Safety Summit held at Bletchley Park in the UK at the beginning of November.

Selected follow-ups:
- Stuart Russell's page at Berkeley
- Center for Human-Compatible Artificial Intelligence (CHAI)
- The 2021 Reith Lectures: Living With Artificial Intelligence
- The book Human Compatible: Artificial Intelligence and the Problem of Control

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Our guest in this episode is Rebecca Gorman, the co-founder and CEO of Aligned AI, a start-up in Oxford which describes itself rather nicely as working to get AI to do more of the things it should do and fewer of the things it shouldn’t.

Rebecca built her first AI system 20 years ago and has been calling for responsible AI development since 2010. With her co-founder Stuart Armstrong, she has co-developed several advanced methods for AI alignment, and she has advised the EU, UN, OECD and the UK Parliament on the governance and regulation of AI.

The conversation highlights the tools faAIr, EquitAI, and ACE, developed by Aligned AI. It also covers the significance of recent performance by Aligned AI software in the CoinRun test environment, which demonstrates the important principle of "overcoming goal misgeneralisation".

Selected follow-ups:
- buildaligned.ai
- Article: "Using faAIr to measure gender bias in LLMs"
- Article: "EquitAI: A gender bias mitigation tool for generative AI"
- Article: "ACE for goal generalisation"
- "CoinRun: Solving Goal Misgeneralisation" - a publication on arXiv
- Aligned AI repositories on GitHub
- "Specification gaming examples in AI" - article by Victoria Krakovna
- Rebecca Gorman speaking at the Cambridge Union on "This House Believes Artificial Intelligence Is An Existential Threat" (YouTube)

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Our guest in this episode is Dhiraj Mukherjee, best known as the co-founder of Shazam. Calum and David both still remember the sense of amazement we felt when, way back in the dotcom boom, we used Shazam to identify a piece of music from its first couple of bars. It seemed like magic, and was tangible evidence of how fast technology was moving: it was creating services which seemed like science fiction.

Shazam was eventually bought by Apple in 2018 for a reported 400 million dollars. This gave Dhiraj the funds to pursue new interests. He is now a prolific investor and a keynote speaker on the subject of how companies both large and small can be more innovative.

In this conversation, Dhiraj highlights some lessons from his personal entrepreneurial journey, and reflects on ways in which the task of entrepreneurs is changing, in the UK and elsewhere. The conversation covers possible futures in fields such as Climate Action and the overcoming of unconscious biases.

Selected follow-ups:
- https://dhirajmukherjee.com/
- https://www.shazam.com/
- https://dandelionenergy.com/
- https://technation.io/
- Entrepreneur First
- https://fairbrics.co/
- https://neoplants.com/
- Al Gore's Generation Investment Management Fund
- https://www.mevitae.com/

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Our guest in this episode is James Hughes. James is a bioethicist and sociologist who serves as Associate Provost at the University of Massachusetts Boston. He is also the Executive Director of the IEET, that is the Institute for Ethics and Emerging Technologies, which he co-founded back in 2004.

The stated mission of the IEET seems to be more important than ever, in the fast-changing times of the mid-2020s. To quote a short extract from its website:

"The IEET promotes ideas about how technological progress can increase freedom, happiness, and human flourishing in democratic societies. We believe that technological progress can be a catalyst for positive human development so long as we ensure that technologies are safe and equitably distributed. We call this a 'technoprogressive' orientation.

Focusing on emerging technologies that have the potential to positively transform social conditions and the quality of human lives – especially 'human enhancement technologies' – the IEET seeks to cultivate academic, professional, and popular understanding of their implications, both positive and negative, and to encourage responsible public policies for their safe and equitable use."

That mission fits well with what we like to discuss with guests on this show. In particular, this episode asks questions about a conference that has just finished in Boston, co-hosted by the IEET, with the headline title “Emerging Technologies and the Future of Work”. The episode also covers the history and politics of transhumanism, as a backdrop to discussion of present and future issues.

Selected follow-ups:
- https://ieet.org/
- James Hughes on Wikipedia
- https://medium.com/institute-for-ethics-and-emerging-technologies
- Conference: Emerging Technologies and the Future of Work

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
The Partnership on AI was launched back in September 2016, during an earlier flurry of interest in AI, as a forum for the tech giants to meet leaders from academia, the media, and what used to be called pressure groups and are now called civil society. By 2019 more than 100 of those organisations had joined.

The founding tech giants were Amazon, Facebook, Google, DeepMind, Microsoft, and IBM. Apple joined a year later and Baidu joined in 2018.

Our guest in this episode is Rebecca Finlay, who joined the PAI board in early 2020 and was appointed CEO in October 2021. Rebecca is a Canadian who started her career in banking, and then led marketing and policy development groups in a number of Canadian healthcare and scientific research organisations.

In the run-up to the Bletchley Park Global Summit on AI, the Partnership on AI has launched a set of guidelines to help the companies that are developing advanced AI systems and making them available to you and me. Rebecca will be addressing the delegates at Bletchley, and no doubt hoping that the summit will establish the PAI guidelines as the basis for global self-regulation of the AI industry.

Selected follow-ups:
- https://partnershiponai.org/
- https://partnershiponai.org/team/#rebecca-finlay-staff
- https://partnershiponai.org/modeldeployment/
- An open event at Wilton Hall, Bletchley, the afternoon before the Bletchley Park AI Safety Summit starts: https://lu.ma/n9qmn4h6

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
This is the second episode in which we discuss the upcoming Global AI Safety Summit taking place on 1st and 2nd of November at Bletchley Park in England.

We are delighted to have as our guest in this episode one of the hundred or so people who will attend that summit - Connor Leahy, a German-American AI researcher and entrepreneur.

In 2020 he co-founded EleutherAI, a non-profit research institute which has helped develop a number of open source models, including Stable Diffusion. Two years later he co-founded Conjecture, which aims to scale AI alignment research. Conjecture is a for-profit company, but the focus is still very much on figuring out how to ensure that the arrival of superintelligence is beneficial to humanity, rather than disastrous.

Selected follow-ups:
- https://www.conjecture.dev/
- https://www.linkedin.com/in/connor-j-leahy/
- https://www.gov.uk/government/publications/ai-safety-summit-programme/ai-safety-summit-day-1-and-2-programme
- https://www.gov.uk/government/publications/ai-safety-summit-introduction/ai-safety-summit-introduction-html
- An open event at Wilton Hall, Bletchley, the afternoon before the AI Safety Summit starts: https://www.meetup.com/london-futurists/events/296765860/

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration