London Futurists

Author: London Futurists

Description

Anticipating and managing exponential impact - hosts David Wood and Calum Chace
32 Episodes
In the last few weeks, the pace of change in AI has been faster than ever before. The changes aren't just announcements of future capabilities - announcements that could have been viewed, perhaps, as hype. The changes are new versions of AI systems that are available for users around the world to experiment with, directly, here and now. These systems are being released by multiple different companies, and also by open-source collaborations. And users of these systems are frequently expressing surprise: the systems are by no means perfect, but they regularly out-perform previous expectations, sometimes in astonishing ways.

In this episode, Calum Chace and David Wood, the co-hosts of this podcast series, discuss the wider implications of these new AI systems. David asks Calum if he has changed any of his ideas about what he has called "the two singularities", namely the Economic Singularity and the Technological Singularity, as covered in a number of books he has written.

Calum has been a full-time writer and speaker on the subject of AI since 2012. Earlier in his life, he studied philosophy, politics, and economics at Oxford University, and trained as a journalist at the BBC. He wrote a column in the Financial Times and nowadays is a regular contributor to Forbes magazine. In between, he held a number of roles in business, including leading a media practice at KPMG. In the last few days, he has been taking a close look at GPT-4.

Selected follow-up reading:
https://calumchace.com/the-economic-singularity/
https://calumchace.com/surviving-ai-synopsis/

Topics in this conversation include:
*) Is the media excitement about GPT-4 and its predecessor ChatGPT overblown, or are these systems signs of truly important disruptions?
*) How do these new AI systems compare with earlier AIs?
*) The two "big bangs" in AI history
*) How transformers work (a minimal sketch follows these notes)
*) The difference between self-supervised learning and supervised learning
*) The significance of OpenAI enabling general public access to ChatGPT
*) Market competition between Microsoft Bing and Google Search
*) Unwholesome replies by Microsoft Sydney and Google Bard - and the intended role of RLHF (Reinforcement Learning with Human Feedback)
*) How basic reasoning seems to emerge (unexpectedly) from pattern recognition at sufficient scale
*) Examples of how the jobs of knowledge workers are being changed by GPT-4
*) What will happen to departments where each human knowledge worker gains a tenfold productivity boost?
*) From the job churns of the past to the Great Churn of the near future
*) The forthcoming wave of automation is not only more general than past waves, but will also proceed at a much faster pace
*) Improvements in the writing AI produces, such as book chapters
*) Revisions of timelines for the Economic and Technological Singularities?
*) It now seems that human intelligence is less hard to replicate than was previously thought
*) The Technological Singularity might arrive before an Economic Singularity
*) The liberating vision of people no longer needing to be wage slaves, and the threat of almost everyone living in poverty
*) The insufficiency of UBI (Universal Basic Income) unless an economy of abundance is achieved (bringing the costs of goods and services down toward zero)
*) Is the creation of AI now out of control, with a rush to release new versions?
*) The infeasibility of the idea of AGI relinquishment
*) OpenAI's recent actions assessed
*) Expectations for new AI releases in the remainder of 2023: accelerating pace

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
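Since the topic list above mentions how transformers work, here is a minimal sketch of scaled dot-product attention, the core operation inside transformer models such as GPT-4. It is an illustrative toy in plain NumPy with invented shapes, not anyone's production code.

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max before exponentiating, for numerical stability
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """One attention head: each position blends information from all
    positions, weighted by query-key similarity."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # (seq, seq) similarity matrix
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ V                   # weighted mixture of value vectors

# Toy example: a sequence of 4 tokens, each embedded in 8 dimensions
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(attention(x @ Wq, x @ Wk, x @ Wv).shape)  # (4, 8)
```

Stacking many such heads and layers, and training the whole stack by next-token prediction on vast text corpora, is, in outline, how GPT-style systems are built.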
Ben Goertzel is a cognitive scientist and artificial intelligence researcher. He is CEO and founder of SingularityNET, leader of the OpenCog Foundation, and chair of Humanity+.

Ben is perhaps best-known for popularising the term 'artificial general intelligence', or AGI, a machine with all the cognitive abilities of an adult human. He thinks that the way to create this machine is to start with a baby-like AI, and raise it, as we raise children. We would do this either in VR, or in robot form. Hence he works with the robot-builder David Hanson to create robots like Sophia and Grace.

Ben is a unique and engaging speaker, and gives frequent keynotes all round the world. Both his appearance and his views have been described as counter-cultural. In this episode, we hear about Ben's vision for the creation of benevolent decentralized AGI.

Selected follow-up reading:
https://singularitynet.io/
http://goertzel.org/
http://multiverseaccordingtoben.blogspot.com/

Topics in this conversation include:
*) Occasional hazards of humans and robots working together
*) "The future is already here, it's just not wired together properly"
*) Ben's definition of AGI
*) Ways in which humans lack "general intelligence"
*) Changes in society expected when AI reaches "human level"
*) Is there "one key thing" which will enable the creation of AGI?
*) Ben's OpenCog Hyperon project combines three approaches: neural pattern recognition and synthesis, rigorous symbolic reasoning, and evolutionary creativity
*) Parallel combinations versus sequential combinations of AI capabilities: why the former is harder, but more likely to create AGI
*) Three methods to improve the scalability of AI algorithms: mathematical innovations, efficient concurrent processing, and an AGI hardware board
*) "We can reach the Singularity in ten years if we really, really try"
*) ... but humanity has, so far, not "really tried" to apply sufficient resources to creating AGI
*) Sam Altman: "If you talk about the upsides of what AGI could do for us, you sound like a crazy person"
*) "The benefits of AGI will challenge our concept of 'what is a benefit'"
*) Options for human life trajectories, if AGIs are well disposed towards humans
*) We will be faced with the questions of "what do we want" and "what are our values"
*) The burning issue is "what is the transition phase" to get to AGI
*) Ben's disagreements with Nick Bostrom and Eliezer Yudkowsky
*) Assessment of the approach taken by OpenAI to create AGI
*) Different degrees of faith in big tech companies as a venue for hosting the breakthroughs in creating AGI
*) Should OpenAI be renamed as "ClosedAI"?
*) The SingularityNET initiative to create a decentralized, democratically controlled infrastructure for AGI
*) The development of AGI should be "more like Linux or the Internet than Windows or the mobile phone ecosystem"
*) Limitations of neural net systems in self-understanding
*) Faith in big tech and capitalism vs. faith in humanity as a whole vs. faith in reward maximization as a paradigm for intelligence
*) Open-ended intelligence vs. intelligence created by reward maximization
*) A concern regarding Effective Altruism
*) There's more to intelligence than pursuit of an overarching goal
*) A broader view of evolution than drives to survive and to reproduce
*) "What the fate of humanity depends on" - selecting the right approach to the creation of AGI

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
At a time when many people find it depressingly easy to see how "bad futures" could arise, what is a credible narrative of a "good future"? That question is of central concern to our guest in this episode, Gerd Leonhard.

Gerd is one of the most successful futurists on the international speaker circuit. He estimates that he has spoken to a combined audience of 2.5 million people in more than 50 countries.

He left his home country of Germany in 1982 to go to the USA to study music. While he was in the US, he set up one of the first internet-based music businesses, and then he parlayed that into his current speaking career. His talks and videos are known for their engaging use of technology and design, and he prides himself on his rigorous use of research and data to back up his claims and insights.

Selected follow-ups:
https://www.futuristgerd.com/
https://www.futuristgerd.com/sharing/thegoodfuturefilm/

Topics in this conversation include:
*) The need for a positive antidote to all the negative visions of the future that are often in people's minds
*) People, planet, purpose, and prosperity - rather than an over-focus on profit and economic growth
*) Anticipating stock markets that work differently, and with additional requirements before dividends can be paid
*) A reason to be an optimist: not because we have fewer problems (we don't), but because we have more capacity to deal with these problems
*) From "capitalism" to "progressive capitalism" (another name could be "social capitalism")
*) Kevin Kelly's concept of "protopia" as a contrast to both utopia and dystopia
*) Too much of a good thing can be... a bad thing
*) How governments and the state interact with free markets
*) Managers who try to prioritise people, planet, or purpose (rather than profits and dividends) are "whacked by the stock market"
*) The example of the Montreal protocol regarding the hole in the ozone layer, when governments gave a strong direction to the chemical industry
*) Some questions about people, planet, purpose, and prosperity are relatively straightforward, but others are much more contested
*) Conflicting motivations within high tech firms regarding speed-to-market vs. safety
*) Controlling the spread of potentially dangerous AI may be much harder than controlling the spread of nuclear weapons technology, especially as costs reduce for AI development and deployment
*) Despite geopolitical tensions, different countries are already collaborating behind the scenes on matters of AGI safety
*) How much "financial freedom" should the definition of a good future embrace?
*) Universal Basic Income and "the Star Trek economy" as potential responses to the Economic Singularity
*) Differing assessments of the role of transhumanism in the good future
*) Risks when humans become overly dependent on technology
*) Most modern humans can't make a fire from scratch: does that matter?
*) The Carrington Event of 1859: the most intense geomagnetic storm in recorded history
*) How views changed in the 19th century about giving anaesthetics to women to counter the (biblically mandated?) intense pains of childbirth
*) Will views change in a similar way about the possibility of external wombs (ectogenesis)?
*) Jamie Bartlett's concept of "the moral singularity" when humans lose the ability to take hard decisions
*) Can AI provide useful advice about human-human relationships?
*) Is everything truly important about humans located in our minds?

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Our guest in this episode is Francesca Rossi. Francesca studied computer science at the University of Pisa in Italy, where she became a professor, before spending 20 years at the University of Padova. In 2015 she joined IBM's T.J. Watson Research Lab in New York, where she is now an IBM Fellow and also IBM's AI Ethics Global Leader.

Francesca is a member of numerous international bodies concerned with the beneficial use of AI, including being a board member at the Partnership on AI, a Steering Committee member and designated expert at the Global Partnership on AI, a member of the scientific advisory board of the Future of Life Institute, and Chair of the international conference on Artificial Intelligence, Ethics, and Society which is being held in Montreal in August this year.

From 2022 until 2024 she holds the prestigious role of President of the AAAI, that is, the Association for the Advancement of Artificial Intelligence. The AAAI has recently held its annual conference, and in this episode, Francesca shares some reflections on what happened there.

Selected follow-ups:
https://researcher.watson.ibm.com/researcher/view.php?person=ibm-Francesca.Rossi2
https://en.wikipedia.org/wiki/Francesca_Rossi
https://partnershiponai.org/
https://gpai.ai/

Topics in this conversation include:
*) How a one-year sabbatical at the Harvard Radcliffe Institute changed the trajectory of Francesca's life
*) New generative AI systems such as ChatGPT expand previous issues involving bias, privacy, copyright, and content moderation - because they are trained on very large data sets that have not been curated
*) Large language models (LLMs) have been optimised, not for "factuality", but for creating language that is syntactically correct
*) Compared to previous AIs, the new systems impact a wider range of occupations, and they also have major implications for education
*) Are the "AI ethics" and "responsible AI" approaches that address the issues of existing AI systems also the best approaches for the "AI alignment" and "AI safety" issues raised by artificial general intelligence?
*) Different ideas on how future LLMs could acquire mastery, not only over language, but also over logic, inference, and reasoning
*) Options for combining classical AI techniques focussing on knowledge and reasoning, with the data-intensive approaches of LLMs
*) How "foundation models" allow training to be split into two phases, with a shorter supervised phase customising the output from a prior longer unsupervised phase (a schematic sketch follows these notes)
*) Even experts face the temptation to anthropomorphise the behaviour of LLMs
*) On the other hand, unexpected capabilities have emerged within LLMs
*) The interplay of "thinking fast" and "thinking slow" - adapting, for the context of AI, insights from Daniel Kahneman about human intelligence
*) Cross-fertilisation of ideas from different communities at the recent AAAI conference
*) An extension of that "bridge" theme to involve ideas from outside of AI itself, including the use of methods of physics to observe and interpret LLMs from the outside
*) Prospects for interpretability, explainability, and transparency of AI - and implications for trust and cooperation between humans and AIs
*) The roles played by different international bodies, such as PAI and GPAI
*) Pros and cons of including China in the initial phase of GPAI
*) Designing regulations to be future-proof, with parts that can change quickly
*) An important new goal for AI experts
*) A vision for the next 3-5 years

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
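To make the two-phase point concrete, here is a schematic sketch in PyTorch. The tiny GRU model, random token data, and hyperparameters are all invented stand-ins (real foundation models are large transformers trained on curated pipelines); only the shape of the process - a long self-supervised phase followed by a short supervised one on the same weights - is the point.

```python
import torch
from torch import nn

vocab, d = 1000, 64

class TinyLM(nn.Module):
    """A toy causal language model standing in for a foundation model."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab, d)
        self.body = nn.GRU(d, d, batch_first=True)
        self.head = nn.Linear(d, vocab)
    def forward(self, tokens):
        h, _ = self.body(self.embed(tokens))
        return self.head(h)

model = TinyLM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Phase 1 - long, self-supervised: predict the next token in raw text.
raw = torch.randint(0, vocab, (8, 33))          # unlabeled token streams
logits = model(raw[:, :-1])
loss = loss_fn(logits.reshape(-1, vocab), raw[:, 1:].reshape(-1))
loss.backward(); opt.step(); opt.zero_grad()

# Phase 2 - short, supervised: reuse the same weights on labeled pairs
# (e.g. prompt -> approved answer), steering the generic model to a task.
prompts = torch.randint(0, vocab, (8, 16))
targets = torch.randint(0, vocab, (8, 16))      # curated desired outputs
loss = loss_fn(model(prompts).reshape(-1, vocab), targets.reshape(-1))
loss.backward(); opt.step()
```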
In this episode, Tim Clement-Jones brings us up to date on the reactions by members of the UK's House of Commons to recent advances in the capabilities of AI systems, such as ChatGPT. He also looks ahead to larger changes, in the UK and elsewhere.

Lord Clement-Jones CBE, or Tim, as he prefers to be known, has been a very successful lawyer, holding senior positions at ITV and Kingfisher among others, and later becoming London Managing Partner of law firm DLA Piper.

He is better known as a politician. He became a life peer in 1998, and has been the Liberal Democrats' spokesman on a wide range of issues. The reason we are delighted to have him as a guest on the podcast is that he was the chair of the AI Select Committee, Co-Chair of the All-Party Parliamentary Group on AI, and is now a member of a special inquiry on the use of AI in Weapons Systems.

Tim also has multiple connections with universities and charities in the UK.

Selected follow-up reading:
https://www.lordclementjones.org/
https://www.parallelparliament.co.uk/APPG/artificial-intelligence
https://arcs.qmul.ac.uk/governance/council/council-membership/timclement-jones.html

Topics in this conversation include:
*) Does "the Westminster bubble" understand the importance of AI?
*) Evidence that "the tide is turning" - MPs are demonstrating a spirit of inquiry
*) The example of Sir Peter Bottomley, the Father of the House (who has been an MP continuously since 1975)
*) New AI systems are showing characteristics that had not been expected to arrive for another 5 or 10 years, taking even AI experts by surprise
*) The AI duopoly (the US and China) and the possible influence of the UK and the EU
*) The forthcoming EU AI Act and the risk-based approach it embodies
*) The importance of regulatory systems being innovation-friendly
*) How might the EU support the development of some European AI tech giants?
*) The inevitability(?) of the UK needing to become "a rule taker"
*) Cynical and uncynical explanations for why major tech companies support EU AI regulation
*) The example of AI-powered facial recognition: benefits and risks
*) Is Brexit helping or hindering the UK's AI activities?
*) Complications with the funding of AI research in the UK's universities
*) The risks of a slow-down in the UK's AI start-up ecosystem
*) Looking further afield: AI ambitions in the UAE and Saudi Arabia
*) The particular risks of lethal autonomous weapons systems
*) Future conflicts between AI-controlled tanks and human-controlled tanks
*) Forecasts for the arrival of artificial general intelligence: 10-15 years from now?
*) Superintelligence may emerge from a combination of separate AI systems
*) The case for "technology-neutral" regulation

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Advanced AI is currently pretty much a duopoly between the USA and China. The US is the clear leader, thanks largely to its tech giants – Google, Meta, Microsoft, Amazon, and Apple. China also has a fistful of tech giants – Baidu, Alibaba, and Tencent are the ones usually listed – but the Chinese government has also taken a strong interest in AI since DeepMind's AlphaGo system beat the world's best Go player in 2016.

People in the West don't know enough about China's current and future role in AI. Some think its companies just copy their Western counterparts, while others think it is an implacable and increasingly dangerous enemy, run by a dictator who cares nothing for his people. Both those views are wrong.

One person who has been trying to provide a more accurate picture of China and AI in recent years is Jeff Ding, the author of the influential newsletter ChinAI.

Jeff grew up in Iowa City and is now an Assistant Professor of Political Science at George Washington University. He earned a PhD at Oxford University, where he was a Rhodes Scholar, and wrote his thesis on how past technological revolutions influenced the rise and fall of great powers, with implications for U.S.-China competition. After gaining his doctorate he worked at Oxford's Future of Humanity Institute and Stanford's Institute for Human-Centered Artificial Intelligence.

Selected follow-up reading:
https://jeffreyjding.github.io/
https://chinai.substack.com/
https://www.tortoisemedia.com/intelligence/global-ai/

Topics in this conversation include:
*) The Thucydides Trap: Is conflict inevitable as a rising geopolitical power approaches parity with an established power?
*) Different ways of trying to assess how China's AI industry compares with that of the U.S.
*) Measuring innovations in creating AI is different from measuring adoption of AI solutions across multiple industries
*) Comparisons of papers submitted to AI conferences such as NeurIPS, citations, patents granted, and the number of data scientists
*) The biggest misconceptions westerners have about China and AI
*) A way in which Europe could still be an important player alongside the duopoly
*) Attitudes in China toward data privacy and facial recognition
*) Government focus on AI can be counterproductive
*) Varieties of government industrial policy: the merits of encouraging decentralised innovation
*) The Titanic and the origin of Silicon Valley
*) Mariana Mazzucato's question: "Who created the iPhone?"
*) Learning from the failure of Japan's 5th Generation Computers initiative
*) The evolution of China's Social Credit systems
*) Research by Shazeda Ahmed and Jeremy Daum
*) Factors encouraging and discouraging the "splinternet" separation of US and Chinese tech ecosystems
*) Connections that typically happen outside of the public eye
*) Financial interdependencies
*) Changing Chinese government attitudes toward Chinese Internet giants
*) A broader tension faced by the Chinese government
*) Future scenarios: potential good and bad developments
*) Transnational projects to prevent accidents or unauthorised use of powerful AI systems

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Peter James is one of the world's most successful crime writers. His "Roy Grace" series, about a detective in Brighton, England, near where Peter lives, has produced a remarkable 19 consecutive Sunday Times Number One bestsellers. His legions of devoted fans await each new release eagerly. The books have been televised, with the third series of "Grace", starring John Simm, commissioned for next year.

Peter has worked in other genres too, having written 36 novels altogether. When Calum first met Peter in the mid-1990s, Peter's science fiction novel "Host" was generating rave reviews. It was the world's first electronically published novel, and a copy of its floppy disc version is on display in London's Science Museum.

Peter is also a self-confessed petrol-head, with an enviable collection of classic cars, and a pretty successful track record of racing some of them. The discussion later in the episode addresses the likely arrival of self-driving cars. But we start with the possibility of mind uploading, which is the subject of "Host".

Selected follow-up reading:
https://www.peterjames.com/
https://www.alcor.org/

Topics in this conversation include:
*) Peter's passion for the future
*) The transformative effect of the 1990 book "Great Mambo Chicken and the Transhuman Condition"
*) A Christmas sojourn at MIT and encounters with AI pioneer Marvin Minsky
*) The origins of the ideas behind "Host"
*) Meeting Alcor, the cryonics organisation, in Riverside, California
*) How cryonics has evolved over the decades
*) "The first person to live to 200 has already been born"
*) Quick summaries of previous London Futurists Podcast episodes featuring Aubrey de Grey and Andrew Steele
*) The case for doing better than nature
*) Peter's novel "Perfect People" and the theme of "designer babies"
*) Possible improvements in the human condition from genetic editing
*) The risk of a future "genetic underclass"
*) Technology divides often don't last: consider the "fridge divide" and the "smartphone divide"
*) Calum's novel "Pandora's Brain"
*) Why Peter is comfortable with the label "transhumanist"
*) Various ways of reading (many) more books
*) A thought experiment involving a healthy 99 year old
*) If people lived a lot longer, we might take better care of our planet
*) Peter's views on technology assisting writers
*) Strengths and weaknesses of present-day ChatGPT as a writer
*) Prospects for transhumans to explore space
*) The "bunker experiments" into the circadian cycle, which suggest that humans naturally revert to a daily cycle closer to 26 hours than 24 hours
*) Possible answers to Fermi's question about the lack of any sign of alien civilisations
*) Reflections on "The Pale Blue Dot of Earth" (originally by Carl Sagan)
*) The likelihood of incredible surprises in the next few decades
*) Pros and cons of humans driving on public roads (especially when drivers are using mobile phones)
*) Legal and ethical issues arising from autonomous cars
*) Exponential change often involves a frustrating slow phase before fast breakthroughs
*) Anticipating the experience of driving inside immersive virtual reality
*) The tragic background to Peter's book "Possession"
*) A concluding message from the science fiction writer Kurt Vonnegut

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Our guest in this episode is a Briton who is based in Berlin, namely Andrew Steele. Earlier in his life Andrew spent nine years at the University of Oxford where, among other accomplishments, he gained a PhD in physics. His focus switched to computational biology, and he held positions at Cancer Research UK and the Francis Crick Institute.

Along the way, Andrew decided that aging was the single most important scientific challenge of our time. This led him to write the book "Ageless: The New Science of Getting Older Without Getting Old". There are a lot of books these days about the science of slowing, stopping, and even reversing aging, but Andrew's book is perhaps the best general scientific introduction to this whole field.

Selected follow-ups:
https://andrewsteele.co.uk/
https://www.youtube.com/DrAndrewSteele
https://ageless.link/

Topics in this conversation include:
*) The background that led Andrew to write his book "Ageless"
*) A graph that changed a career
*) The chance of someone dying in the next year doubles every eight years they live (a toy calculation follows these notes)
*) For tens of thousands of years, human life expectancy didn't change
*) In recent centuries, the background mortality rate has significantly decreased, but the eight-year "Gompertz curve" doubling of mortality remains unchanged
*) Some animals do not have this mortality doubling characteristic; they are said to be "negligibly senescent", "biologically immortal", or "ageless"
*) An example: Galapagos tortoises
*) The concept of "hallmarks of aging" - and different lists of these hallmarks
*) Theories of aging: wear-and-tear vs. programmed obsolescence
*) Evolution and aging: two different strategies that species can adopt
*) Wear-and-tear of teeth - as seen from a programmed aging point-of-view
*) The case for a pragmatic approach
*) Dietary restriction and healthier aging
*) The potential of computational biology system models to generate better understanding of linkages between different hallmarks of aging
*) Might some hallmarks, for example telomere shortening or epigenetic damage, prove more fundamental than others?
*) Special challenges posed by damage in the proteins in the scaffolding between cells
*) What's required to accelerate the advent of "longevity escape velocity"
*) Excitement and questions over the funding available to Altos Labs
*) Measuring timescales in research dollars rather than years
*) Reasons for optimism for treatments of some of the hallmarks, for example with senolytics, but others aren't being properly addressed
*) Breakthrough progress with the remaining hallmarks could be achieved with $5-10B investment each
*) Adding some extra for potential unforeseen hallmarks, that sums to a total of around $100B before therapies for all aspects of aging could be in major clinical trials
*) Why such an expenditure is in principle relatively easily affordable
*) Reflections on moral and ethical objections to treatments against aging
*) Overpopulation, environmental strains, resource sustainability, and net zero impact
*) Aging as the single largest cause of death in the world - in all countries
*) Andrew's current and forthcoming projects, including a book on options for funding science with the biggest impact
*) Looking forward to "being more tortoise".

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
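The "doubles every eight years" observation - the Gompertz curve referenced in the topics above - can be captured in a few lines. The baseline rate at age 30 used here is an invented illustrative figure, not a statistic from the book.

```python
def annual_mortality(age, m30=0.0006, doubling_years=8.0):
    """Gompertz-style hazard: the chance of dying within the next year
    doubles every `doubling_years` lived. m30 is an assumed, purely
    illustrative mortality rate at age 30."""
    return m30 * 2 ** ((age - 30) / doubling_years)

for age in (30, 38, 70, 90):
    print(f"age {age}: {annual_mortality(age):.4f}")
# Age 38 shows double the age-30 risk; by age 90 the multiplier is
# 2**7.5, roughly 180x the age-30 rate. "Negligibly senescent" animals
# are those whose hazard stays flat instead of following this curve.
```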
It is nearly 40 years since our guest in this episode, pioneering transhumanist Natasha Vita-More, created the first version of the Transhumanist Manifesto. Since that time, Natasha has established numerous core perspectives, values, and actions in the global transhumanist family.

Natasha joins us in this episode to share her observations on how transhumanism has evolved over the decades, and to reflect on her work in building the movement, from practice-based approaches to scientific contributions and theoretical innovations.

Areas we explore include: How has Natasha's work seeded the global growth of transhumanism? What are the main advances over the years that she particularly values? And what are the disappointments?

We also look to the future: What are her hopes and expectations for the next ten years of transhumanism?

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

Selected follow-up reading:
https://natashavita-more.com/
https://www.fightaging.org/archives/2004/02/vital-progress-summit/
http://www.extropy.org/proactionaryprinciple.htm
https://metanexus.net/transhumanism-and-its-critics/
https://whatistranshumanism.org/
https://www.alcor.org/library/persistence-of-long-term-memory-in-vitrified-and-revived-simple-animals/
https://waitbutwhy.com/2016/03/cryonics.html
F. M. Esfandiary: https://archives.nypl.org/mss/4846
https://www.maxmore.com/
The World's Most Dangerous Idea? https://nickbostrom.com/papers/dangerous
https://theconversation.com/the-end-of-history-francis-fukuyamas-controversial-idea-explained-193225
https://www.humanityplus.org/
https://transhumanist-studies.teachable.com/
Anyone Can Code, Ethiopia: https://icogacc.com/
https://afrolongevity.taffds.org/
Our guest in this episode is the scientist and science fiction author David Brin, whose writings have won the Hugo, Locus, Campbell, and Nebula Awards. His style is sometimes called 'hard science fiction'. This means his narratives feature scientific or technological change that is plausible rather than purely magical. The scenarios he creates are thought-provoking as well as entertaining. His writing inspires readers but also challenges them, with important questions not just about the future, but also about the present.

Perhaps his most famous non-fiction work is his book "The Transparent Society: Will Technology Force Us to Choose Between Privacy and Freedom?", first published in 1998. With each passing year it seems that the questions and solutions raised in that book are becoming ever more pressing. One aspect of this has been called Brin's Corollary to Moore's Law: every year, the cameras will get smaller, cheaper, more numerous and more mobile.

David also frequently writes online about topics such as space exploration, attempts to contact aliens, homeland security, the influence of science fiction on society and culture, the future of democracy, and much more besides.

Topics discussed in this conversation include:
*) Reactions to reports of flying saucers
*) Why photographs of UFOs remain blurry
*) Similarities between reports of UFOs and, in prior times, reports of elves
*) Replicating UFO phenomena with cat lasers
*) Changes in attitudes by senior members of the US military
*) Appraisals of the Mars Rovers
*) Pros and cons of additional human visits to the moon
*) Why alien probes might be monitoring this solar system from the asteroid belt
*) Investigations of "moonlets" in Earth orbit
*) Looking for pi in the sky
*) Reasons why life might be widespread in the galaxy - but why life intelligent enough to launch spacecraft may be rare
*) Varieties of animal intelligence: How special are humans?
*) Humans vs. Neanderthals: rounds one and two
*) The challenges of writing about a world that includes superintelligence
*) Kurzweil-style hybridisation and Mormon theology
*) Who should we admire most: lone heroes or citizens?
*) Benefits of reciprocal accountability and mutual monitoring (sousveillance)
*) Human nature: delusions, charlatans, and incantations
*) The great catechism of science
*) Two levels at which the ideas of a transparent society can operate
*) "Asimov's Laws of Robotics won't work"
*) How AIs might be kept in check by other AIs
*) The importance of presenting gedanken experiments

Fiction mentioned (written by David Brin unless noted otherwise):
The Three-Body Problem (Liu Cixin)
Existence
The Sentinel (Arthur C. Clarke)
Startide Rising
The Uplift War
Kiln People
The Culture Series (Iain M. Banks)
The Expanse (James S.A. Corey)
The Postman (the book and the film)
Stones of Significance
Fahrenheit 451 (Ray Bradbury)

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

Selected follow-up reading:
http://www.davidbrin.com/
http://davidbrin.blogspot.com/2021/07/whats-really-up-with-uaps-ufos.html
OpenAI's ChatGPT and picture-generating AI systems like MidJourney and Stable Diffusion have got a lot more people interested in advanced AI and talking about it. Which is a good thing. It will not be pretty if the transformative changes that will happen in the next two or three decades take most of us by surprise.

A company that has been pioneering advanced AI for longer than most is IBM, and we are very fortunate to have with us in this episode one of IBM's most senior executives. Alessandro Curioni has been with the company for 25 years. He is an IBM Fellow, Director of IBM Research, and Vice President for Europe and Africa.

Topics discussed in this conversation include:
*) Some background: 70 years of inventing the future of computing
*) The role of grand challenges to test and advance the world of AI
*) Two major changes in AI: from rules-based to trained, and from training using annotated data to self-supervised training using non-annotated data
*) Factors which have allowed self-supervised training to build large useful models, as opposed to an unstable cascade of mistaken assumptions
*) Foundation models that extend beyond text to other types of structured data, including software code, the reactions of organic chemistry, and data streams generated from industrial processes
*) Moving from relatively shallow general foundation models to models that can hold deep knowledge about particular subjects
*) Identification and removal of bias in foundation models
*) Two methods to create models tailored to the needs of particular enterprises
*) The modification by RLHF (Reinforcement Learning from Human Feedback) of models created by self-supervised learning
*) Examples of new business opportunities enabled by foundation models
*) Three "neuromorphic" methods to significantly improve the energy efficiency of AI systems: chips with varying precision, memory and computation co-located, and spiking neural networks
*) The vulnerability of existing confidential data to being decrypted in the relatively near future
*) The development and adoption of quantum-safe encryption algorithms
*) What a recent "quantum apocalypse" paper highlights as potential future developments
*) Changing forecasts of the capabilities of quantum computing
*) IBM's attitude toward Artificial General Intelligence and the Turing Test
*) IBM's overall goals with AI, and the selection of future "IBM Grand Challenges" in support of these goals
*) Augmenting the capabilities of scientists to accelerate breakthrough scientific discoveries.

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

Selected follow-up reading:
https://researcher.ibm.com/researcher/view.php?person=zurich-cur
https://www.zurich.ibm.com/st/neuromorphic/
https://www.nist.gov/news-events/news/2022/07/nist-announces-first-four-quantum-resistant-cryptographic-algorithms
Quantum computing is a tough subject to explain and discuss. As Niels Bohr put it, “Anyone who is not shocked by quantum theory has not understood it”. Richard Feynman helpfully added, “I think I can safely say that nobody understands quantum mechanics”.

Quantum computing employs the weird properties of quantum mechanics like superposition and entanglement. Classical computing uses binary digits, or bits, which are either on or off. Quantum computing uses qubits, which can be both on and off at the same time, and this characteristic makes them enormously more powerful for certain kinds of computation. (A minimal numerical sketch of a qubit follows these notes.)

Co-hosts Calum and David knew that to address this important but difficult subject, we needed an absolute expert, who was capable of explaining it in lay terms. When Calum heard Dr Ignacio Cirac give a talk on the subject in Madrid last month, he knew we had found our man.

Ignacio is director of the Max Planck Institute of Quantum Optics in Germany, and holds honorary and visiting professorships pretty much everywhere that serious work is done on quantum physics. He has done seminal work on the trapped ion approach to quantum computing and several other aspects of the field, and has published almost 500 papers in prestigious journals. He is spoken of as a possible Nobel Prize winner.

Topics discussed in this conversation include:
*) A brief history of quantum computing (QC) from the 1990s to the present
*) The kinds of computation where QC can out-perform classical computers
*) Likely timescales for further progress in the field
*) Potential quantum analogies of Moore's Law
*) Physical qubits contrasted with logical qubits
*) Reasons why errors often arise with qubits - and approaches to reducing these errors
*) Different approaches to the hardware platforms of QC - and which are most likely to prove successful
*) Ways in which academia can compete with (and complement) large technology companies
*) The significance of "quantum supremacy" or "quantum advantage": what has been achieved already, and what might be achieved in the future
*) The risks of a forthcoming "quantum computing winter", similar to the AI winters in which funding was reduced
*) Other comparisons and connections between AI and QC
*) The case for keeping an open mind, and for supporting diverse approaches, regarding QC platforms
*) Assessing the threats posed by Shor's algorithm and fault-tolerant QC
*) Why companies should already be considering changing the encryption systems that are intended to keep their data secure
*) Advice on how companies can build and manage in-house "quantum teams"

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

Selected follow-up reading:
https://en.wikipedia.org/wiki/Juan_Ignacio_Cirac_Sasturain
https://en.wikipedia.org/wiki/Rydberg_atom
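As a companion to the "both on and off at the same time" description above, here is a minimal NumPy sketch of a single qubit in superposition. It is a lay illustration of the state-vector formalism, not a model of any particular hardware platform.

```python
import numpy as np

# A qubit is a unit vector in C^2; |0> and |1> are the basis states.
ket0 = np.array([1, 0], dtype=complex)

# The Hadamard gate puts |0> into an equal superposition of |0> and |1>.
H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)

state = H @ ket0              # (|0> + |1>) / sqrt(2)
probs = np.abs(state) ** 2    # Born rule: measurement probabilities
print(probs)                  # [0.5 0.5] - "on and off at once"

# n qubits inhabit a 2**n dimensional state space, which is what
# certain algorithms (such as Shor's, discussed below) exploit.
```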
In the summer of 1950, the physicist Enrico Fermi and some colleagues at the Los Alamos Lab in New Mexico were walking to lunch, and casually discussing flying saucers, when Fermi blurted out “But where is everybody?” He was not the first to pose the question, and the precise phrasing is disputed, but the mystery he was referring to remains compelling.

We appear to live in a vast universe, with billions of galaxies, each with billions of stars, mostly surrounded by planets, including many like the Earth. The universe appears to be 13.7 billion years old, and even if intelligent life requires an Earth-like planet, and even if it can only travel and communicate at the speed of light, we ought to see lots of evidence of intelligent life. But we don't. No beams of light from stars occluded by artificial satellites spelling out pi. No signs of galactic-scale engineering. No clear evidence of little green men demanding to meet our leaders.

Numerous explanations have been advanced to explain this discrepancy, and one man who has spent more brainpower than most exploring them is the always-fascinating Anders Sandberg. Anders is a computational neuroscientist who got waylaid by philosophy, which he pursues at Oxford University, where he is a senior research fellow.

Topics in this episode include:
* The Drake equation for estimating the number of active, communicative extraterrestrial civilizations in our galaxy (written out after these notes)
* Changes in recent decades in estimates of some of the factors in the Drake equation
* The amount of time it would take self-replicating space probes to spread across the galaxy
* The Dark Forest hypothesis - that all extraterrestrial civilizations are deliberately quiet, out of fear
* The likelihood of extraterrestrial civilizations emitting observable signs of their existence, even if they try to suppress them
* The implausibility of all extraterrestrial civilizations converging to the same set of practices, rather than at least some acting in ways where we would notice their existence - and a counter argument
* The possibility of civilisations opting to spend all their time inside virtual reality computers located in deep interstellar space
* The Aestivation hypothesis, in which extraterrestrial civilizations put themselves into a "pause" mode until the background temperature of the universe has become much lower
* The Quarantine or Zoo hypothesis, in which extraterrestrial civilizations are deliberately shielding their existence from an immature civilization like ours
* The Great Filter hypothesis, in which life on other planets has a high probability, either of failing to progress to the level of space-travel, or of failing to exist for long after attaining the ability to self-destruct
* Possible examples of "great filters"
* Should we hope to find signs of life on Mars?
* The Simulation hypothesis, in which the universe is itself a kind of video game, created by simulators, who had no need (or lacked sufficient resources) to create more than one intelligent civilization
* Implications of this discussion for the wisdom of the METI project - Messaging to Extraterrestrial Intelligence

Selected follow-up reading:
* Anders' website at FHI Oxford: https://www.fhi.ox.ac.uk/team/anders-sandberg/
* The Great Filter, by Robin Hanson: http://mason.gmu.edu/~rhanson/greatfilter.html
* "Seventy-Five Solutions to the Fermi Paradox and the Problem of Extraterrestrial Life" - a book by Stephen Webb: https://link.springer.com/book/10.1007/978-3-319-13236-5
* The aestivation hypothesis: https://www.fhi.ox.ac.uk/aestivation-hypothesis-resolving-fermis-paradox/
* Should We Message ET? by David Brin: http://www.davidbrin.com/nonfiction/meti.html
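For reference, the Drake equation flagged in the first topic above is conventionally written as:

```latex
N = R_{*} \cdot f_p \cdot n_e \cdot f_l \cdot f_i \cdot f_c \cdot L
```

where R_* is the rate of star formation in the galaxy, f_p the fraction of stars with planets, n_e the number of potentially habitable planets per such star, f_l the fraction of those on which life arises, f_i the fraction of those developing intelligence, f_c the fraction producing detectable signals, and L the length of time such signals are emitted. The episode's point about "changes in estimates" is that astronomy has pinned down the first three factors fairly well, while the biological and social factors remain guesses.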
An area of technology that has long been anticipated is Extended Reality (XR), which includes Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR). For many decades, researchers have developed various experimental headsets, glasses, gloves, and even immersive suits, to give wearers of these devices the impression of existing within a reality that is broader than what our senses usually perceive. More recently, a number of actual devices have come to the market, with, let's say it, mixed reactions. Some enthusiasts predict rapid improvements in the years ahead, whereas other reviewers focus on disappointing aspects of device performance and user experience.

Our guest in this episode of London Futurists Podcast is someone widely respected as a wise guide in this rather turbulent area. He is Steve Dann, who among other roles is the lead organiser of the highly popular Augmenting Reality meetup in London.

Topics discussed in this episode include:
*) Steve's background in film and television special effects
*) The different forms of Extended Reality
*) Changes in public understanding of virtual and augmented reality
*) What can be learned from past disappointments in this field
*) Prospects for forthcoming tipping points in market adoption
*) Comparisons with the market adoption of smartwatches and of smartphones
*) Forecasting incremental improvements in key XR technologies
*) Why "VR social media" won't be a sufficient reason for mass adoption of VR
*) The need for compelling content
*) The particular significance of enterprise use cases
*) The potential uses of XR in training, especially for medical professionals
*) Different AR and VR use cases in medical training - and different adoption timelines
*) Why an alleged drawback of VR may prove to be a decisive advantage for it
*) The likely forthcoming battle over words such as "metaverse"
*) Why our future online experiences will increasingly be 3D
*) Prospects for open standards between different metaverses
*) Reasons for companies to avoid rushing to purchase real estate in metaverses
*) Movies that portray XR, and the psychological perception of "what is real"
*) Examples of powerful real-world consequences of VR experiences.

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

Selected follow-up reading:
https://www.meetup.com/augmenting-reality/
https://www.medicalrealities.com/about
Our guest on this episode is someone with excellent connections to the foresight departments of governments around the world. He is Jerome Glenn, Founder and Executive Director of the Millennium Project.

The Millennium Project is a global participatory think tank established in 1996, which now has over 70 nodes around the world. It has the stated purpose to "Improve humanity's prospects for building a better world". The organisation produces regular "State of the Future" reports as well as updates on what it describes as "the 15 Global Challenges". It recently released an acclaimed report on three scenarios for the future of work. One of its new projects is the main topic in this episode, namely scenarios for the global governance of the transition from Artificial Narrow Intelligence (ANI) to Artificial General Intelligence (AGI).

Topics discussed in this episode include:
*) Why many futurists are jealous of Alvin Toffler
*) The benefits of a decentralised, incremental approach to foresight studies
*) Special features of the Millennium Project compared to other think tanks
*) How the Information Revolution differs from the Industrial Revolution
*) What is likely to happen if there is no governance of the transition to AGI
*) Comparisons with regulating the use of cars - and the use of nuclear materials
*) Options for licensing, auditing, and monitoring
*) How the development of a technology may be governed even if it has few visible signs
*) Three options: "Hope", "Control", and "Merge" - but all face problems; in all three cases, getting the initial conditions right could make a huge difference
*) Distinctions between AGI and ASI (Artificial Superintelligence), and whether an ASI could act in defiance of its initial conditions
*) Controlling AGI is likely to be impossible, but controlling the companies that are creating AGI is more credible
*) How actions taken by the EU might influence decisions elsewhere in the world
*) Options for "aligning" AGI as opposed to "controlling" it
*) Complications with the use of advanced AI by organised crime and by rogue states
*) The poor level of understanding of most political advisors about AGI, and their tendency to push discussions back to the issues of ANI
*) Risks of catastrophic social destabilisation if "the mother of all panics" about AGI occurs on top of existing culture wars and political tribalism
*) Past examples of progress with technologies that initially seemed impossible to govern
*) The importance of taking some initial steps forward, rather than being overwhelmed by the scale of the challenge.

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

Selected follow-up reading:
https://en.wikipedia.org/wiki/Jerome_C._Glenn
https://www.millennium-project.org/
https://www.millennium-project.org/first-steps-for-artificial-general-intelligence-governance-study-have-begun/
The 2020 book "After Shock: The World's Foremost Futurists Reflect on 50 Years of Future Shock - and Look Ahead to the Next 50"
This episode features the CEO of Brainnwave, Steven Coates, who is a pioneer in the field of Decision Intelligence.

Decision Intelligence is the use of AI to enhance the ability of companies, organisations, or individuals to make key decisions - decisions about which new business opportunities to pursue, about evidence of possible leakage or waste, about the allocation of personnel to tasks, about geographical areas to target, and so on.

What these decisions have in common is that they can all be improved by the analysis of large sets of data that defy attempts to reduce them to a single dimension. In these cases, AI systems that are suited to multi-dimensional analysis can make all the difference between wise and unwise decisions. (A toy illustration follows these notes.)

Topics discussed in this episode include:
*) The ideas initially pursued at Brainnwave, and how they evolved over time
*) Real-world examples of Decision Intelligence - in the mining industry, the supply of mobile power generators, and in the oil industry
*) Recommendations for businesses to focus on Decision Intelligence as they adopt fuller use of AI, on account of the direct impact on business outcomes
*) Factors holding up the wider adoption of AI
*) Challenges when "data lakes" turn into "data swamps"
*) Challenges with the limits of trust that can be placed in data
*) Challenges with the lack of trust in algorithms
*) Skills in explaining how algorithms are reaching their decisions
*) The benefits of an agile mindset in introducing Decision Intelligence.

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

Some follow-up reading:
https://brainnwave.ai/
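As a concrete illustration of weighing a decision across several dimensions at once, rather than collapsing them prematurely into a single number, here is a toy scoring sketch. The criteria, weights, and figures are all invented for the example; this is not Brainnwave's method.

```python
import numpy as np

criteria = ["expected_yield", "logistics_cost", "regulatory_risk"]
weights = np.array([0.5, 0.3, 0.2])          # must sum to 1
higher_is_better = np.array([True, False, False])

sites = {                                    # hypothetical candidates
    "Site A": np.array([8.1, 4.0, 0.2]),
    "Site B": np.array([6.5, 2.5, 0.1]),
    "Site C": np.array([9.0, 6.0, 0.7]),
}

# Normalise each dimension to [0, 1] across sites, flip the "cost"
# axes, then combine - keeping each dimension's contribution inspectable,
# which is where the trust and explainability topics above come in.
mat = np.array(list(sites.values()))
lo, hi = mat.min(axis=0), mat.max(axis=0)
norm = (mat - lo) / (hi - lo)
norm[:, ~higher_is_better] = 1 - norm[:, ~higher_is_better]
scores = norm @ weights
for name, s in zip(sites, scores):
    print(f"{name}: {s:.2f}")
```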
As AI automates larger portions of the activities of companies and organisations, there's a greater need to think carefully about questions of privacy, bias, transparency, and explainability. Due to scale effects, mistakes made by AI and the automated analysis of data can have wide impacts. On the other hand, evidence of effective governance of AI development can deepen trust and accelerate the adoption of significant innovations.

One person who has thought a great deal about these issues is Ray Eitel-Porter, Global Lead for Responsible AI at Accenture. In this episode of the London Futurist Podcast, he explains what conclusions he has reached.

Topics discussed include:
*) The meaning and importance of "Responsible AI"
*) Connections and contrasts with "AI ethics" and "AI safety"
*) The advantages of formal AI governance processes
*) Recommendations for the operation of an AI ethics board
*) Anticipating the operation of the EU's AI Act
*) How different intuitions of fairness can produce divergent results (illustrated in the sketch after these notes)
*) Examples where transparency has been limited
*) The potential future evolution of the discipline of Responsible AI.

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

Some follow-up reading:
https://www.accenture.com/gb-en/services/applied-intelligence/ai-ethics-governance
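The point about divergent fairness intuitions can be shown on toy data: the same predictions can look fair under one definition and unfair under another. All the numbers below are invented for illustration.

```python
import numpy as np

group = np.array([0] * 10 + [1] * 10)            # two demographic groups
y_true = np.array([1,1,1,1,0,0,0,0,0,0] + [1,1,1,1,1,1,0,0,0,0])
y_pred = np.array([1,1,1,0,1,0,0,0,0,0] + [1,1,1,1,0,0,1,0,0,0])

for g in (0, 1):
    m = group == g
    selection_rate = y_pred[m].mean()            # demographic parity view
    tpr = y_pred[m & (y_true == 1)].mean()       # equal opportunity view
    print(f"group {g}: selection rate {selection_rate:.2f}, TPR {tpr:.2f}")

# Here group 1 is favoured on selection rate (0.50 vs 0.40) while
# group 0 is favoured on true-positive rate (0.75 vs 0.67) - so whether
# the model is "fair" depends on which definition you pick, and the two
# cannot in general be satisfied simultaneously.
```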
One area of technology that is frequently in the news these days is rejuvenation biotechnology, namely the possibility of undoing key aspects of biological aging via a suite of medical interventions. What these interventions target isn't individual diseases, such as cancer, stroke, or heart disease, but rather the common aggravating factors that lie behind the increasing prevalence of these diseases as we become older.

Our guest in this episode is someone who has been at the forefront for over 20 years of a series of breakthrough initiatives in this field of rejuvenation biotechnology. He is Dr Aubrey de Grey, co-founder of the Methuselah Foundation, the SENS Research Foundation, and, most recently, the LEV Foundation - where 'LEV' stands for Longevity Escape Velocity (a toy illustration of this concept follows these notes).

Topics discussed include:
*) Different concepts of aging and damage repair
*) Why the outlook for damage repair is significantly more tangible today than it was ten years ago
*) The role of foundations in supporting projects which cannot receive funding from commercial ventures
*) Questions of pace of development: cautious versus bold
*) Changing timescales for the likely attainment of robust mouse rejuvenation ('RMR') and longevity escape velocity ('LEV')
*) The "Less Death" initiative
*) "Anticipating anticipation" - preparing for likely sweeping changes in public attitude once understanding spreads about the forthcoming availability of powerful rejuvenation treatments
*) Various advocacy initiatives that Aubrey is supporting
*) Ways in which listeners can help to accelerate the attainment of LEV.

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

Some follow-up reading:
https://levf.org
https://lessdeath.org
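A toy calculation, with invented numbers, of what "longevity escape velocity" means: if therapies add more than one year of remaining life expectancy per calendar year, remaining years grow over time instead of running down.

```python
def remaining_years(start=35.0, annual_gain=1.2, horizon=60):
    """Track remaining life expectancy: each year spends one year of
    life, but medical progress adds `annual_gain` years back.
    All figures are illustrative assumptions, not forecasts."""
    remaining = start
    for _ in range(horizon):
        remaining += annual_gain - 1.0
    return remaining

print(remaining_years(annual_gain=1.2))  # above LEV: rises to 47.0
print(remaining_years(annual_gain=0.8))  # below LEV: falls to 23.0
```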
A Venn diagram of people interested in how AI will shape our future, and members of the effective altruism community (often abbreviated to EA), would show a lot of overlap. One of the rising stars in this overlap is our guest in this episode, the polymath Jacy Reese Anthis.

Our discussion picks up themes from Jacy's 2018 book “The End of Animal Farming”, including an optimistic roadmap toward an animal-free food system, as well as factors that could alter that roadmap.

We also hear about the work of an organisation co-founded by Jacy: the Sentience Institute, which researches - among other topics - the expansion of moral considerations to non-human entities. We discuss whether AIs can be sentient, how we might know if an AI is sentient, and whether the design choices made by developers of AI will influence the degree and type of sentience of AIs.

The conversation concludes with some ideas about how various techniques can be used to boost personal effectiveness, and considers different ways in which people can relate to the EA community.

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

Some follow-up reading:
https://www.sentienceinstitute.org/
https://jacyanthis.com/
In the 4th century BC, the Greek philosopher Plato theorised that humans do not perceive the world as it really is. All we can see is shadows on a wall.

In 2003, the Swedish philosopher Nick Bostrom published a paper which formalised an argument to prove Plato was right. The paper argued that one of the following three statements is true:
1. We will go extinct fairly soon
2. Advanced civilisations don't produce simulations containing entities which think they are naturally-occurring sentient intelligences. (This could be because it is impossible.)
3. We are in a simulation.

The reason for this is that if such simulations are possible, and civilisations can become advanced without exploding, then there will be vast numbers of simulations, and it is vanishingly unlikely that any randomly selected civilisation (like us) is a naturally-occurring one. (The quantitative version of this step is written out after these notes.)

Some people find this argument pretty convincing. As we will hear later, some of us have added twists to the argument. But some people go even further, and speculate about how we might bust out of the simulation. One such person is our friend and our guest in this episode, Roman Yampolskiy, Professor of Computer Science at the University of Louisville.

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

Further reading:
"How to Hack the Simulation" by Roman Yampolskiy: https://www.researchgate.net/publication/364811408_How_to_Hack_the_Simulation
"The Simulation Argument" by Nick Bostrom: https://www.simulation-argument.com/
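Bostrom's paper makes the "vanishingly unlikely" step quantitative. In (roughly) the paper's notation, the fraction of observers with human-type experiences who live in simulations is

```latex
f_{\mathrm{sim}} = \frac{f_p \, \bar{N}}{f_p \, \bar{N} + 1}
```

where f_p is the fraction of civilisations at our stage that survive to reach a simulation-capable ("posthuman") stage, and N̄ is the average number of ancestor-simulations such a civilisation runs. Unless f_p is close to zero (statement 1) or N̄ is close to zero (statement 2), f_sim is close to 1 (statement 3).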