Eye On A.I.

Author: Craig S. Smith


Description

Eye on A.I. is a biweekly podcast, hosted by longtime New York Times correspondent Craig S. Smith. In each episode, Craig will talk to people making a difference in artificial intelligence. The podcast aims to put incremental advances into a broader context and consider the global implications of the developing technology. AI is about to change your world, so pay attention.
217 Episodes
Welcome to episode 125 of Eye on AI, where we embark on a journey into the realm of Generative AI. In this episode, we have the pleasure of chatting with Pascal Weinberger, co-founder and CEO of Bardeen AI, who takes us through the evolution of AI and its incredible potential for creativity and professional endeavors. Join us as we venture behind the scenes of Telefonica's Moonshot Lab, where AI projects in healthcare, energy, and city planning are explored. Discover the fascinating ideas and initiatives that have emerged, including the birth of a mental health company, as we uncover the immense impact of Generative AI.

During our conversation, we delve into the nuances of Generative AI technology, exploring how industry giants like Microsoft and Google are harnessing its power to enhance their products. We also discuss the strategies and challenges faced by companies in the competitive Generative AI market, with a strong focus on meeting the needs of end users, and tackle the ongoing debates surrounding the risks and benefits of AI technology, ensuring you stay ahead of the curve in this ever-evolving world of Generative AI. Tune in and join us as we unravel the secrets of Generative AI, paving the way for a future where creativity and productivity reach new heights.

(00:00) Preview
(00:24) Pascal Weinberger's background at Telefonica
(08:28) Machine learning & AI with Pascal Weinberger
(10:28) How Pascal Weinberger founded Bardeen AI
(13:25) Generative AI MVP for Bardeen AI
(17:21) Generative AI applications and OpenAI competition
(22:24) Competition in the AI space
(25:24) Big tech companies vs. startups in AI
(31:46) The future of AI and the transformer algorithm
(32:41) Bardeen AI features and functionality
(46:24) AutoGPT problems and considerations
(50:54) Risk of AI & misuse of commands

Craig Smith Twitter: https://twitter.com/craigss
Eye on A.I. Twitter: https://twitter.com/EyeOn_AI
In this podcast, we sit down with Danny Tobey, an attorney with the global law firm DLA Piper, to discuss the changing legal dynamics surrounding artificial intelligence. As one of the leading experts in the field, Danny provides valuable insights into the current state of legislation and regulation, the efforts of regulatory bodies like the Federal Trade Commission in tackling issues related to AI, and how the law firm of the future will look as AI continues to transform the economy.

With the growing impact of AI on all aspects of our lives, the legal profession is facing unique challenges and opportunities. Danny brings a wealth of knowledge and experience to the conversation, having worked with clients in industries ranging from healthcare to financial services to consumer products. Throughout the podcast, Danny explores the ethical and legal implications of AI, as well as the ways in which AI is already reshaping the legal industry. He provides thoughtful perspectives on how the legal profession can adapt and evolve to meet the demands of an AI-driven economy, and the role that lawyers and regulatory bodies will play in shaping the future of this transformative technology.

Whether you're a legal professional looking to stay on top of the latest developments in AI, or simply interested in the ways that AI is changing the legal landscape, this podcast is sure to offer valuable insights and food for thought. So join us as we dive deep into the intersection of law and artificial intelligence with Danny Tobey.

Craig Smith Twitter: https://twitter.com/craigss
Eye on A.I. Twitter: https://twitter.com/EyeOn_AI
In this episode of the Eye on A.I. podcast, host Craig Smith interviews Yoshua Bengio, one of the founding fathers of deep learning and a Turing Award winner. Bengio shares his insights on the famous pause letter, which he signed along with other prominent A.I. researchers, calling for a more responsible approach to the development of A.I. technologies. He discusses the potential risks associated with increasingly powerful A.I. models and the importance of ensuring that models are developed in a way that aligns with our ethical values.

Bengio also talks about his latest research on world models and inference machines, which aim to give A.I. systems the ability to reason about reality and make more informed decisions. He explains how these models are built and how they could be used in a variety of applications, such as autonomous vehicles and robotics.

Throughout the podcast, Bengio emphasises the need for interdisciplinary collaboration and the importance of addressing the ethical implications of A.I. technologies. Don't miss this insightful conversation with one of the most influential figures in A.I. on the Eye on A.I. podcast!

Craig Smith Twitter: https://twitter.com/craigss
Eye on A.I. Twitter: https://twitter.com/EyeOn_AI
Welcome to the latest episode of our podcast featuring Edo Liberty, an AI expert who helped build SageMaker at Amazon's AI Labs. In this episode, Edo discusses how his team at Pinecone.io is tackling the problem of hallucinations in large language models like ChatGPT.

Edo's approach involves using vector embeddings to create a long-term memory database for large language models. By converting authoritative and trusted information into vectors and loading them into the database, the system provides a reliable source of information for large language models to draw from, reducing the likelihood of inaccurate responses. Throughout the episode, Edo explains the technical details of his approach and shares some of the potential applications for this technology, including AI systems that rely on language processing. Edo also discusses the future of AI and how this technology could revolutionise the way we interact with computers and machines. With his insights and expertise in the field, this episode is a must-listen for anyone interested in the latest developments in AI and language processing.

We have a new sponsor this week: NetSuite by Oracle, a cloud-based enterprise resource planning software that helps businesses of any size manage their financials, operations, and customer relationships in a single platform. They've just rolled out a terrific offer: you can defer payments on a full NetSuite implementation for six months. That's no payment and no interest for six months, and you can take advantage of this special financing offer today at netsuite.com/EYEONAI

Craig Smith Twitter: https://twitter.com/craigss
Eye on A.I. Twitter: https://twitter.com/EyeOn_AI
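Pinecone's production system is of course far more sophisticated, but the retrieval idea Edo describes - embed trusted passages as vectors, then fetch the nearest neighbours to a question and hand them to the model as context - can be sketched in a few lines. The trigram-hash embedding below is a stand-in for a learned text encoder, and the class and passage names are invented for illustration:

```python
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Stand-in embedding: hash character trigrams into a unit vector.

    A real system would use a learned text encoder here."""
    v = np.zeros(dim)
    for i in range(len(text) - 2):
        v[hash(text[i:i + 3]) % dim] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

class VectorStore:
    """Minimal long-term memory: trusted passages stored as unit vectors."""

    def __init__(self):
        self.vectors, self.passages = [], []

    def add(self, passage: str):
        self.vectors.append(embed(passage))
        self.passages.append(passage)

    def query(self, question: str, k: int = 2):
        # On unit vectors, cosine similarity is just a dot product.
        sims = np.array(self.vectors) @ embed(question)
        top = np.argsort(sims)[::-1][:k]
        return [self.passages[i] for i in top]

store = VectorStore()
store.add("SageMaker is Amazon's managed machine-learning platform.")
store.add("Pinecone provides a managed vector database.")
store.add("The Eiffel Tower is in Paris.")

context = store.query("Which company built SageMaker?", k=1)
print(context)
```

In a real pipeline the retrieved passages would be prepended to the LLM's prompt, so the model answers from trusted text rather than from its parametric memory alone.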
In this podcast episode, Ilya Sutskever, the co-founder and chief scientist at OpenAI, discusses his vision for the future of artificial intelligence (AI), including large language models like GPT-4.

Sutskever starts by explaining the importance of AI research and how OpenAI is working to advance the field. He shares his views on the ethical considerations of AI development and the potential impact of AI on society. The conversation then moves on to large language models and their capabilities. Sutskever talks about the challenges of developing GPT-4 and the limitations of current models. He discusses the potential for large language models to generate text that is indistinguishable from human writing and how this technology could be used in the future. Sutskever also shares his views on AI-aided democracy and how AI could help solve global problems such as climate change and poverty. He emphasises the importance of building AI systems that are transparent, ethical, and aligned with human values.

Throughout the conversation, Sutskever provides insights into the current state of AI research, the challenges facing the field, and his vision for the future of AI. This podcast episode is a must-listen for anyone interested in the intersection of AI, language, and society.

Timestamps:
(00:04) Introduction of Craig Smith and Ilya Sutskever.
(01:00) Sutskever's AI and consciousness interests.
(02:30) Sutskever's start in machine learning with Hinton.
(03:45) Realization about training large neural networks.
(06:33) Convolutional neural network breakthroughs and ImageNet.
(08:36) Predicting the next thing for unsupervised learning.
(10:24) Development of GPT-3 and scaling in deep learning.
(11:42) Specific scaling in deep learning and potential discovery.
(13:01) Small changes can have big impact.
(13:46) Limits of large language models and lack of understanding.
(14:32) Difficulty in discussing limits of language models.
(15:13) Statistical regularities lead to better understanding of world.
(16:33) Limitations of language models and hope for reinforcement learning.
(17:52) Teaching neural nets through interaction with humans.
(21:44) Multimodal understanding not necessary for language models.
(25:28) Autoregressive transformers and high-dimensional distributions.
(26:02) Autoregressive transformers work well on images.
(27:09) Pixels represented like a string of text.
(29:40) Large generative models learn compressed representations of real-world processes.
(31:31) Human teachers needed to guide reinforcement learning process.
(35:10) Opportunity to teach AI models more skills with less data.
(39:57) Desirable to have democratic process for providing information.
(41:15) Impossible to understand everything in complicated situations.

Craig Smith Twitter: https://twitter.com/craigss
Eye on A.I. Twitter: https://twitter.com/EyeOn_AI
In this episode, Ben Sorscher, a PhD student at Stanford, sheds light on the challenges posed by the ever-increasing size of the data sets used to train machine learning models, specifically large language models. The sheer size of these data sets has been pushing the limits of scaling, as the cost of training and the environmental impact of the electricity they consume become increasingly enormous.

As a solution, Ben discusses the concept of “data pruning” - a method of reducing the size of data sets without sacrificing model performance. Data pruning involves selecting the most important or representative data points and removing the rest, resulting in a smaller, more efficient data set that still produces accurate results. Throughout the podcast, Ben delves into the intricacies of data pruning, including the benefits and drawbacks of the technique, the practical considerations for implementing it in machine learning models, and the potential impact it could have on the field of artificial intelligence.

Craig Smith Twitter: https://twitter.com/craigss
Eye on A.I. Twitter: https://twitter.com/EyeOn_AI
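Ben's research evaluates principled pruning metrics; as a toy illustration of the general idea (not his paper's exact metric), the sketch below scores each example by its distance to its class centroid and keeps only the hardest fraction, discarding the redundant easy examples. The two-cluster data set and all names are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: two Gaussian clusters standing in for two classes.
X = np.vstack([rng.normal(0.0, 1.0, (500, 2)), rng.normal(4.0, 1.0, (500, 2))])
y = np.array([0] * 500 + [1] * 500)

def prune(X, y, keep_frac=0.2, keep_hard=True):
    """Keep a fraction of each class, ranked by distance to its centroid.

    Distance-to-centroid is a stand-in difficulty score: far points count
    as "hard" examples, near points as redundant "easy" ones."""
    keep = []
    for c in np.unique(y):
        idx = np.flatnonzero(y == c)
        d = np.linalg.norm(X[idx] - X[idx].mean(axis=0), axis=1)
        order = np.argsort(d)                    # easy (near) -> hard (far)
        n = int(len(idx) * keep_frac)
        keep.extend(idx[order[-n:]] if keep_hard else idx[order[:n]])
    return np.array(keep)

kept = prune(X, y, keep_frac=0.2)
print(f"kept {len(kept)} of {len(X)} examples")  # kept 200 of 1000 examples
```

Whether to keep the hard or the easy examples is itself a design choice - one of Ben's findings is that the right choice depends on how much data you have to begin with.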
In this episode, Yann LeCun, a renowned computer scientist and AI researcher, shares his insights on the limitations of large language models and how his new joint embedding predictive architecture could help bridge the gap. While large language models have made remarkable strides in natural language processing and understanding, they are still far from perfect. Yann LeCun points out that these models often cannot capture the nuances and complexities of language, leading to inaccuracies and errors.

To address this gap, Yann LeCun introduces his new joint embedding predictive architecture - a novel approach to language modelling that combines techniques from computer vision and natural language processing. This approach involves jointly embedding text and images, allowing for more accurate predictions and a better understanding of the relationships between concepts and objects.

Craig Smith Twitter: https://twitter.com/craigss
Eye on A.I. Twitter: https://twitter.com/EyeOn_AI
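A full joint embedding predictive architecture learns its encoders end to end; the defining idea - predict the embedding of a target from the embedding of its context, and measure the loss in embedding space rather than in raw pixels or tokens - can be caricatured with fixed linear encoders. Everything here (dimensions, encoders, the synthetic paired data) is an invented stand-in:

```python
import numpy as np

rng = np.random.default_rng(0)

# Paired observations: y is a hidden linear function of x plus a little
# noise, standing in for two views of one scene (e.g. image and caption).
A = rng.normal(size=(8, 8))
X = rng.normal(size=(200, 8))
Y = X @ A + 0.01 * rng.normal(size=(200, 8))

# Fixed toy encoders (a real JEPA learns these jointly with the predictor).
enc_x = rng.normal(size=(8, 8))
enc_y = rng.normal(size=(8, 8))
Sx, Sy = X @ enc_x, Y @ enc_y   # context embeddings, target embeddings

# Predictor: map the context embedding to the target embedding.
# Crucially, the loss is measured in embedding space, not input space.
W, *_ = np.linalg.lstsq(Sx, Sy, rcond=None)
latent_mse = np.mean((Sx @ W - Sy) ** 2)
print(f"latent-space prediction error: {latent_mse:.5f}")
```

Because the loss lives in representation space, the predictor never has to reproduce irrelevant low-level detail of the target - one of LeCun's core arguments for this family of architectures.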
In this episode, Terry Sejnowski, an AI pioneer, chairman of the NeurIPS Foundation, and co-creator of Boltzmann Machines, delves into the latest developments in deep learning and their potential impact on our understanding of the human brain.

Terry Sejnowski begins by discussing the NeurIPS conference - one of the most significant events in the field of artificial intelligence - and its role in advancing research and innovation in deep learning. He shares insights into the latest breakthroughs in the field, including the repurposing of the sleep-wake cycle of Boltzmann Machines in Geoff Hinton's new Forward-Forward algorithm.

Throughout the episode, Terry Sejnowski shares his expertise on the intersection of artificial intelligence and neuroscience, exploring how advances in deep learning may help us better understand the complexities of the human brain. He discusses how researchers are using AI techniques to study brain activity and the potential implications for fields such as medicine and psychology. Overall, this episode will be of particular interest to those interested in the latest developments in artificial intelligence and their potential applications in neuroscience and related fields.

Craig Smith Twitter: https://twitter.com/craigss
Eye on A.I. Twitter: https://twitter.com/EyeOn_AI
In this episode, Geoffrey Hinton, a renowned computer scientist and a leading expert in deep learning, provides an in-depth exploration of his groundbreaking new learning algorithm - the forward-forward algorithm. Hinton argues this algorithm provides a more plausible model for how the cerebral cortex might learn, and could be the key to unlocking new possibilities in artificial intelligence.

Throughout the episode, Hinton discusses the mechanics of the forward-forward algorithm, including how it differs from traditional deep learning models and what makes it more effective. He also provides insights into the potential applications of this new algorithm, such as enabling machines to perform tasks that were previously thought to be exclusive to human cognition.

Hinton shares his thoughts on the current state of deep learning and its future prospects, particularly in neuroscience. He explores how advances in deep learning may help us gain a better understanding of our own brains and how we can use this knowledge to create more intelligent machines. Overall, this podcast provides a fascinating glimpse into the latest developments in artificial intelligence and the cutting-edge research being conducted by one of its leading pioneers.

Craig Smith Twitter: https://twitter.com/craigss
Eye on A.I. Twitter: https://twitter.com/EyeOn_AI
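Hinton's paper works with real image data and multi-layer networks; the core mechanic - each layer is trained by a purely local rule to push a "goodness" score (the sum of squared activations) above a threshold on positive data and below it on negative data, with no backward pass between layers - can be sketched for a single layer. The Gaussian "positive" and "negative" batches below are invented stand-ins for real and corrupted examples:

```python
import numpy as np

rng = np.random.default_rng(0)

def goodness(h):
    # Hinton's goodness measure: sum of squared activations per example.
    return (h ** 2).sum(axis=1)

# One forward-forward layer with a purely local update rule.
W = rng.normal(scale=0.1, size=(10, 32))
pos = rng.normal(loc=+1.0, size=(256, 10))  # stand-in "positive" (real) data
neg = rng.normal(loc=-1.0, size=(256, 10))  # stand-in "negative" data
lr, theta = 0.03, 2.0                       # learning rate, goodness threshold

for _ in range(100):
    for x, target in ((pos, 1.0), (neg, 0.0)):
        h = np.maximum(x @ W, 0.0)          # ReLU forward pass
        # Logistic probability that the data is "positive"; the local
        # loss wants p = 1 on positive batches and p = 0 on negative ones.
        p = 1.0 / (1.0 + np.exp(theta - goodness(h)))
        grad_pre = (p - target)[:, None] * 2.0 * h   # d(loss)/d(pre-activation)
        W -= lr * x.T @ grad_pre / len(x)   # update uses only this layer's values

# After training, positive data should score much higher goodness.
h_pos = np.maximum(pos @ W, 0.0)
h_neg = np.maximum(neg @ W, 0.0)
print(goodness(h_pos).mean(), goodness(h_neg).mean())
```

In the full algorithm each subsequent layer is trained the same way on the normalised outputs of the layer below, which is what lets learning proceed without backpropagating errors through the network.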
Setting the stage for 2023

2023-01-02 01:00:21

To set the stage for some terrific conversations I have coming to you in the new year, in this episode we go back to some earlier conversations that talk about how we got to where we are in deep learning and how those early threads continue to lead innovation.
This week I talk to Bob Rogers, a Harvard-trained astrophysicist who once built digital twins of black holes to better understand them and now builds digital twins of supply chains to help make them more efficient and resilient.
NO-CODE WITH AKKIO

2022-10-20 45:30

Jonathon Reilly, co-founder of Akkio, a no-code AI platform, talks about how users with a web browser and an idea have the power to bring AI to life themselves without having to write code.
MLOps with ClearML

2022-10-05 42:50

Moses Guttmann, founder of ClearML, talks about the evolution of the MLOps industry over the past few years and ClearML's contribution to it.
Amazon's SageMaker

2022-09-21 19:37

Bratin Saha, head of Amazon's machine learning services, talks about Amazon's growing dominance in model building and deploying AI, about the company's SageMaker platform, and whether anyone can compete with the behemoth. 
Peter Schrammel, one of the founders of Diffblue, an automated unit-test writing software company, speaks about the increasing automatic generation of code and how he sees such automation increasing the productivity of developers.
Michael Kearns, a computer science professor at the University of Pennsylvania and an Amazon Scholar, talks about differential privacy, how Amazon's research approach differs from its peers, and how AI will eventually permeate all aspects of our lives.
XPRIZE TELEPORTATION

2022-08-10 36:59

Jacki Morie, a senior XPRIZE advisor, talks about the ANA Avatar XPRIZE, a competition focused on creating a physical avatar system that will seamlessly transport human skills and experience to distant locations. The four-year competition is in its final stretch.
VITAL & MINT

2022-07-28 41:08

Aaron Patzer, founder of the personal finance app MINT and more recently founder of the AI-based healthcare company Vital, talks about keeping customer data private, the promise of giving emergency room patients information with AI, and finding friendly solutions to anxiety-producing problems.
Amazon's Rohit Prasad

2022-07-15 32:08

Rohit Prasad, Amazon's Senior Vice President and Head Scientist for Alexa, speaks about the development of conversational AI and virtual assistants and the merging of IoT sensor data into ambient intelligence - AI that is always present and immediately accessible.
Amazon's Astro

2022-06-23 21:27

Ken Washington, who leads Amazon's consumer robotics team, talks about the company's compact wheeled robot, Astro. Ken discusses Astro's evolution, its popular and possible use cases, and what might be in store in the future.
Comments (1)

Anthony Famularo

Full transparency of ALL private-sector AI systems. This might sound extreme, but no human to whom monetary profit is important absolutely can be trusted to behave ethically. Imagine a CEO instructing an AI to prioritize his personal wealth, or his company's share price, in the least detectable way possible. I truly don't think that advanced AI and capitalism are compatible.

Feb 8th