The Information Bottleneck
Author: Ravid Shwartz-Ziv & Allen Roush
© 2025 Ravid Shwartz-Ziv & Allen Roush
Description
Two AI researchers, Ravid Shwartz-Ziv and Allen Roush, discuss the latest trends, news, and research in generative AI, LLMs, GPUs, and cloud systems.
10 Episodes
In this episode, we talked with Michael Bronstein, a professor of AI at the University of Oxford and a scientific director at AITHYRA, about the fascinating world of geometric deep learning. We explored how understanding the geometric structures in data can enhance the efficiency and accuracy of AI models. Michael shared insights on the limitations of small neural networks and the ongoing debate about the role of scaling in AI. We also talked about the future of AI in scientific discovery, and its potential impact on fields like drug design and mathematics.
In this episode we host Tal Kachman, an assistant professor at Radboud University, to explore the fascinating intersection of artificial intelligence and natural sciences. Prof. Kachman's research focuses on multiagent interaction, complex systems, and reinforcement learning. We dive deep into how AI is revolutionizing materials discovery, chemical dynamics modeling, and experimental design through self-driving laboratories. Prof. Kachman shares insights on the challenges of integrating physics and chemistry with AI systems, the critical role of high-throughput experimentation in accelerating scientific discovery, and the transformative potential of generative models to unlock new materials and functionalities.
In this episode, we talked with Ahmad Beirami, a former researcher at Google. We explored the complexities of reinforcement learning, its applications in LLMs, and the evaluation challenges in AI research. We also discussed the dynamics of academic conferences and the broken review system. Finally, we discussed how to integrate theory and practice in AI research and why the community should prioritize a deeper understanding over surface-level improvements.
In this episode of the "Information Bottleneck" podcast, we hosted Aran Nayebi, an assistant professor at Carnegie Mellon University, to discuss the intersection of computational neuroscience and machine learning. We talked about the challenges and opportunities in understanding intelligence through the lens of both biological and artificial systems, covering topics such as the evolution of neural networks, the role of intrinsic motivation in AI, and the future of brain-machine interfaces.
We talked with Ariel Noyman, an urban scientist working at the intersection of cities and technology. Ariel is a research scientist at the MIT Media Lab, exploring novel methods of urban modeling and simulation using AI. We discussed the potential of virtual environments to enhance urban design processes, the challenges associated with them, and the future of utilizing AI.
Links:
TravelAgent: Generative agents in the built environment - https://journals.sagepub.com/doi/10.1177/23998083251360458
Ariel Noyman's websites:
https://www.arielnoyman.com/
https://www.media.mit.edu/people/noyman/overview/
We discussed the inference optimization technique known as speculative decoding with a world-class researcher, expert, and ex-coworker of the podcast hosts: Nadav Timor.
Papers and links:
Accelerating LLM Inference with Lossless Speculative Decoding Algorithms for Heterogeneous Vocabularies, Timor et al., ICML 2025 - https://arxiv.org/abs/2502.05202
Distributed Speculative Inference (DSI): Speculation Parallelism for Provably Faster Lossless Language Model Inference, Timor et al., ICLR 2025 - https://arxiv.org/abs/2405.14105
Fast Inference from Transformers via Speculative Decoding, Leviathan et al., 2022 - https://arxiv.org/abs/2211.17192
FinePDFs - https://huggingface.co/datasets/HuggingFaceFW/finepdfs
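For readers new to the idea: the core loop of (greedy, lossless) speculative decoding can be sketched in a few lines. Below is a toy illustration, not any of the papers' actual implementations; `target_next` and `draft_next` are stand-in functions playing the role of the expensive target model and the cheap draft model, which in practice would be real LLMs with the target verifying all draft tokens in one parallel forward pass.

```python
def target_next(prefix):
    # Toy stand-in for the expensive "target model": next token = sum(prefix) % 7.
    return sum(prefix) % 7

def draft_next(prefix):
    # Toy stand-in for the cheap "draft model": only looks at a short window,
    # so it usually (but not always) agrees with the target.
    return sum(prefix[-3:]) % 7

def greedy_decode(prefix, n_tokens):
    # Baseline: one target call per generated token.
    out = list(prefix)
    for _ in range(n_tokens):
        out.append(target_next(out))
    return out[len(prefix):]

def speculative_decode(prefix, n_tokens, k=4):
    """Greedy speculative decoding: the draft proposes k tokens, the target
    verifies them, and the longest agreeing prefix is kept; on a mismatch the
    target's own token is substituted, so the output is lossless."""
    out = list(prefix)
    while len(out) - len(prefix) < n_tokens:
        # 1) Draft proposes k tokens autoregressively (cheap).
        ctx, proposed = list(out), []
        for _ in range(k):
            t = draft_next(ctx)
            proposed.append(t)
            ctx.append(t)
        # 2) Target verifies the proposals (in practice, one parallel pass).
        for t in proposed:
            if len(out) - len(prefix) >= n_tokens:
                break
            correct = target_next(out)
            if t == correct:
                out.append(t)        # draft token accepted
            else:
                out.append(correct)  # rejected: take the target's token
                break
    return out[len(prefix):]
```

Because every accepted draft token is checked against the target's greedy choice, the output matches plain greedy decoding exactly; the speedup comes from verifying several drafted tokens per target pass.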
In this episode, Ravid and Allen discuss the evolving landscape of AI coding. They explore the rise of AI-assisted development tools, the challenges faced in software engineering, and the potential future of AI in creative fields. The conversation highlights both the benefits and limitations of AI in coding, emphasizing the need for careful consideration of its impact on the industry and society.
Chapters:
00:00 Introduction to AI Coding and Recent Developments
03:10 OpenAI's Paper on Hallucinations in LLMs
06:03 Critique of OpenAI's Research Approach
08:50 Copyright Issues in AI Training Data
12:00 The Value of Data in AI Training
14:50 Watermarking AI-Generated Content
17:54 The Future of AI Investment and Market Dynamics
20:49 AI Coding and Its Impact on Software Development
31:36 The Evolution of AI in Software Development
33:54 Vibe Coding: The Future or a Fad?
38:24 Navigating AI Tools: Personal Experiences and Challenges
41:53 The Limitations of AI in Complex Coding Tasks
46:52 Security Vulnerabilities in AI-Generated Code
50:28 The Role of Human Intuition in AI-Assisted Coding
53:28 The Impact of AI on Developer Productivity
56:53 The Future of AI in Creative Fields
Allen and Ravid discuss the dynamics of the extreme demand for GPUs among AI researchers. They also discuss the latest advancements in AI, including Google's Nano Banana and DeepSeek V3.1, exploring the implications of synthetic data, perplexity, and the influence of AI on human communication. They also delve into the challenges AI researchers face in the job market, the importance of GPU infrastructure, and recent papers examining knowledge and reasoning in LLMs.
Allen and Ravid sit down and talk about Parameter-Efficient Fine-Tuning (PEFT), along with the latest updates in AI/ML news.
Allen and Ravid discuss a topic near and dear to their hearts: LLM sampling!
In this episode of the Information Bottleneck Podcast, Ravid Shwartz-Ziv and Allen Roush discuss the latest developments in AI, focusing on the controversial release of GPT-5 and its implications for users. They explore the future of large language models and the importance of sampling techniques in AI.
Chapters:
00:00 Introduction to the Information Bottleneck Podcast
01:42 The GPT-5 Debacle: Expectations vs. Reality
05:48 Shifting Paradigms in AI Research
09:46 The Future of Large Language Models
12:56 OpenAI's New Model: A Mixed Bag
17:55 Corporate Dynamics in AI: Mergers and Acquisitions
21:39 The GPU Monopoly: Challenges and Opportunities
25:31 Deep Dive into Samplers in AI
35:38 Innovations in Sampling Techniques
42:31 Dynamic Sampling Methods and Their Implications
51:50 Learning Samplers: A New Frontier
59:51 Recent Papers and Their Impact on AI Research
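For listeners unfamiliar with the samplers discussed in this episode, here is a minimal sketch of one classic technique, nucleus (top-p) sampling with temperature, written from scratch over raw logits. This is an illustrative toy, not the episode's code or any particular library's implementation.

```python
import math
import random

def top_p_sample(logits, p=0.9, temperature=1.0, rng=random):
    """Nucleus (top-p) sampling: sample from the smallest set of tokens
    whose cumulative probability mass reaches p."""
    # Temperature-scaled softmax (numerically stabilized by the max trick).
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Keep the highest-probability tokens until their mass reaches p.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, mass = [], 0.0
    for i in order:
        kept.append(i)
        mass += probs[i]
        if mass >= p:
            break
    # Renormalize over the nucleus and draw a token id from it.
    z = sum(probs[i] for i in kept)
    r = rng.random() * z
    for i in kept:
        r -= probs[i]
        if r <= 0:
            return i
    return kept[-1]
```

With a very small `p` the nucleus collapses to the single most likely token, so the call degenerates to greedy decoding; larger `p` (or higher `temperature`) widens the pool and increases diversity.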













