AI Engineering Podcast

Author: Tobias Macey


Description

This show is your guidebook to building scalable and maintainable AI systems. You will learn how to architect AI applications, apply AI to your work, and the considerations involved in building or customizing new models. Everything that you need to know to deliver real impact and value with machine learning and artificial intelligence.
59 Episodes

Summary: In this episode of the AI Engineering Podcast Jeremiah Lowin, founder and CEO of Prefect Technologies, talks about the FastMCP framework and the design of MCP servers. Jeremiah explains the evolution of FastMCP, from its initial creation as a simpler alternative to the MCP SDK to its current role in facilitating the deployment of AI tools. The discussion covers the complexities of designing MCP servers, the importance of context engineering, and the potential pitfalls of overwhelming AI agents with too many tools. Jeremiah also highlights the importance of simplicity and incremental adoption in software design, and shares insights into the future of MCP and the broader AI ecosystem. The episode concludes with a look at the challenges of authentication and authorization in AI applications and the exciting potential of MCP as a protocol for the future of AI-driven business logic.

Announcements: Hello and welcome to the AI Engineering Podcast, your guide to the fast-moving world of building scalable and maintainable AI systems. Your host is Tobias Macey and today I'm interviewing Jeremiah Lowin about the FastMCP framework and how to design and build your own MCP servers.

Interview: Introduction. How did you get involved in machine learning? Can you start by describing what MCP is and its purpose in the ecosystem of AI applications? What is FastMCP and what motivated you to create it? Recognizing that MCP is relatively young, how would you characterize the landscape of MCP frameworks? What are some of the stumbling blocks on the path to building a well engineered MCP server? What are the potential ramifications of poorly designed and implemented MCP implementations? In the overall context of an AI-powered/agentic application, what are the tradeoffs of investing in the MCP protocol? (e.g. engineering effort, process isolation, tool creation, auth(n|z), etc.) In your experience, what are the architectural patterns that you see of MCP implementation and usage? There are a multitude of MCP servers available for a variety of use cases. What are the key factors that someone should be using to evaluate their viability for a production use case? Can you give an overview of the key characteristics of FastMCP and why someone might select it as their implementation target for a custom MCP server? How have the design, scope, and goals of the project evolved since you first started working on it? For someone who is using FastMCP as the framework for creating their own AI tools, what are some of the design considerations or best practices that they should be aware of? What are some of the ways that someone might consider integrating FastMCP into their existing Python-powered web applications (e.g. FastAPI, Django, Flask, etc.)? As you continue to invest your time and energy into FastMCP, what is your overall goal for the project? What are the most interesting, innovative, or unexpected ways that you have seen FastMCP used? What are the most interesting, unexpected, or challenging lessons that you have learned while working on FastMCP? When is FastMCP the wrong choice? What do you have planned for the future of FastMCP?

Contact Info: LinkedIn, GitHub

Parting Question: From your perspective, what are the biggest gaps in tooling, technology, or training for AI systems today?

Closing Announcements: Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email hosts@aiengineeringpodcast.com with your story. To help other people find the show please leave a review on iTunes and tell your friends and co-workers.

Links: FastMCP, FastMCP Cloud, Prefect, Model Context Protocol (MCP), AI Tools, FastAPI, Python Decorator, Websockets, SSE == Server-Sent Events, Streamable HTTP, OAuth, MCP Gateway, MCP Sampling, Flask, Django, ASGI, MCP Elicitation, AuthKit, Dynamic Client Registration, smolagents, Large Active Models, A2A

The intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra / CC BY-SA 3.0
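
The pattern discussed in this episode, exposing plain Python functions as MCP tools through decorators, is easier to see in code. The sketch below is a minimal FastMCP server based on the project's public documentation; treat the exact decorator forms, the resource URI scheme, and the default transport as assumptions to verify against the current docs rather than a definitive reference.

```python
# Minimal sketch of an MCP server built with FastMCP (details are assumptions).
from fastmcp import FastMCP

mcp = FastMCP("Demo Server")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers; the signature and docstring become the tool schema."""
    return a + b

@mcp.resource("config://version")
def version() -> str:
    """Expose a read-only resource that clients can fetch for context."""
    return "1.0.0"

if __name__ == "__main__":
    # Runs over stdio by default; HTTP-based transports are also supported.
    mcp.run()
```

Keeping each tool small and narrowly scoped also speaks to the episode's warning about overwhelming agents: fewer, better-described tools give the model less surface area to misuse.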

Summary: In this episode of the AI Engineering Podcast Dr. Tara Javidi, CTO of KavAI, talks about developing AI systems for proactive monitoring in heavy industry. Dr. Javidi shares her background in mathematics and information theory, influenced by Claude Shannon's work, and discusses her approach to curiosity-driven AI that mimics human curiosity to improve data collection and predictive analytics. She explains how KavAI's platform uses generative AI models to enhance industrial monitoring by addressing informational blind spots and reducing reliance on human oversight. The conversation covers the architecture of KavAI's systems, integrating AI with existing workflows, building trust with operators, and the societal impact of AI in preventing environmental catastrophes, ultimately highlighting the future potential of information-centric AI models.

Announcements: Hello and welcome to the AI Engineering Podcast, your guide to the fast-moving world of building scalable and maintainable AI systems. Your host is Tobias Macey and today I'm interviewing Dr. Tara Javidi about building AI systems for proactive monitoring of physical environments for heavy industry.

Interview: Introduction. How did you get involved in machine learning? Can you describe what KavAI is and the story behind it? What are some of the current state-of-the-art applications of AI/ML for monitoring and accident prevention in industrial environments? What are the shortcomings of those approaches? What are some examples of the types of harm that you are focused on preventing or mitigating with your platform? On your site it mentions that you have created a foundation model for physical awareness. What are some examples of the types of predictive/generative capabilities that your model provides? A perennial challenge when building any digital model of a physical system is the lack of absolute fidelity. What are the key sources of information acquisition that you rely on for your platform? In addition to your foundation model, what are the other systems that you incorporate to perform analysis and catalyze action? Can you describe the overall system architecture of your platform? What are some of the ways that you are able to integrate learnings across industries and environments to improve the overall capacity of your models? What are the most interesting, innovative, or unexpected ways that you have seen KavAI used? What are the most interesting, unexpected, or challenging lessons that you have learned while working on KavAI? When is KavAI/Physical AI the wrong choice? What do you have planned for the future of KavAI?

Contact Info: LinkedIn

Parting Question: From your perspective, what are the biggest gaps in tooling, technology, or training for AI systems today?

Links: KavAI, Information Theory, Claude Shannon

The intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra / CC BY-SA 3.0

Summary: In this episode of the AI Engineering Podcast machine learning engineer Shashank Kapadia explores the transformative role of generative AI in retail. Shashank shares his journey from an engineering background to becoming a key player in ML, highlighting the excitement of understanding human behavior at scale through AI. He discusses the challenges and opportunities presented by generative AI in retail, where it complements traditional ML by enhancing explainability and personalization, predicting consumer needs, and driving autonomous shopping agents and emotional commerce. Shashank elaborates on the architectural and operational shifts required to integrate generative AI into existing systems, emphasizing orchestration, safety nets, and continuous learning loops, while also addressing the balance between building and buying AI solutions, considering factors like data privacy and customization.

Announcements: Hello and welcome to the AI Engineering Podcast, your guide to the fast-moving world of building scalable and maintainable AI systems. Your host is Tobias Macey and today I'm interviewing Shashank Kapadia about applications of generative AI in retail.

Interview: Introduction. How did you get involved in machine learning? Can you summarize the main applications of generative AI that you are seeing the most benefit from in retail/ecommerce? What are the major architectural patterns that you are deploying for generative AI workloads? Working at an organization like Walmart, you already had a substantial investment in ML/MLOps. What are the elements of that organizational capability that remain the same, and what are the catalyzed changes as a result of generative models? When working at the scale of Walmart, what are the different types of bottlenecks that you encounter which can be ignored at smaller orders of magnitude? Generative AI introduces new risks around brand reputation, accuracy, trustworthiness, etc. What are the architectural components that you find most effective in managing and monitoring the interactions that you provide to your customers? Can you describe the architecture of the technical systems that you have built to enable the organization to take advantage of generative models? What are the human elements that you rely on to ensure the safety of your AI products? What are the most interesting, innovative, or unexpected ways that you have seen generative AI break at scale? What are the most interesting, unexpected, or challenging lessons that you have learned while working on AI? When is generative AI the wrong choice? What are you paying special attention to over the next 6 - 36 months in AI?

Contact Info: LinkedIn

Parting Question: From your perspective, what are the biggest gaps in tooling, technology, or training for AI systems today?

Closing Announcements: Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email hosts@aiengineeringpodcast.com with your story. To help other people find the show please leave a review on iTunes and tell your friends and co-workers.

Links: Walmart Labs

The intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra / CC BY-SA 3.0

Summary: In this episode of the AI Engineering Podcast Adam Honig, founder of Spiro AI, talks about using AI to automate CRM systems, particularly in the manufacturing sector. Adam shares his journey from running a consulting company focused on Salesforce to founding Spiro, and discusses the challenges of traditional CRM systems where data entry is often neglected. He explains how Spiro addresses this issue by automating data collection from emails, phone calls, and other communications, providing a rich dataset for machine learning models to generate valuable insights. Adam highlights how Spiro's AI-driven CRM system is tailored to the manufacturing industry's unique needs, where sales are relationship-driven rather than funnel-based, and emphasizes the importance of understanding customer interactions and order histories to predict future business opportunities. The conversation also touches on the evolution of AI models, leveraging powerful third-party APIs, managing context windows, and platform dependencies, with Adam sharing insights into Spiro's future plans, including product recommendations and dynamic data modeling approaches.

Announcements: Hello and welcome to the AI Engineering Podcast, your guide to the fast-moving world of building scalable and maintainable AI systems. Your host is Tobias Macey and today I'm interviewing Adam Honig about using AI to automate CRM maintenance.

Interview: Introduction. How did you get involved in machine learning? Can you describe what Spiro is and the story behind it? What are the specific challenges posed by the manufacturing industry with regards to sales and customer interactions? How does the type of manufacturing and target customer influence the level of effort and communication involved in the sales and customer service cycles? Before we discuss the opportunities for automation, can you describe the typical interaction patterns and workflows involved in the care and feeding of CRM systems? Spiro has been around since 2014, long pre-dating the current era of generative models. What were your initial targets for improving efficiency and reducing toil for your customers with the aid of AI/ML? How have the generational changes of deep learning and now generative AI changed the ways that you think about what is possible in your product? Generative models reduce the level of effort to get a proof of concept for language-oriented workflows. How are you pairing them with more narrow AI that you have built? Can you describe the overall architecture of your platform and how it has evolved in recent years? While generative models are powerful, they can also become expensive, and the costs are hard to predict. How are you thinking about vendor selection and platform risk in the application of those models? What are the opportunities that you see for the adoption of more autonomous applications of language models in your product? (e.g. agents) What are the confidence building steps that you are focusing on as you investigate those opportunities? What are the most interesting, innovative, or unexpected ways that you have seen Spiro used? What are the most interesting, unexpected, or challenging lessons that you have learned while working on AI in the CRM space? When is AI the wrong choice for CRM workflows? What do you have planned for the future of Spiro?

Contact Info: LinkedIn

Parting Question: From your perspective, what are the biggest gaps in tooling, technology, or training for AI systems today?

Closing Announcements: Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email hosts@aiengineeringpodcast.com with your story. To help other people find the show please leave a review on iTunes and tell your friends and co-workers.

Links: Spiro, Deepgram, Cognee Episode, Agentic Memory, GraphRAG, Podcast Episode, OpenAI Assistant API

The intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra / CC BY-SA 3.0

Summary: In this episode of the AI Engineering podcast Anush Elangovan, VP of AI software at AMD, discusses the strategic integration of software and hardware at AMD. He emphasizes the open-source nature of their software, fostering innovation and collaboration in the AI ecosystem, and highlights AMD's performance and capability advantages over competitors like NVIDIA. Anush addresses challenges and opportunities in AI development, including quantization, model efficiency, and future deployment across various platforms, while also stressing the importance of open standards and flexible solutions that support efficient CPU-GPU communication and diverse AI workloads.

Announcements: Hello and welcome to the AI Engineering Podcast, your guide to the fast-moving world of building scalable and maintainable AI systems. Your host is Tobias Macey and today I'm interviewing Anush Elangovan about AMD's work to expand the playing field for AI training and inference.

Interview: Introduction. How did you get involved in machine learning? Can you describe what your work at AMD is focused on? A lot of the current attention on hardware for AI training and inference is focused on the raw GPU hardware. What is the role of the software stack in enabling and differentiating that underlying compute? CUDA has gained a significant amount of attention and adoption in the numeric computation space (AI, ML, scientific computing, etc.). What are the elements of platform risk associated with relying on CUDA as a developer or organization? The ROCm stack is the key element in AMD's AI and HPC strategy. What are the elements that comprise that ecosystem? What are the incentives for anyone outside of AMD to contribute to the ROCm project? How would you characterize the current competitive landscape for AMD across the AI/ML lifecycle stages? (pre-training, post-training, inference, fine-tuning) For teams who are focused on inference compute for model serving, what do they need to know/care about in regards to AMD hardware and the ROCm stack? What are the most interesting, innovative, or unexpected ways that you have seen AMD/ROCm used? What are the most interesting, unexpected, or challenging lessons that you have learned while working on AMD's AI software ecosystem? When is AMD/ROCm the wrong choice? What do you have planned for the future of ROCm?

Contact Info: LinkedIn

Parting Question: From your perspective, what are the biggest gaps in tooling, technology, or training for AI systems today?

Closing Announcements: Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email hosts@aiengineeringpodcast.com with your story. To help other people find the show please leave a review on iTunes and tell your friends and co-workers.

Links: ImageNet, AMD, ROCm, CUDA, HuggingFace, Llama 3, Llama 4, Qwen, DeepSeek R1, MI300X, Nokia Symbian, UALink Standard, Quantization, HIPIFY, ROCm Triton, AMD Strix Halo, AMD Epyc, Liquid Networks, MAMBA Architecture, Transformer Architecture, NPU == Neural Processing Unit, llama.cpp, Ollama, Perplexity Score, NUMA == Non-Uniform Memory Access, vLLM, SGLang

The intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra / CC BY-SA 3.0
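
A quick way to see the software-stack point from this episode in practice is that ROCm builds of PyTorch reuse the familiar torch.cuda device namespace (via HIP), so code written for NVIDIA GPUs can typically run unmodified on AMD accelerators. The snippet below is a generic portability check, not AMD-provided sample code.

```python
# Hedged sketch: the same device-selection code targets NVIDIA or AMD GPUs,
# because ROCm builds of PyTorch expose AMD devices through torch.cuda (HIP).
import torch

if torch.cuda.is_available():
    device = torch.device("cuda")  # on a ROCm build this resolves to an AMD GPU
    print("Accelerator:", torch.cuda.get_device_name(0))
else:
    device = torch.device("cpu")
    print("No accelerator found; falling back to CPU")

x = torch.randn(1024, 1024, device=device)
y = x @ x  # matrix multiply runs on whichever device was selected
print(y.shape, y.device)
```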

Summary: In this episode of the AI Engineering Podcast Ori Silberberg, VP of Engineering at Buildots, talks about transforming the construction industry with AI. Ori shares how Buildots uses computer vision and AI to optimize construction projects by providing real-time feedback, reducing delays, and improving efficiency. Learn about the complexities of digitizing the construction industry, the technical architecture of Buildots, and how its AI-driven solutions create a digital twin of construction sites. Ori emphasizes the importance of explainability and actionable insights in AI decision-making, highlighting the potential of generative AI to further enhance the construction process from planning to execution.

Announcements: Hello and welcome to the AI Engineering Podcast, your guide to the fast-moving world of building scalable and maintainable AI systems. Your host is Tobias Macey and today I'm interviewing Ori Silberberg about applications of AI for optimizing building construction.

Interview: Introduction. How did you get involved in machine learning? Can you describe what Buildots is and the story behind it? What types of construction projects are you focused on? (e.g. residential, commercial, industrial, etc.) What are the main types of inefficiencies that typically occur on those types of job sites? What are the manual and technical processes that the industry has typically relied on to address those sources of waste and delay? In many ways the construction industry is as old as civilization. What are the main ways that the information age has transformed construction? What are the elements of the construction industry that make it resistant to digital transformation? Can you describe how you are applying AI to this complex and messy problem? What are the types of data that you are able to collect? How are you automating that data collection so that construction crews don't have to add extra work or distractions to their day? For construction crews that are using Buildots, can you talk through how it integrates into the overall process from site planning to project completion? Can you describe the technical architecture of the Buildots platform? Given the safety critical nature of construction, how does that influence the way that you think about the types of AI models that you use and where to apply them? What are the most interesting, innovative, or unexpected ways that you have seen Buildots used? What are the most interesting, unexpected, or challenging lessons that you have learned while working on Buildots? What do you have planned for the future of AI usage at Buildots?

Contact Info: LinkedIn

Parting Question: From your perspective, what are the biggest gaps in tooling, technology, or training for AI systems today?

Closing Announcements: Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email hosts@aiengineeringpodcast.com with your story. To help other people find the show please leave a review on iTunes and tell your friends and co-workers.

Links: Buildots, CAD == Computer Aided Design, Computer Vision, LIDAR, GC == General Contractor, Kubernetes

The intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra / CC BY-SA 3.0

Summary: In this episode of the AI Engineering Podcast Jamie De Guerre, founding SVP of product at Together.ai, explores the role of open models in the AI economy. As a veteran of the AI industry, including his time leading product marketing for AI and machine learning at Apple, Jamie shares insights on the challenges and opportunities of operating open models at speed and scale. He delves into the importance of open source in AI, the evolution of the open model ecosystem, and how Together.ai's AI acceleration cloud is contributing to this movement with a focus on performance and efficiency.

Announcements: Hello and welcome to the AI Engineering Podcast, your guide to the fast-moving world of building scalable and maintainable AI systems. Your host is Tobias Macey and today I'm interviewing Jamie de Guerre about the role of open models in the AI economy and how to operate them at speed and at scale.

Interview: Introduction. How did you get involved in machine learning? Can you describe what Together AI is and the story behind it? What are the key goals of the company? The initial rounds of open models were largely driven by massive tech companies. How would you characterize the current state of the ecosystem that is driving the creation and evolution of open models? There was also a lot of argument about what "open source" and "open" means in the context of ML/AI models, and the different variations of licenses being attached to them (e.g. the Meta license for Llama models). What is the current state of the language used and understanding of the restrictions/freedoms afforded? What are the phases of organizational/technical evolution from initial use of open models through fine-tuning, to custom model development? Can you outline the technical challenges companies face when trying to train or run inference on large open models themselves? What factors should a company consider when deciding whether to fine-tune an existing open model versus attempting to train a specialized one from scratch? While Transformers dominate the LLM landscape, there's ongoing research into alternative architectures. Are you seeing significant interest or adoption of non-Transformer architectures for specific use cases? When might those other architectures be a better choice? While open models offer tremendous advantages like transparency, control, and cost-effectiveness, are there scenarios where relying solely on them might be disadvantageous? When might proprietary models or a hybrid approach still be the better choice for a specific problem? Building and scaling AI infrastructure is notoriously complex. What are the most significant technical or strategic challenges you've encountered at Together AI while enabling scalable access to open models for your users? What are the most interesting, innovative, or unexpected ways that you have seen open models/the TogetherAI platform used? What are the most interesting, unexpected, or challenging lessons that you have learned while working on powering AI model training and inference? Where do you see the open model space heading in the next 1-2 years? Any specific trends or breakthroughs you anticipate?

Contact Info: LinkedIn

Parting Question: From your perspective, what are the biggest gaps in tooling, technology, or training for AI systems today?

Closing Announcements: Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email hosts@aiengineeringpodcast.com with your story. To help other people find the show please leave a review on iTunes and tell your friends and co-workers.

Links: Together AI, Fine Tuning, Post-Training, Salesforce Research, Mistral, Agentforce, Llama Models, RLHF == Reinforcement Learning from Human Feedback, RLVR == Reinforcement Learning from Verifiable Rewards, Test Time Compute, HuggingFace, RAG == Retrieval Augmented Generation, Podcast Episode, Google Gemma, Llama 4 Maverick, Prompt Engineering, vLLM, SGLang, Hazy Research lab, State Space Models, Hyena Model, Mamba Architecture, Diffusion Model Architecture, Stable Diffusion, Black Forest Labs Flux Model, Nvidia Blackwell, PyTorch, Rust, Deepseek R1, GGUF, Pika Text To Video

The intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra / CC BY-SA 3.0
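
One practical reason hosted open-model platforms have gained traction, a theme of this episode, is that most of them expose OpenAI-compatible chat endpoints, so moving from a proprietary API to an open model can be close to a one-line change. The base URL and model identifier below are illustrative assumptions; check the provider's documentation for current values.

```python
# Hedged sketch: calling a hosted open model through an OpenAI-compatible API.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.together.xyz/v1",   # assumed OpenAI-compatible endpoint
    api_key=os.environ["TOGETHER_API_KEY"],   # provider-issued key
)

response = client.chat.completions.create(
    model="meta-llama/Llama-3.3-70B-Instruct-Turbo",  # placeholder model id
    messages=[{"role": "user", "content": "In two sentences, why do open model weights matter?"}],
    max_tokens=150,
)
print(response.choices[0].message.content)
```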

Summary: In this episode of the AI Engineering Podcast, host Tobias Macey sits down with Ben Wilde, Head of Innovation at Georgian, to explore the transformative impact of agentic AI on business operations and the SaaS industry. From his early days working with vintage AI systems to his current focus on product strategy and innovation in AI, Ben shares his expertise on what he calls the "continuum" of agentic AI - from simple function calls to complex autonomous systems. Join them as they discuss the challenges and opportunities of integrating agentic AI into business systems, including organizational alignment, technical competence, and the need for standardization. They also dive into emerging protocols and the evolving landscape of AI-driven products and services, including usage-based pricing models and advancements in AI infrastructure and reliability.

Announcements: Hello and welcome to the AI Engineering Podcast, your guide to the fast-moving world of building scalable and maintainable AI systems. Your host is Tobias Macey and today I'm interviewing Ben Wilde about the impact of agentic AI on business operations and SaaS as we know it.

Interview: Introduction. How did you get involved in machine learning? Can you start by sharing your definition of what constitutes "agentic AI"? There have been several generations of automation for business and product use cases. In your estimation, what are the substantive differences between agentic AI and e.g. RPA (Robotic Process Automation)? How do the inherent risks and operational overhead impact the calculus of whether and where to apply agentic capabilities? For teams that are aiming for agentic capabilities, what are the stepping stones along that path? Beyond the technical capacity, there are numerous elements of organizational alignment that are required to make full use of the capabilities of agentic processes. What are some of the strategic investments that are necessary to get the whole business pointed in the same direction for adopting and benefitting from AI agents? The most recent splash in the space of agentic AI is the introduction of the Model Context Protocol, and various responses to it. What do you see as the near and medium term impact of this effort on the ecosystem of AI agents and their architecture? Software products have gone through several major evolutions since the days of CD-ROMs in the 90s. The current era has largely been oriented around the model of subscription-based software delivered via browser or mobile-based UIs over the internet. How does the pending age of AI agents upend that model? What are the most interesting, innovative, or unexpected ways that you have seen agentic AI used for business and product capabilities? What are the most interesting, unexpected, or challenging lessons that you have learned while working with businesses adopting agentic AI capabilities? When is agentic AI the wrong choice? What are the ongoing developments in agentic capabilities that you are monitoring?

Contact Info: Email, LinkedIn

Parting Question: From your perspective, what are the biggest gaps in tooling, technology, or training for AI systems today?

Closing Announcements: Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email hosts@aiengineeringpodcast.com with your story. To help other people find the show please leave a review on iTunes and tell your friends and co-workers.

Links: Georgian, Agentic Platforms And Applications, Differential Privacy, Agentic AI, Language Model, Reasoning Model, Robotic Process Automation, OFAC, OpenAI Deep Research, Model Context Protocol, Georgian AI Adoption Survey, Google Agent to Agent Protocol, GraphQL, TPU == Tensor Processing Unit, Chris Lattner, CUDA, NeuroSymbolic AI, Prolog

The intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra / CC BY-SA 3.0

Summary: In this episode of the AI Engineering Podcast Kasimir Schulz, Director of Security Research at HiddenLayer, talks about the complexities and security challenges in AI and machine learning models. Kasimir explains the concept of shadow genes and shadow logic, which involve identifying common subgraphs within neural networks to understand model ancestry and potential vulnerabilities, and emphasizes the importance of understanding the attack surface in AI integrations, scanning models for security threats, and evolving awareness in AI security practices to mitigate risks in deploying AI systems.

Announcements: Hello and welcome to the AI Engineering Podcast, your guide to the fast-moving world of building scalable and maintainable AI systems. Your host is Tobias Macey and today I'm interviewing Kasimir Schulz about the relationships between the various models on the market and how that information helps with selecting and protecting models for your applications.

Interview: Introduction. How did you get involved in machine learning? Can you start by outlining the current state of the threat landscape for ML and AI systems? What are the main areas of overlap in risk profiles between prediction/classification and generative models? (primarily from an attack surface/methodology perspective) What are the significant points of divergence? What are some of the categories of potential damages that can be created through the deployment of compromised models? How does the landscape of foundation models introduce new challenges around supply chain security for organizations building with AI? You recently published your findings on the potential to inject subgraphs into model architectures that are invisible during normal operation of the model. Along with that you wrote about the subgraphs that are shared between different classes of models. What are the key learnings that you would like to highlight from that research? What action items can organizations and engineering teams take in light of that information? Platforms like HuggingFace offer numerous variations of popular models with variations around quantization, various levels of finetuning, model distillation, etc. That is obviously a benefit to knowledge sharing and ease of access, but how does that exacerbate the potential threat in the face of backdoored models? Beyond explicit backdoors in model architectures, there are numerous attack vectors to generative models in the form of prompt injection, "jailbreaking" of system prompts, etc. How does the knowledge of model ancestry help with identifying and mitigating risks from that class of threat? A common response to that threat is the introduction of model guardrails with pre- and post-filtering of prompts and responses. How can that approach help to address the potential threat of backdoored models as well? For a malicious actor that develops one of these attacks, what is the vector for introducing the compromised model into an organization? Once that model is in use, what are the possible means by which the malicious actor can detect its presence for purposes of exploitation? What are the most interesting, innovative, or unexpected ways that you have seen the information about model ancestry used? What are the most interesting, unexpected, or challenging lessons that you have learned while working on ShadowLogic/ShadowGenes? What are some of the other means by which the operation of ML and AI systems introduce attack vectors to organizations running them?

Contact Info: LinkedIn

Parting Question: From your perspective, what are the biggest gaps in tooling, technology, or training for AI systems today?

Closing Announcements: Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email hosts@aiengineeringpodcast.com with your story. To help other people find the show please leave a review on iTunes and tell your friends and co-workers.

Links: HiddenLayer, Zero-Day Vulnerability, MCP Blog Post, Python Pickle Object Serialization, SafeTensors, Deepseek, Huggingface Transformers, KROP == Knowledge Return Oriented Prompting, XKCD "Little Bobby Tables", OWASP Top 10 For LLMs, CVE AI Systems Working Group, Refusal Vector Ablation, Foundation Model, ShadowLogic, ShadowGenes, Bytecode, ResNet == Residual Neural Network, YOLO == You Only Look Once, Netron, BERT, RoBERTa, Shodan, CTF == Capture The Flag, Titan Bedrock Image Generator

The intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra / CC BY-SA 3.0
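
A concrete example of the model supply-chain risk raised in this conversation is Python's pickle format, which many older model checkpoints still use: loading a pickle can execute arbitrary code, which is one reason safetensors is generally preferred. The sketch below uses only the standard library to surface opcodes that can invoke callables; it is an illustration of the idea, not HiddenLayer's scanner.

```python
# Illustrative pickle audit: list opcodes that can trigger code execution at load
# time. A teaching sketch, not a substitute for a real model-scanning tool.
import pickletools

SUSPICIOUS = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ"}

def audit_pickle(path: str) -> list[str]:
    """Return descriptions of opcodes in a pickle file that can invoke callables."""
    with open(path, "rb") as f:
        data = f.read()
    findings = []
    for opcode, arg, pos in pickletools.genops(data):
        if opcode.name in SUSPICIOUS:
            findings.append(f"{opcode.name} at byte {pos}: {arg!r}")
    return findings

# Hypothetical usage:
# for finding in audit_pickle("downloaded_model.pkl"):
#     print(finding)
```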

Summary: In this episode of the AI Engineering podcast Julian LaNeve, CTO of Astronomer, talks about transitioning from simple LLM applications to more complex agentic AI systems. Julian shares insights into the challenges and considerations of this evolution, emphasizing the importance of starting with simpler applications to build operational knowledge and intuition. He discusses the parallels between microservices and agentic AI, highlighting the need for careful orchestration and observability to manage complexity and ensure reliability, and explores the technical requirements for deploying AI systems, including data infrastructure, orchestration tools like Apache Airflow, and understanding the probabilistic nature of AI models.

Announcements: Hello and welcome to the AI Engineering Podcast, your guide to the fast-moving world of building scalable and maintainable AI systems. Seamless data integration into AI applications often falls short, leading many to adopt RAG methods, which come with high costs, complexity, and limited scalability. Cognee offers a better solution with its open-source semantic memory engine that automates data ingestion and storage, creating dynamic knowledge graphs from your data. Cognee enables AI agents to understand the meaning of your data, resulting in accurate responses at a lower cost. Take full control of your data in LLM apps without unnecessary overhead. Visit aiengineeringpodcast.com/cognee to learn more and elevate your AI apps and agents. Your host is Tobias Macey and today I'm interviewing Julian LaNeve about how to avoid putting the cart before the horse with AI applications. When do you move from "simple" LLM apps to agentic AI and what's the path to get there?

Interview: Introduction. How did you get involved in machine learning? How do you technically distinguish "agentic AI" (e.g., involving planning, tool use, memory) from "simpler LLM workflows" (e.g., stateless transformations, RAG)? What are the key differences in operational complexity and potential failure modes? What specific technical challenges (e.g., state management, observability, non-determinism, prompt fragility, cost explosion) are often underestimated when teams jump directly into building stateful, autonomous agents? What are the pre-requisites from a data and infrastructure perspective before going to production with agentic applications? How does that differ from the chat-based systems that companies might be experimenting with? Technically, where do you most often see ambitious agent projects break down during development or early deployment? Beyond generic data quality, what specific data engineering practices become critical when building reliable LLM applications? (e.g., designing data pipelines for efficient RAG chunking/embedding, versioning prompts alongside data, caching strategies for LLM calls, managing vector database ETL) From an implementation complexity standpoint, what characterizes tasks well-suited for initial LLM workflow adoption versus those genuinely requiring agentic capabilities? Can you share examples (anonymized if necessary) highlighting how organizations successfully engineered these simpler LLM workflows? What specific technical designs, tooling choices, or MLOps practices were key to their reliability and scalability? What are some hard-won technical or operational lessons from deploying and scaling LLM workflows in production environments? Any surprising performance bottlenecks, cost issues, or monitoring challenges engineers should anticipate? What technical maturity signals (e.g., robust CI/CD for ML, established monitoring/alerting for pipelines, automated evaluation frameworks, cost tracking mechanisms) suggest an engineering team might be ready to tackle the challenges of building and operating agentic systems? How does the technical stack and engineering process need to evolve when moving from orchestrated LLM workflows towards more complex agents involving memory, planning, and dynamic tool use? What new components and failure modes must be engineered for? How do you foresee orchestration platforms evolving to better serve the needs of AI engineers building LLM apps? What are the most interesting, innovative, or unexpected ways that you have seen organizations build toward advanced AI use cases? What are the most interesting, unexpected, or challenging lessons that you have learned while working on supporting AI services? When is AI the wrong choice? What is the single most critical piece of engineering advice you would give to fellow AI engineers who are tasked with integrating LLMs into production systems right now?

Contact Info: LinkedIn, GitHub

Parting Question: From your perspective, what are the biggest gaps in tooling, technology, or training for AI systems today?

Links: Astronomer, Airflow, Anthropic, Building Effective Agents post from Anthropic, Airflow 3.0, Microservices, Pydantic AI, Langchain, LlamaIndex, LLM As A Judge, SWE (Software Engineering) Bench, Cursor, Windsurf, OpenTelemetry, DAG == Directed Acyclic Graph, Halting Problem, AI Long Term Memory

The intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra / CC BY-SA 3.0
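
The "start with a simple LLM workflow" advice in this episode maps naturally onto an orchestrator like Airflow: a scheduled pipeline with a single, stateless model call is easy to observe, retry, and reason about, unlike an autonomous agent. The sketch below uses Airflow's TaskFlow API; the task bodies are placeholders and the DAG name and schedule are assumptions for illustration only.

```python
# Hedged sketch of a "simple LLM workflow" as an Airflow DAG: extract context,
# make one stateless model call, store the result. No planning, memory, or tools.
from datetime import datetime
from airflow.decorators import dag, task

@dag(schedule="@daily", start_date=datetime(2025, 1, 1), catchup=False)
def summarize_support_tickets():

    @task
    def extract() -> list[str]:
        # Placeholder: pull yesterday's tickets from a warehouse or API.
        return ["ticket text 1", "ticket text 2"]

    @task
    def summarize(tickets: list[str]) -> str:
        # Placeholder for a single LLM call; failures here are retried by Airflow.
        return f"summary of {len(tickets)} tickets"

    @task
    def load(summary: str) -> None:
        # Placeholder: write the summary to a table or notification channel.
        print(summary)

    load(summarize(extract()))

summarize_support_tickets()
```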

Summary: In this episode of the AI Engineering Podcast Emmanouil (Manos) Koukoumidis, CEO of Oumi, talks about his vision for an open platform for building, evaluating, and deploying AI foundation models. Manos shares his journey from working on natural language AI services at Google Cloud to founding Oumi with a mission to advance open-source AI, emphasizing the importance of community collaboration and accessibility. He discusses the need for open-source models that are not constrained by proprietary APIs, highlights the role of Oumi in facilitating open collaboration, and touches on the complexities of model development, open data, and community-driven advancements in AI. He also explains how Oumi can be used throughout the entire lifecycle of AI model development, post-training, and deployment.

Announcements: Hello and welcome to the AI Engineering Podcast, your guide to the fast-moving world of building scalable and maintainable AI systems. Your host is Tobias Macey and today I'm interviewing Manos Koukoumidis about Oumi, an all-in-one production-ready open platform to build, evaluate, and deploy AI models.

Interview: Introduction. How did you get involved in machine learning? Can you describe what Oumi is and the story behind it? There are numerous projects, both full suites and point solutions, focused on every aspect of "AI" development. What is the unique value that Oumi provides in this ecosystem? You have stated the desire for Oumi to become the Linux of AI development. That is an ambitious goal and one that Linux itself didn't start with. What do you see as the biggest challenges that need addressing to reach a critical mass of adoption? In the vein of "open source" AI, the most notable project that I'm aware of that fits the proper definition is the OLMO models from AI2. What lessons have you learned from their efforts that influence the ways that you think about your work on Oumi? On the community building front, HuggingFace has been the main player. What do you see as the benefits and shortcomings of that platform in the context of your vision for open and collaborative AI? Can you describe the overall design and architecture of Oumi? How did you approach the selection process for the different components that you are building on top of? What are the extension points that you have incorporated to allow for customization/evolution? Some of the biggest barriers to entry for building foundation models are the cost and availability of hardware used for training, and the ability to collect and curate the data needed. How does Oumi help with addressing those challenges? For someone who wants to build or contribute to an open source model, what does that process look like? How do you envision the community building/collaboration process? Your overall goal is to build a foundation for the growth and well-being of truly open AI. How are you thinking about the sustainability of the project and the funding needed to grow and support the community? What are the most interesting, innovative, or unexpected ways that you have seen Oumi used? What are the most interesting, unexpected, or challenging lessons that you have learned while working on Oumi? When is Oumi the wrong choice? What do you have planned for the future of Oumi?

Contact Info: LinkedIn

Parting Question: From your perspective, what are the biggest gaps in tooling, technology, or training for AI systems today?

Closing Announcements: Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email hosts@aiengineeringpodcast.com with your story. To help other people find the show please leave a review on iTunes and tell your friends and co-workers.

Links: Oumi, Cloud PaLM, Google Gemini, DeepMind, LSTM == Long Short-Term Memory, Transformers, ChatGPT, Partial Differential Equation, OLMO, OSI AI definition, MLFlow, Metaflow, SkyPilot, Llama, RAG, Podcast Episode, Synthetic Data, Podcast Episode, LLM As Judge, SGLang, vLLM, Function Calling Leaderboard, Deepseek

The intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra / CC BY-SA 3.0

Summary: In this episode of the AI Engineering Podcast Adil Hafeez talks about the Arch project, a gateway designed to simplify the integration of AI agents into business systems. He discusses how the gateway uses Rust and Envoy to provide a unified interface for handling prompts and integrating large language models (LLMs), allowing developers to focus on core business logic rather than AI complexities. The conversation also touches on the target audience, challenges, and future directions for the project, including plans to develop a leading planning LLM and enhance agent interoperability.

Announcements: Hello and welcome to the AI Engineering Podcast, your guide to the fast-moving world of building scalable and maintainable AI systems. Your host is Tobias Macey and today I'm interviewing Adil Hafeez about the Arch project, a gateway for your AI agents.

Interview: Introduction. How did you get involved in machine learning? Can you describe what Arch is and the story behind it? How do you think about the target audience for Arch and the types of problems/projects that they are responsible for? The general category of LLM gateways is largely oriented toward abstracting the specific model provider being called. What are the areas of overlap and differentiation in Arch? Many of the features in Arch are also available in AI frameworks (e.g. LangChain, LlamaIndex, etc.), such as request routing, guardrails, and tool calling. How do you think about the architectural tradeoffs of having that functionality in a gateway service? What is the workflow for someone building an application with Arch? Can you describe the architecture and components of the Arch gateway? With the pace of change in the AI/LLM ecosystem, how have you designed the Arch project to allow for rapid evolution and extensibility? What are the most interesting, innovative, or unexpected ways that you have seen Arch used? What are the most interesting, unexpected, or challenging lessons that you have learned while working on Arch? When is Arch the wrong choice? What do you have planned for the future of Arch?

Contact Info: LinkedIn, GitHub

Parting Question: From your perspective, what are the biggest gaps in tooling, technology, or training for AI systems today?

Closing Announcements: Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email hosts@aiengineeringpodcast.com with your story. To help other people find the show please leave a review on iTunes and tell your friends and co-workers.

Links: Arch Gateway, Gradient Boosting, Envoy, LLM Gateway, Huggingface, Katanemo Models, Qwen2.5, Rust Clippy

The intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra / CC BY-SA 3.0

Summary: In this episode of the AI Engineering Podcast Ali Golshan, co-founder and CEO of Gretel.ai, talks about the transformative role of synthetic data in AI systems. Ali explains how synthetic data can be purpose-built for AI use cases, emphasizing privacy, quality, and structural stability. He highlights the shift from traditional methods to using language models, which offer enhanced capabilities in understanding data's deep structure and generating high-quality datasets. The conversation explores the challenges and techniques of integrating synthetic data into AI systems, particularly in production environments, and concludes with insights into the future of synthetic data, including its application in various industries, the importance of privacy regulations, and the ongoing evolution of AI systems.

Announcements: Hello and welcome to the AI Engineering Podcast, your guide to the fast-moving world of building scalable and maintainable AI systems. Seamless data integration into AI applications often falls short, leading many to adopt RAG methods, which come with high costs, complexity, and limited scalability. Cognee offers a better solution with its open-source semantic memory engine that automates data ingestion and storage, creating dynamic knowledge graphs from your data. Cognee enables AI agents to understand the meaning of your data, resulting in accurate responses at a lower cost. Take full control of your data in LLM apps without unnecessary overhead. Visit aiengineeringpodcast.com/cognee to learn more and elevate your AI apps and agents. Your host is Tobias Macey and today I'm interviewing Ali Golshan about the role of synthetic data in building, scaling, and improving AI systems.

Interview: Introduction. How did you get involved in machine learning? Can you start by summarizing what you mean by synthetic data in the context of this conversation? How have the capabilities around the generation and integration of synthetic data changed across the pre- and post-LLM timelines? What are the motivating factors that would lead a team or organization to invest in synthetic data generation capacity? What are the main methods used for generation of synthetic data sets? How does that differ across open-source and commercial offerings? From a surface level it seems like synthetic data generation is a straight-forward exercise that can be owned by an engineering team. What are the main "gotchas" that crop up as you move along the adoption curve? What are the scaling characteristics of synthetic data generation as you go from prototype to production scale? Related sub-topics: domains/data types that are inappropriate for synthetic use cases (e.g. scientific or educational content); managing appropriate distribution of values in the generation process. Beyond just producing large volumes of semi-random data (structured or otherwise), what are the other processes involved in the workflow of synthetic data and its integration into the different systems that consume it? What are the most interesting, innovative, or unexpected ways that you have seen synthetic data generation used? What are the most interesting, unexpected, or challenging lessons that you have learned while working on synthetic data generation? When is synthetic data the wrong choice? What do you have planned for the future of synthetic data capabilities at Gretel?

Contact Info: LinkedIn

Parting Question: From your perspective, what are the biggest gaps in tooling, technology, or training for AI systems today?

Closing Announcements: Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email hosts@aiengineeringpodcast.com with your story. To help other people find the show please leave a review on iTunes and tell your friends and co-workers.

Links: Gretel, Hadoop, LSTM == Long Short-Term Memory, GAN == Generative Adversarial Network, Textbooks Are All You Need (MSFT paper), Illumina

The intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra / CC BY-SA 3.0
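
To make the workflow described in this episode more concrete, the sketch below shows the general shape of LLM-based synthetic data generation: prompt a model for schema-conforming records, then validate them before they touch any downstream system. This is a generic illustration, not Gretel's API; the model name, schema, and validation rules are assumptions.

```python
# Generic sketch of LLM-driven synthetic data generation with cheap validation.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set; provider choice is incidental

PROMPT = (
    "Generate a JSON object with a 'records' key containing 5 synthetic patient "
    "visits. Fields: age (int 18-90), diagnosis_code (string), visit_cost (float)."
)

def generate_batch() -> list[dict]:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": PROMPT}],
        response_format={"type": "json_object"},
    )
    return json.loads(resp.choices[0].message.content)["records"]

def plausible(rec: dict) -> bool:
    # Structural sanity check; real pipelines also measure statistical fidelity
    # against the source data and screen for privacy leakage.
    return isinstance(rec.get("age"), int) and 18 <= rec["age"] <= 90

batch = [r for r in generate_batch() if plausible(r)]
print(f"kept {len(batch)} synthetic records")
```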

Summary: In this episode of the AI Engineering podcast Viraj Mehta, CTO and co-founder of TensorZero, talks about the use of LLM gateways for managing interactions between client-side applications and various AI models. He highlights the benefits of using such a gateway, including standardized communication, credential management, and potential features like request-response caching and audit logging. The conversation also explores TensorZero's architecture and functionality in optimizing AI applications by managing structured data inputs and outputs, as well as the challenges and opportunities in automating prompt generation and maintaining interaction history for optimization purposes.

Announcements: Hello and welcome to the AI Engineering Podcast, your guide to the fast-moving world of building scalable and maintainable AI systems. Seamless data integration into AI applications often falls short, leading many to adopt RAG methods, which come with high costs, complexity, and limited scalability. Cognee offers a better solution with its open-source semantic memory engine that automates data ingestion and storage, creating dynamic knowledge graphs from your data. Cognee enables AI agents to understand the meaning of your data, resulting in accurate responses at a lower cost. Take full control of your data in LLM apps without unnecessary overhead. Visit aiengineeringpodcast.com/cognee to learn more and elevate your AI apps and agents. Your host is Tobias Macey and today I'm interviewing Viraj Mehta about the purpose of an LLM gateway and his work on TensorZero.

Interview: Introduction. How did you get involved in machine learning? What is an LLM gateway? What purpose does it serve in an AI application architecture? What are some of the different features and capabilities that an LLM gateway might be expected to provide? Can you describe what TensorZero is and the story behind it? What are the core problems that you are trying to address with TensorZero and for whom? One of the core features that you are offering is management of interaction history. How does this compare to the "memory" functionality offered by e.g. LangChain, Cognee, Mem0, etc.? How does the presence of TensorZero in an application architecture change the ways that an AI engineer might approach the logic and control flows in a chat-based or agent-oriented project? Can you describe the workflow of building with TensorZero and some specific examples of how it feeds back into the performance/behavior of an LLM? What are some of the ways in which the addition of TensorZero or another LLM gateway might have a negative effect on the design or operation of an AI application? What are the most interesting, innovative, or unexpected ways that you have seen TensorZero used? What are the most interesting, unexpected, or challenging lessons that you have learned while working on TensorZero? When is TensorZero the wrong choice? What do you have planned for the future of TensorZero?

Contact Info: LinkedIn

Parting Question: From your perspective, what are the biggest gaps in tooling, technology, or training for AI systems today?

Closing Announcements: Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email hosts@aiengineeringpodcast.com with your story. To help other people find the show please leave a review on iTunes and tell your friends and co-workers.

Links: TensorZero, LLM Gateway, LiteLLM, OpenAI, Google Vertex, Anthropic, Reinforcement Learning, Tokamak Reactor, Viraj RLHF Paper, Contextual Dueling Bandits, Direct Preference Optimization, Partially Observable Markov Decision Process, DSPy, PyTorch, Cognee, Mem0, LangGraph, Douglas Hofstadter, OpenAI Gym, OpenAI o1, OpenAI o3, Chain Of Thought

The intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra / CC BY-SA 3.0
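
The gateway pattern this episode centers on is simple to picture in code: the application speaks one standardized API to a gateway it controls, and the gateway owns provider credentials, routing, caching, and audit logging. The sketch below shows that shape generically; the base URL, port, and model alias are assumptions, not TensorZero specifics.

```python
# Hedged sketch of the LLM gateway pattern: the app talks to a local gateway
# through an OpenAI-compatible client; real provider keys never live in the app.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:3000/v1",  # hypothetical local gateway endpoint
    api_key="not-used-by-the-app",        # provider credentials stay in the gateway
)

reply = client.chat.completions.create(
    model="support-chat",  # a gateway-side alias/function, not a vendor model name
    messages=[{"role": "user", "content": "Hello through the gateway"}],
)
print(reply.choices[0].message.content)
```

Because every request and response flows through one place, adding caching, fallback routing, or interaction logging later does not require touching application code.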
SummaryIn this episode of the AI Engineering Podcast Ron Green, co-founder and CTO of KungFu AI, talks about the evolving landscape of AI systems and the challenges of harnessing generative AI engines. Ron shares his insights on the limitations of large language models (LLMs) as standalone solutions and emphasizes the need for human oversight, multi-agent systems, and robust data management to support AI initiatives. He discusses the potential of domain-specific AI solutions, RAG approaches, and mixture of experts to enhance AI capabilities while addressing risks. The conversation also explores the evolving AI ecosystem, including tooling and frameworks, strategic planning, and the importance of interpretability and control in AI systems. Ron expresses optimism about the future of AI, predicting significant advancements in the next 20 years and the integration of AI capabilities into everyday software applications.AnnouncementsHello and welcome to the AI Engineering Podcast, your guide to the fast-moving world of building scalable and maintainable AI systemsSeamless data integration into AI applications often falls short, leading many to adopt RAG methods, which come with high costs, complexity, and limited scalability. Cognee offers a better solution with its open-source semantic memory engine that automates data ingestion and storage, creating dynamic knowledge graphs from your data. Cognee enables AI agents to understand the meaning of your data, resulting in accurate responses at a lower cost. Take full control of your data in LLM apps without unnecessary overhead. Visit aiengineeringpodcast.com/cognee to learn more and elevate your AI apps and agents. Your host is Tobias Macey and today I'm interviewing Ron Green about the wheels that we need for harnessing the power of the generative AI engineInterviewIntroductionHow did you get involved in machine learning?Can you describe what you see as the main shortcomings of LLMs as a stand-alone solution (to anything)?The most established vehicle for harnessing LLM capabilities is the RAG pattern. What are the main limitations of that as a "product" solution?The idea of multi-agent or mixture-of-experts systems is a more sophisticated approach that is gaining some attention. What do you see as the pro/con conversation around that pattern?Beyond the system patterns that are being developed there is also a rapidly shifting ecosystem of frameworks, tools, and point solutions that plug in to various points of the AI lifecycle. How does that volatility hinder the adoption of generative AI in different contexts?In addition to the tooling, the models themselves are rapidly changing. How much does that influence the ways that organizations are thinking about whether and when to test the waters of AI?Continuing on the metaphor of LLMs and engines and the need for vehicles, where are we on the timeline in relation to the Model T Ford?What are the vehicle categories that we still need to design and develop? (e.g. sedans, mini-vans, freight trucks, etc.)The current transformer architecture is starting to reach scaling limits that lead to diminishing returns. 
Given your perspective as an industry veteran, what are your thoughts on the future trajectory of AI model architectures?What is the ongoing role of regression style ML in the landscape of generative AI?What are the most interesting, innovative, or unexpected ways that you have seen LLMs used to power a "vehicle"?What are the most interesting, unexpected, or challenging lessons that you have learned while working in this phase of AI?When is generative AI/LLMs the wrong choice?Contact InfoLinkedInParting QuestionFrom your perspective, what are the biggest gaps in tooling, technology, or training for AI systems today?Closing AnnouncementsThank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.If you've learned something or tried out a project from the show then tell us about it! Email hosts@aiengineeringpodcast.com with your story.To help other people find the show please leave a review on iTunes and tell your friends and co-workers.LinksKungfu.aiLlama open generative AI modelsChatGPTCopilotCursorRAG == Retrieval Augmented GenerationPodcast EpisodeMixture of ExpertsDeep LearningRandom ForestSupervised LearningActive LearningYann LeCunRLHF == Reinforcement Learning from Human FeedbackModel T FordMamba selective state spaceLiquid NetworkChain of thoughtOpenAI o1Marvin MinskyVon Neumann ArchitectureAttention Is All You NeedMultilayer PerceptronDot ProductDiffusion ModelGaussian NoiseAlphaFold 3AnthropicSparse AutoencoderThe intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra/CC BY-SA 3.0
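As a companion to the RAG discussion in this episode, the following is a minimal, self-contained sketch of the retrieval step: embed a small corpus and a query, rank by similarity, and assemble a grounded prompt. The hashed bag-of-words embedding is a deliberately crude stand-in for a real embedding model, used only so the example runs without external services.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stand-in embedding: a hashed bag-of-words vector.
    In a real system this would be a call to an embedding model."""
    vec = np.zeros(256)
    for token in text.lower().split():
        vec[hash(token) % 256] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

documents = [
    "The Model T was the first mass-produced automobile.",
    "Retrieval augmented generation grounds an LLM in external documents.",
    "Mixture-of-experts routes each token to a subset of specialist networks.",
]
doc_vectors = np.stack([embed(d) for d in documents])

def retrieve(query: str, k: int = 2) -> list[str]:
    scores = doc_vectors @ embed(query)          # cosine similarity (vectors are unit length)
    top = np.argsort(scores)[::-1][:k]
    return [documents[i] for i in top]

query = "How does RAG keep an LLM grounded?"
context = "\n".join(retrieve(query))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)  # this prompt would then be sent to the model
```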
SummaryIn this episode of the AI Engineering Podcast Jim Olsen, CTO of ModelOp, talks about the governance of generative AI models and applications. Jim shares his extensive experience in software engineering and machine learning, highlighting the importance of governance in high-risk applications like healthcare. He explains that governance is more about the use cases of AI models rather than the models themselves, emphasizing the need for proper inventory and monitoring to ensure compliance and mitigate risks. The conversation covers challenges organizations face in implementing AI governance policies, the importance of technical controls for data governance, and the need for ongoing monitoring and baselines to detect issues like PII disclosure and model drift. Jim also discusses the balance between innovation and regulation, particularly with evolving regulations like those in the EU, and provides valuable perspectives on the current state of AI governance and the need for robust model lifecycle management.AnnouncementsHello and welcome to the AI Engineering Podcast, your guide to the fast-moving world of building scalable and maintainable AI systemsYour host is Tobias Macey and today I'm interviewing Jim Olsen about governance of your generative AI models and applicationsInterviewIntroductionHow did you get involved in machine learning?Can you describe what governance means in the context of generative AI models? (e.g. governing the models, their applications, their outputs, etc.)Governance is typically a hybrid endeavor of technical and organizational policy creation and enforcement. From the organizational perspective, what are some of the difficulties that teams are facing in understanding what those policies need to encompass?How much familiarity with the capabilities and limitations of the models is necessary to engage productively with policy debates?The regulatory landscape around AI is still very nascent. Can you give an overview of the current state of legal burden related to AI?What are some of the regulations that you consider necessary but as-of-yet absent?Data governance as a practice typically relates to controls over who can access what information and how it can be used. The controls for those policies are generally available in the data warehouse, business intelligence, etc. What are the different dimensions of technical controls that are needed in the application of generative AI systems?How much of the controls that are present for governance of analytical systems are applicable to the generative AI arena?What are the elements of risk that change when considering internal vs. consumer facing applications of generative AI?How do the modalities of the AI models impact the types of risk that are involved? (e.g. language vs. vision vs. audio)What are some of the technical aspects of the AI tools ecosystem that are in greatest need of investment to ease the burden of risk and validation of model use?What are the most interesting, innovative, or unexpected ways that you have seen AI governance implemented?What are the most interesting, unexpected, or challenging lessons that you have learned while working on AI governance?What are the technical, social, and organizational trends of AI risk and governance that you are monitoring?Contact InfoLinkedInParting QuestionFrom your perspective, what are the biggest gaps in tooling, technology, or training for AI systems today?Closing AnnouncementsThank you for listening! Don't forget to check out our other shows. 
The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.If you've learned something or tried out a project from the show then tell us about it! Email hosts@aiengineeringpodcast.com with your story.To help other people find the show please leave a review on iTunes and tell your friends and co-workers.LinksModelOpFoundation ModelsGDPREU AI RegulationLlama 2AWS BedrockShadow ITRAG == Retrieval Augmented GenerationPodcast EpisodeNvidia NEMOLangChainShapley ValuesGibberish DetectionThe intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra/CC BY-SA 3.0
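To make the monitoring discussion concrete, here is a toy sketch of one kind of technical control described in the episode: screening generated text for PII before it is returned or logged. The regular expressions are illustrative assumptions only, not ModelOp's implementation; production-grade detectors are substantially more robust.

```python
import re

# Illustrative patterns only -- production systems use far more robust detectors.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def screen_output(text: str) -> tuple[str, list[str]]:
    """Redact suspected PII and report which categories fired."""
    findings = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(text):
            findings.append(label)
            text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text, findings

safe_text, findings = screen_output("Contact Jane at jane.doe@example.com or 555-867-5309.")
print(findings)   # ['email', 'phone']
print(safe_text)
```

A governance pipeline would record these findings against a baseline so that a sudden rise in redactions surfaces as a monitored incident rather than a silent failure.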
SummaryIn this episode of the AI Engineering Podcast, Vasilije Markovic talks about enhancing Large Language Models (LLMs) with memory to improve their accuracy. He discusses the concept of memory in LLMs, which involves managing context windows to enhance reasoning without the high costs of traditional training methods. He explains the challenges of forgetting in LLMs due to context window limitations and introduces the idea of hierarchical memory, where immediate retrieval and long-term information storage are balanced to improve application performance. Vasilije also shares his work on Cognee, a tool he's developing to manage semantic memory in AI systems, and discusses its potential applications beyond its core use case. He emphasizes the importance of combining cognitive science principles with data engineering to push the boundaries of AI capabilities and shares his vision for the future of AI systems, highlighting the role of personalization and the ongoing development of Cognee to support evolving AI architectures.AnnouncementsHello and welcome to the AI Engineering Podcast, your guide to the fast-moving world of building scalable and maintainable AI systemsYour host is Tobias Macey and today I'm interviewing Vasilije Markovic about adding memory to LLMs to improve their accuracyInterviewIntroductionHow did you get involved in machine learning?Can you describe what "memory" is in the context of LLM systems?What are the symptoms of "forgetting" that manifest when interacting with LLMs?How do these issues manifest between single-turn vs. multi-turn interactions?How does the lack of hierarchical and evolving memory limit the capabilities of LLM systems?What are the technical/architectural requirements to add memory to an LLM system/application?How does Cognee help to address the shortcomings of current LLM/RAG architectures?Can you describe how Cognee is implemented?Recognizing that it has only existed for a short time, how have the design and scope of Cognee evolved since you first started working on it?What are the data structures that are most useful for managing the memory structures?For someone who wants to incorporate Cognee into their LLM architecture, what is involved in integrating it into their applications?How does it change the way that you think about the overall requirements for an LLM application?For systems that interact with multiple LLMs, how does Cognee manage context across those systems? (e.g. different agents for different use cases)There are other systems that are being built to manage user personalization in LLM applications, how do the goals of Cognee relate to those use cases? (e.g. Mem0 - https://github.com/mem0ai/mem0)What are the unknowns that you are still navigating with Cognee?What are the most interesting, innovative, or unexpected ways that you have seen Cognee used?What are the most interesting, unexpected, or challenging lessons that you have learned while working on Cognee?When is Cognee the wrong choice?What do you have planned for the future of Cognee?Contact InfoLinkedInParting QuestionFrom your perspective, what are the biggest gaps in tooling, technology, or training for AI systems today?Closing AnnouncementsThank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. 
Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.If you've learned something or tried out a project from the show then tell us about it! Email hosts@aiengineeringpodcast.com with your story.To help other people find the show please leave a review on iTunes and tell your friends and co-workers.LinksCogneeMontenegroCatastrophic ForgettingMulti-Turn InteractionRAG == Retrieval Augmented GenerationPodcast EpisodeGraphRAGPodcast EpisodeLong-term memoryShort-term memoryLangchainLlamaIndexHaystackdltData Engineering Podcast EpisodePineconePodcast EpisodeAgentic RAGAirflowDAG == Directed Acyclic GraphFalkorDBNeo4JPydanticAWS ECSAWS SNSAWS SQSAWS LambdaLLM As JudgeMem0QDrantLanceDBDuckDBThe intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra/CC BY-SA 3.0
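The hierarchical-memory idea discussed above can be illustrated with a toy two-tier store: recent turns kept verbatim in a short window, older turns compressed into a long-term collection that is searched at question time. This is a sketch of the concept only, not Cognee's API or implementation; the truncation and keyword search below stand in for LLM summarization and graph/embedding-based retrieval.

```python
from collections import deque

class TwoTierMemory:
    """Toy hierarchical memory: a short verbatim window plus a long-term store.
    A real system would summarize with an LLM and index into a graph or vector
    store; here 'summarize' is a crude truncation stand-in."""

    def __init__(self, window: int = 4):
        self.recent = deque(maxlen=window)   # short-term: last few turns verbatim
        self.long_term: list[str] = []       # long-term: compressed older turns

    def add_turn(self, role: str, text: str) -> None:
        if len(self.recent) == self.recent.maxlen:
            oldest = self.recent[0]
            self.long_term.append(oldest[:80])  # stand-in for an LLM-written summary
        self.recent.append(f"{role}: {text}")

    def build_context(self, query: str) -> str:
        # Naive keyword retrieval over long-term memory; real systems use embeddings.
        words = query.lower().split()
        relevant = [m for m in self.long_term if any(w in m.lower() for w in words)]
        return "\n".join(relevant + list(self.recent))

memory = TwoTierMemory(window=2)
for msg in ["My name is Ada.", "I work on reactors.", "I prefer short answers.", "What's new?"]:
    memory.add_turn("user", msg)
print(memory.build_context("what is my name"))
```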
SummaryIn this episode of the AI Engineering Podcast, Tanner Burson, VP of Engineering at Prismatic, talks about the evolving impact of generative AI on software developers. Tanner shares his insights from engineering leadership and data engineering initiatives, discussing how AI is blurring the lines of developer roles and the strategic value of AI in software development. He explores the current landscape of AI tools, such as GitHub's Copilot, and their influence on productivity and workflow, while also touching on the challenges and opportunities presented by AI in code generation, review, and tooling. Tanner emphasizes the need for human oversight to maintain code quality and security, and offers his thoughts on the future of AI in development, the importance of balancing innovation with practicality, and the evolving role of engineers in an AI-driven landscape.AnnouncementsHello and welcome to the AI Engineering Podcast, your guide to the fast-moving world of building scalable and maintainable AI systemsYour host is Tobias Macey and today I'm interviewing Tanner Burson about the impact of generative AI on software developersInterviewIntroductionHow did you get involved in machine learning?Can you describe what types of roles and work you consider encompassed by the term "developers" for the purpose of this conversation?How does your work at Prismatic give you visibility and insight into the effects of AI on developers and their work?There have been many competing narratives about AI and how much of the software development process it is capable of encompassing. What is your top-level view on what the long-term impact on the job prospects of software developers will be as a result of generative AI?There are many obvious examples of utilities powered by generative AI that are focused on software development. What do you see as the categories or specific tools that are most impactful to the development cycle?In what ways do you find familiarity with/understanding of LLM internals useful when applying them to development processes?As an engineering leader, how are you evaluating and guiding your team on the use of AI powered tools?What are some of the risks that you are guarding against as a result of AI in the development process?What are the most interesting, innovative, or unexpected ways that you have seen AI used in the development process?What are the most interesting, unexpected, or challenging lessons that you have learned while using AI for software development?When is AI the wrong choice for a developer?What are your projections for the near to medium term impact on the developer experience as a result of generative AI?Contact InfoLinkedInParting QuestionFrom your perspective, what are the biggest gaps in tooling, technology, or training for AI systems today?Closing AnnouncementsThank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.If you've learned something or tried out a project from the show then tell us about it! 
Email hosts@aiengineeringpodcast.com with your story.To help other people find the show please leave a review on iTunes and tell your friends and co-workers.LinksPrismaticGoogle AI Development announcementTabninePodcast EpisodeGitHub CopilotPlandexOpenAI APIAmazon QOllamaHuggingface TransformersAnthropicLangchainLlamaindexHaystackLlama 3.2Qwen2.5-CoderThe intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra/CC BY-SA 3.0
SummaryMachine learning workflows have long been complex and difficult to operationalize. They are often characterized by a period of research, resulting in an artifact that gets passed to another engineer or team to prepare for running in production. The MLOps category of tools has tried to build a new set of utilities to reduce that friction, but has instead introduced a new barrier at the team and organizational level. Donny Greenberg took the lessons that he learned on the PyTorch team at Meta and created Runhouse. In this episode he explains how, by reducing the number of opinions in the framework, he has also reduced the complexity of moving from development to production for ML systems.AnnouncementsHello and welcome to the AI Engineering Podcast, your guide to the fast-moving world of building scalable and maintainable AI systemsYour host is Tobias Macey and today I'm interviewing Donny Greenberg about Runhouse and the current state of ML infrastructureInterviewIntroductionHow did you get involved in machine learning?What are the core elements of infrastructure for ML and AI?How has that changed over the past ~5 years?For the past few years the MLOps and data engineering stacks were built and managed separately. How does the current generation of tools and product requirements influence the present and future approach to those domains?There are numerous projects that aim to bridge the complexity gap in running Python and ML code from your laptop up to distributed compute on clouds (e.g. Ray, Metaflow, Dask, Modin, etc.). How do you view the decision process for teams trying to understand which tool(s) to use for managing their ML/AI developer experience?Can you describe what Runhouse is and the story behind it?What are the core problems that you are working to solve?What are the main personas that you are focusing on? (e.g. data scientists, DevOps, data engineers, etc.)How does Runhouse factor into collaboration across skill sets and teams?Can you describe how Runhouse is implemented?How has the focus on developer experience informed the way that you think about the features and interfaces that you include in Runhouse?How do you think about the role of Runhouse in the integration with the AI/ML and data ecosystem?What does the workflow look like for someone building with Runhouse?What is involved in managing the coordination of compute and data locality to reduce networking costs and latencies?What are the most interesting, innovative, or unexpected ways that you have seen Runhouse used?What are the most interesting, unexpected, or challenging lessons that you have learned while working on Runhouse?When is Runhouse the wrong choice?What do you have planned for the future of Runhouse?What is your vision for the future of infrastructure and developer experience in ML/AI?Contact InfoLinkedInParting QuestionFrom your perspective, what are the biggest gaps in tooling, technology, or training for AI systems today?Closing AnnouncementsThank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.If you've learned something or tried out a project from the show then tell us about it! 
Email hosts@aiengineeringpodcast.com with your story.To help other people find the show please leave a review on iTunes and tell your friends and co-workers.LinksRunhouseGitHubPyTorchPodcast.__init__ EpisodeKubernetesBin PackingLinear RegressionGradient Boosted Decision TreeDeep LearningTransformer ArchitectureSlurmSagemakerVertex AIMetaflowPodcast.__init__ EpisodeMLFlowDaskData Engineering Podcast EpisodeRayPodcast.__init__ EpisodeSparkDatabricksSnowflakeArgoCDPyTorch DistributedHorovodLlama.cppPrefectData Engineering Podcast EpisodeAirflowOOM == Out of MemoryWeights and BiasesKnativeBERT language modelThe intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra/CC BY-SA 3.0
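One way to picture the development-to-production gap discussed in this episode is a single function that can run locally or be handed to remote compute without being rewritten. The sketch below uses a process pool as a stand-in for a cluster and an environment variable as a stand-in for deployment configuration; it illustrates the dispatch idea only and is not Runhouse's API.

```python
import os
from concurrent.futures import ProcessPoolExecutor

def train_model(n_samples: int) -> float:
    """Ordinary Python function -- the unit of work we want to run anywhere."""
    return sum(i * 0.001 for i in range(n_samples))

def dispatch(fn, *args):
    """Run fn locally or hand it to a 'cluster' depending on configuration.
    The ProcessPoolExecutor is a stand-in for remote compute; an infrastructure
    framework would instead serialize the function and execute it on a cluster."""
    if os.environ.get("RUN_REMOTE") == "1":
        with ProcessPoolExecutor(max_workers=1) as pool:
            return pool.submit(fn, *args).result()
    return fn(*args)

if __name__ == "__main__":
    # The same code path serves a laptop run and a "production" run; only config changes.
    print(dispatch(train_model, 10_000))
```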
SummaryWith the growth of vector data as a core element of any AI application comes the need to keep those vectors up to date. When you go beyond prototypes and into production you will need a way to continue experimenting with new embedding models, chunking strategies, etc. You will also need a way to keep the embeddings up to date as your data changes. The team at Timescale created the pgai Vectorizer toolchain to let you manage that work in your Postgres database. In this episode Avthar Sewrathan explains how it works and how you can start using it today.AnnouncementsHello and welcome to the AI Engineering Podcast, your guide to the fast-moving world of building scalable and maintainable AI systemsYour host is Tobias Macey and today I'm interviewing Avthar Sewrathan about the pgai extension for Postgres and how to run your AI workflows in your databaseInterviewIntroductionHow did you get involved in machine learning?Can you describe what pgai Vectorizer is and the story behind it?What are the benefits of using the database engine to execute AI workflows?What types of operations does pgai Vectorizer enable?What are some common generative AI patterns that can't be done with pgai?AI applications require a large and complex set of dependencies. How does that work with pgai Vectorizer and the Python runtime in Postgres?What are some of the other challenges or system pressures that are introduced by running these AI workflows in the database context?Can you describe how the pgai extension is implemented?With the rapid pace of change in the AI ecosystem, how has that informed the set of features that make sense in pgai Vectorizer and won't require rebuilding in 6 months?Can you describe the workflow of using pgai Vectorizer to build and maintain a set of embeddings in their database?How can pgai Vectorizer help with the situation of migrating to a new embedding model and having to reindex all of the content?How do you think about the developer experience for people who are working with pgai Vectorizer, as compared to using e.g. LangChain, LlamaIndex, etc.?What are the most interesting, innovative, or unexpected ways that you have seen pgai Vectorizer used?What are the most interesting, unexpected, or challenging lessons that you have learned while working on pgai Vectorizer?When is pgai Vectorizer the wrong choice?What do you have planned for the future of pgai Vectorizer?Contact InfoLinkedInWebsiteParting QuestionFrom your perspective, what are the biggest gaps in tooling, technology, or training for AI systems today?Closing AnnouncementsThank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.If you've learned something or tried out a project from the show then tell us about it! Email hosts@aiengineeringpodcast.com with your story.To help other people find the show please leave a review on iTunes and tell your friends and co-workers.LinksTimescalepgaiTransformer architecture for deep learningNeural NetworkspgvectorpgvectorscaleModalRAG == Retrieval Augmented GenerationSemantic SearchOllamaGraphRAGagensgraphLangChainLlamaIndexHaystackIVFFlatHNSWDiskANNRepl.it AgentBM25TSVectorParadeDBThe intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra/CC BY-SA 3.0
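To anchor the discussion of keeping embeddings current in the database, here is the underlying Postgres-plus-pgvector pattern that tooling in this space automates: a source-text table with a vector column that is populated from an embedding model and queried by distance. This is a hand-rolled sketch with a toy vector and a placeholder connection string, not the pgai Vectorizer API itself, which exists precisely to take over this bookkeeping as source rows change.

```python
import psycopg2  # assumes a running Postgres instance with the pgvector extension available

conn = psycopg2.connect("dbname=app user=app password=app host=localhost")  # placeholder DSN
cur = conn.cursor()

# The pattern a vectorizer automates: source text plus a vector column kept alongside it.
cur.execute("CREATE EXTENSION IF NOT EXISTS vector")
cur.execute("""
    CREATE TABLE IF NOT EXISTS docs (
        id bigserial PRIMARY KEY,
        body text NOT NULL,
        embedding vector(3)   -- toy dimension; real models use hundreds or thousands
    )
""")

# In a real pipeline the embedding comes from a model; here it is a hand-written toy vector.
cur.execute(
    "INSERT INTO docs (body, embedding) VALUES (%s, %s::vector)",
    ("Postgres can store and search embeddings.", "[0.1, 0.2, 0.3]"),
)

# Nearest-neighbour search by cosine distance.
cur.execute(
    "SELECT body FROM docs ORDER BY embedding <=> %s::vector LIMIT 5",
    ("[0.1, 0.25, 0.3]",),
)
print(cur.fetchall())
conn.commit()
```

When the embedding model or chunking strategy changes, every row's vector must be regenerated; automating that refresh inside the database is the gap the episode describes.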