Tech on the Rocks

Author: Kostas, Nitay


Description

Join Kostas and Nitay as they speak with amazingly smart people who are building the next generation of technology, from hardware to cloud compute.

Tech on the Rocks is for people who are curious about the foundations of the tech industry.

Recorded primarily from our offices and homes, but one day we hope to record in a bar somewhere.

Cheers!
7 Episodes
In this episode, we chat with Nitzan Shapira, co-founder and former CEO of Epsagon, which was acquired by Cisco in 2021. We explore Nitzan's journey from working in cybersecurity to building an observability platform for cloud applications, particularly one focused on serverless architectures. We learn about the early days of serverless adoption, the challenges of making observability tools developer-friendly, and why distributed tracing was a key differentiator for Epsagon. We discuss the evolution of observability tools, the future impact of AI on both observability and software development, and the changing landscape of serverless computing. Finally, we hear Nitzan's current perspective on enterprise AI adoption from his role at Cisco, where he helps evaluate and build new AI-focused business lines.

Chapters
03:17 Transition from Security to Observability
09:52 Exploring Ideas and Choosing Serverless
16:43 Adoption of Distributed Tracing
20:54 The Future of Observability
25:26 Building a Product that Developers Love
31:03 Challenges in Observability and Data Costs
32:47 The Excitement and Evolution of Serverless
35:44 Serverless as a Horizontal Platform
37:15 The Future of Serverless and No-Code/Low-Code Tools
38:15 Technical Limits and the Future of Serverless
40:38 Navigating Near-Death Moments and Go-to-Market Challenges
48:36 Cisco's Gen AI Ecosystem and New Business Lines
50:25 The State of the AI Ecosystem and Enterprise Adoption
53:54 Using AI to Enhance Engineering and Product Development
55:02 Using AI in Go-to-Market Strategies
From GPU computing pioneer to Kubernetes architect, Brian Grant takes us on a fascinating journey through his career at the forefront of systems engineering. In this episode, we explore his early work on GPU compilers in the pre-CUDA era, where he tackled unique challenges in high-performance computing when graphics cards weren't yet designed for general computation. Brian then shares insights from his time at Google, where he helped develop Borg and later became the original lead architect of Kubernetes. He explains key architectural decisions that shaped Kubernetes, from its extensible resource model to its approach to service discovery, and why they chose to create a rich set of abstractions rather than a minimal interface. The conversation concludes with Brian's thoughts on standardization challenges in cloud infrastructure and his vision for moving beyond infrastructure as code, offering valuable perspective on both the history and future of distributed systems.

Links
Brian Grant LI

Chapters
00:00 Introduction and Background
03:11 Early Work in High-Performance Computing
06:21 Challenges of Building Compilers for GPUs
13:14 Influential Innovations in Compilers
31:46 The Future of Compilers
33:11 The Rise of Niche Programming Languages
34:01 The Evolution of Google's Borg and Kubernetes
39:06 Challenges of Managing Applications in a Dynamically Scheduled Environment
48:12 The Need for Standardization in Application Interfaces and Management Systems
01:00:55 Driving Network Effects and Creating Cohesive Ecosystems
In this episode, we chat with JP, creator of FizzBee, about formal methods and their application in software engineering. We explore the differences between coding and engineering, discussing how formal methods can improve system design and reliability. JP shares insights from his time at Google and explains why tools like FizzBee are crucial for distributed systems. We delve into the challenges of adopting formal methods in industry, the potential of FizzBee to make these techniques more accessible, and how it compares to other tools like TLA+. Finally, we discuss the future of software development, including the role of LLMs in code generation and the ongoing importance of human engineers in system design.

Links
FizzBee
FizzBee Github Repo
FizzBee Blog

Chapters
00:00 Introduction and Overview
02:42 JP's Experience at Google and the Growth of the Company
04:51 The Difference Between Engineers and Coders
06:41 The Importance of Rigor and Quality in Engineering
10:08 The Limitations of QA and the Need for Formal Methods
14:00 The Role of Best Practices in Software Engineering
14:56 Design Specification Languages for System Correctness
21:43 The Applicability of Formal Methods in Distributed Systems
31:20 Getting Started with FizzBee: A Practical Example
36:06 Common Assumptions and Misconceptions in Distributed Systems
43:23 The Role of FizzBee in the Design Phase
48:04 The Future of FizzBee: LLMs and Code Generation
58:20 Getting Started with FizzBee: Tutorials and Online Playground
In this episode, we chat with Dean Pleban, CEO of DagsHub, about machine learning operations. We explore the differences between DevOps and MLOps, focusing on data management and experiment tracking. Dean shares insights on versioning the various components of ML projects and discusses the importance of user experience in MLOps tools. We also touch on DagsHub's integration of AI into their product and Dean's vision for the future of AI and machine learning in industry.

Links
DagsHub
The MLOps Podcast
Dean on LI

Chapters
00:00 Introduction and Background
03:03 Challenges of Managing Machine Learning Projects
10:00 The Concept of Experiments in Machine Learning
12:51 Data Curation and Validation for High-Quality Data
27:07 Connecting the Components of Machine Learning Projects with DagsHub
29:12 The Importance of Data and Clear Interfaces
43:29 Incorporating Machine Learning into DagsHub
51:27 The Future of ML and AI
In this episode, Kostas and Nitay are joined by Amey Chaugule and Matt Green, co-founders of Denormalized. They delve into how Denormalized is building an embedded stream processing engine, a "DuckDB for streaming," to simplify real-time data workloads. Drawing from their extensive backgrounds at companies like Uber, Lyft, Stripe, and Coinbase, Amey and Matt discuss the challenges of existing stream processing systems like Spark, Flink, and Kafka. They explain how their approach leverages Apache DataFusion to create a single-node solution that reduces the complexities inherent in distributed systems. The conversation explores topics such as developer experience, fault tolerance, state management, and the future of stream processing interfaces. Whether you're a data engineer, an application developer, or simply interested in the evolution of real-time data infrastructure, this episode offers valuable insights into making stream processing more accessible and efficient.

Contacts & Links
Amey Chaugule
Matt Green
Denormalized
Denormalized Github Repo

Chapters
00:00 Introduction and Background
12:03 Building an Embedded Stream Processing Engine
18:39 The Need for Stream Processing in the Current Landscape
22:45 Interfaces for Interacting with Stream Processing Systems
26:58 The Target Persona for Stream Processing Systems
31:23 Simplifying Stream Processing Workloads and State Management
34:50 State and Buffer Management
37:03 Distributed Computing vs. Single-Node Systems
42:28 Cost Savings with Single-Node Systems
47:04 The Power and Extensibility of DataFusion
55:26 Integrating Data Store with DataFusion
57:02 The Future of Streaming Systems
In this episode, we dive deep into the future of data infrastructure for AI and ML with Nikhil Simha and Varant Zanoyan, two seasoned engineers from Airbnb and Facebook. Nikhil and Varant share their journey from building real-time data systems and ML infrastructure at tech giants to launching their own venture. The conversation explores the intricacies of designing developer-friendly APIs, the complexities of handling both batch and streaming data, and the delicate balance between customer needs and product vision in a startup environment.

Contacts & Links
Nikhil Simha
Varant Zanoyan
Chronon project

Chapters
00:00 Introduction and Past Experiences
04:38 The Challenges of Building Data Infrastructure for Machine Learning
08:01 Merging Real-Time Data Processing with Machine Learning
14:08 Backfilling New Features in Data Infrastructure
20:57 Defining Failure in Data Infrastructure
26:45 The Choice Between SQL and Data Frame APIs
34:31 The Vision for Future Improvements
38:17 Introduction to Chronon and Open Source
43:29 The Future of Chronon: New Computation Paradigms
48:38 Balancing Customer Needs and Vision
57:21 Engaging with Customers and the Open Source Community
01:01:26 Potential Use Cases and Future Directions
In this episode, we chat with Chris Riccomini about the evolution of stream processing and the challenges of building applications on streaming systems. We also chat about leaky abstractions, good and bad API designs, what Chris loves and hates about Rust, and finally about his exciting new project that involves object storage and LSMs.

Connect with Chris at:
LinkedIn
X
Blog
Materialized View Newsletter - His newsletter
The Missing README - His book
SlateDB - His latest OSS project

Chapters
00:00 Introduction and Background
04:05 The State of Stream Processing Today
08:53 The Limitations of SQL in Streaming Systems
14:00 Prioritizing the Developer Experience in Stream Processing
18:15 Improving the Usability of Streaming Systems
27:54 The Potential of State Machine Programming in Complex Systems
32:41 The Power of Rust: Compiling and Language Bindings
34:06 The Shift from Sidecar to Embedded Libraries Driven by Rust
35:49 Building an LSM on Object Storage: Cost-Effective State Management
39:47 The Unbundling and Composable Nature of Databases
47:30 The Future of Data Systems: More Companies and Focus on Metadata