Building the Backend: Data Solutions that Power Leading Organizations

Author: Travis Lawrence


Description

Welcome to the Building the Backend Podcast! We're a data podcast focused on uncovering the data technologies, processes, and patterns driving today's most successful companies. You will hear from data leaders sharing their knowledge and insights about what's working and what's not working for them. Our goal is to bring you valuable insights that will save you and your team time when building a modern data architecture in the cloud. Topics span big data, AI, ML, governance, visualization, and best practices for enabling your organization to be data-driven. If you are a chief data officer, data architect, data engineer, data analyst, or anyone else building backend data solutions, then HIT SUBSCRIBE!
43 Episodes
In this episode we speak with Justin Borgman, Chairman & CEO at Starburst, which is based on the open source Trino (formerly PrestoSQL) query engine and was recently valued at $3.35 billion after securing its Series D funding. We discuss the convergence of data warehouses and data lakes, why data lakes fail, and much more.

Top 3 takeaways:
1. The data mesh architecture is gaining adoption more quickly in Europe due to GDPR.
2. Data lakes historically had two main limitations compared to data warehouses: performance and CRUD operations. Performance has largely been resolved by query engines like Starburst, and table formats such as Apache Iceberg, Apache Hudi, and Delta Lake are closing the gap on CRUD operations.
3. The principle of a single source of truth, i.e. storing everything in a single data lake or warehouse, is not always feasible or even permitted under some regulations. Starburst bridges that gap and enables data mesh and data fabric architectures.

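As a rough sketch of the kind of federated querying Starburst/Trino enables, here is how the trino Python client can join data across two catalogs without first copying everything into a single store. The hostname, catalog, schema, and table names are hypothetical placeholders:

    import trino  # pip install trino

    # Connect to a Trino/Starburst coordinator (host and credentials are placeholders).
    conn = trino.dbapi.connect(
        host="trino.example.com",
        port=8080,
        user="analyst",
        catalog="hive",
        schema="default",
    )

    cur = conn.cursor()
    # Federate one query across two catalogs: a Hive data lake and a PostgreSQL database.
    cur.execute("""
        SELECT o.order_id, c.region
        FROM hive.sales.orders AS o
        JOIN postgresql.public.customers AS c
          ON o.customer_id = c.customer_id
        LIMIT 10
    """)
    for row in cur.fetchall():
        print(row)
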
In this episode we speak with Paul Singman, Developer Advocate at Treeverse, the company behind lakeFS. lakeFS is an open source project that turns your object storage into a Git-like repository.

Top 3 takeaways:
1. lakeFS enables use cases like debugging, where you can quickly view historical versions of your data at a specific point in time, and running ML experiments over the same set of data using branching.
2. The current data landscape is very fragmented, with many tools available. Over the coming years there will most likely be consolidation toward tools that are more open and integrated.
3. Data quality and observability, including visibility into job runs, continue to be key components of successful data lakes.

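lakeFS exposes an S3-compatible API in which the repository acts as the bucket and the branch is the first segment of the object key, so existing S3 tooling keeps working. A minimal sketch with boto3, reading the same object from two branches (the endpoint, credentials, repository, and branch names are hypothetical):

    import boto3  # pip install boto3

    # Point a standard S3 client at a lakeFS server instead of AWS (placeholder values).
    s3 = boto3.client(
        "s3",
        endpoint_url="https://lakefs.example.com",
        aws_access_key_id="LAKEFS_KEY_ID",
        aws_secret_access_key="LAKEFS_SECRET",
    )

    # The same file, addressed on two branches of the "analytics" repository.
    for branch in ("main", "experiment-1"):
        obj = s3.get_object(Bucket="analytics", Key=f"{branch}/events/2021-10-01.parquet")
        print(branch, obj["ContentLength"], "bytes")
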
In this episode we speak with Matt Topol, Vice President, Principal Software Architect @ FactSet, and dive deep into how they are taking advantage of Apache Arrow for faster processing and data access.

Top 3 value bombs:
1. Apache Arrow is an open source in-memory columnar format that provides a standard way to share and process data structures.
2. Apache Arrow Flight eliminates serialization and deserialization, which enables faster access to query results compared to traditional JDBC and ODBC interfaces.
3. Don't put all your eggs in one basket: whether you're using commercial products or open source, design a modular architecture that does not tie you down to any one piece of technology.

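A minimal sketch of fetching a result set over Arrow Flight with pyarrow; the server address and ticket contents are hypothetical. Because the wire format is Arrow's columnar format, the record batches land in memory without per-row deserialization:

    import pyarrow.flight as flight  # pip install pyarrow

    # Connect to a hypothetical Flight server fronting a query engine.
    client = flight.connect("grpc://arrow.example.com:8815")

    # A ticket identifies a result stream; here we assume the server accepts a raw query.
    ticket = flight.Ticket(b"SELECT * FROM trades LIMIT 1000")

    # do_get streams Arrow record batches; read_all() assembles them into one table.
    table = client.do_get(ticket).read_all()
    print(table.num_rows, table.schema)
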
In this episode we speak with Chad Sanderson, head of data at Convoy and early-stage startup advisor focused on data innovation, and uncover Convoy's journey to implementing Amundsen, an open source data catalog.

Top 3 value bombs:
1. Data scientists should not be spending the majority of their time trying to find the data they are interested in.
2. Amundsen is a powerful open source data catalog that integrates across your data landscape to provide visibility into your data assets and lineage.
3. Within data teams we often get lost in the features. It's important to take a step back and understand how you're impacting the bottom line of the business.

Your data team should not just be keeping the lights on; it should be building data products to support the business. In this episode we speak with Murali Bhogavalli, a data product manager, and explore what a data product manager is and how the role differs from a traditional product manager.

Top 3 value bombs:
1. Data should be looked at as a product and treated as such within the organization (i.e. agile methodologies, continuous improvement, and so on).
2. Organizations need to be not just data-driven but also data-informed. For that to happen, you need to build data literacy into your ecosystem by helping everybody understand what the data means, where it comes from, and its quality.
3. Product managers typically use data to deliver outcomes. For a data PM, data is both the deliverable and the outcome.

In this episode of Building The Backend we hear from Mark Grover, founder @ Stemma and co-creator of Amundsen. Stemma is a fully managed data catalog, powered by the leading open source data catalog, Amundsen.

Top 3 value bombs:
1. Automated data catalogs are critical for wrangling the growing data across organizations (e.g. being able to identify that of 150 columns on a table, only 10 are used downstream).
2. Tribal knowledge and context cannot be automated, so data catalogs cannot be 100% automated.
3. Amundsen is an open source data catalog originally created at Lyft; Stemma offers a managed version of it.

Help me improve the podcast by completing this 60 second survey: https://buildingthebackend.com/survey

In this episode of Building The Backend we hear from Dipti Borkar, co-founder @ Ahana, a managed service for Presto on AWS. We talk all about the data lake: how it should be structured and where the industry is going.

Top 3 value bombs:
1. Presto is an open source distributed SQL query engine originally created at Facebook. It is mainly used to run SQL queries on data lakes but can also connect to relational data stores. Ahana is a managed Presto service on AWS claiming 3x price/performance.
2. When optimizing your data lake, it's usually best to store data in Parquet or ORC rather than JSON or CSV, since they are columnar formats that can embed indexes and statistics.
3. Data lakehouses continue to gain popularity by bringing the benefits of the data lake and the data warehouse together, with the help of tools like Databricks Delta Lake and Apache Hudi.

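A minimal sketch of the Parquet point using pyarrow: converting newline-delimited JSON to a columnar file whose embedded statistics let engines like Presto prune columns and row groups instead of scanning whole files. The file names are hypothetical:

    import pyarrow.json as pj
    import pyarrow.parquet as pq  # pip install pyarrow

    # Read newline-delimited JSON into an in-memory Arrow table.
    table = pj.read_json("events.ndjson")

    # Write it as compressed Parquet; queries can now read only the columns
    # and row groups they actually need.
    pq.write_table(table, "events.parquet", compression="snappy")
    print(pq.read_metadata("events.parquet"))
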
What tools are you using for data viz? Are they low cost? One option is Apache Superset. In this episode we speak with Robert Stolz to learn more about Superset and other open source data tools.

Top 3 value bombs:
1. One popular use case for Apache Superset is embedding it within applications; because it's open source, there is a wide range of flexibility to integrate it with existing systems.
2. Apache Superset supports any source supported by the Python SQL toolkit SQLAlchemy.
3. dbt encourages a set of best practices around data development (i.e. source control and test-driven development).

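Since Superset connects to databases through SQLAlchemy, adding a new source usually comes down to installing the right driver and supplying a SQLAlchemy URI. A quick sketch of validating such a URI before registering it in Superset (credentials and host are placeholders):

    from sqlalchemy import create_engine, text  # pip install sqlalchemy

    # The same URI format you would paste into Superset's database connection form.
    engine = create_engine("postgresql://analyst:secret@db.example.com:5432/analytics")

    # A simple connectivity check.
    with engine.connect() as conn:
        print(conn.execute(text("SELECT 1")).scalar())
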
In this episode of Building The Backend we hear from Simon Crosby, CTO @ Swim, an open source edge computing platform, where we talk all about edge computing, event streaming, and much more.

Top 3 value bombs:
1. "Edge" means more than being physically located somewhere; it could also be in the cloud. It is really the closest point to where your source data is generated.
2. Continuous intelligence is a design pattern in which streaming data is tied directly into business operations.
3. Kafka continues to hold its strong position in the event streaming space.

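A minimal sketch of the continuous intelligence pattern: consume a stream and keep a running aggregate tied to an operational action, rather than landing the data first and querying it later. This uses the kafka-python client; the broker address, topic, and message shape are hypothetical:

    import json
    from collections import defaultdict

    from kafka import KafkaConsumer  # pip install kafka-python

    # Subscribe to a hypothetical sensor topic.
    consumer = KafkaConsumer(
        "sensor-readings",
        bootstrap_servers="broker.example.com:9092",
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    )

    # Update state continuously as events arrive.
    counts = defaultdict(int)
    for message in consumer:
        device = message.value["device_id"]
        counts[device] += 1
        if counts[device] % 1000 == 0:
            print(f"{device}: {counts[device]} readings so far")
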
This episode is a little different than the usual format. Instead of interviewing a data leader, I share what I consider the 12 most important principles when designing a modern data architecture. Please message me on LinkedIn with your thoughts on the show.

In this episode of Building The Backend we hear from Prukalpa Sankar, co-founder of Atlan, where we talk all about data quality and governance, common issues organizations face when implementing data quality, and much more.

Top 3 value bombs:
1. Data governance has a bad reputation. It should not be a bureaucratic, controlling process pushed from the top down.
2. Active metadata is key to modern data architectures: essentially, bringing all human- and machine-generated metadata together to derive insights.
3. One of the most difficult metadata attributes to capture is the context for the data, as this almost always requires input from humans, and tribal knowledge is often lost and goes undocumented.

This is a podcast episode you do not want to miss, with Stephen Brobst, CTO @ Teradata. We discuss all things data warehouses, the shift to the distributed cloud, and key principles for implementing successful DWs.

Top 3 value bombs:
1. Large organizations are shifting to distributed / inter-cloud architectures for many reasons, among them data sovereignty, improving resiliency, and reducing costs.
2. Just because your DW does not support indexes does not mean you do not need them.
3. One of the most common reasons DWs fail is that they are led by IT rather than the business. The DW should be driven directly by business needs and the most important initiatives.

"The hardest part of ETL is not building the connectors, it is maintaining them." Truer words were never spoken. I really enjoyed this episode with Michel Tricot, CEO & co-founder of Airbyte, where we discuss all things data integration and connectors.

Top 3 value bombs:
1. The future of ETL/ELT integration connectors may lie with open source. Many closed source data integration tools only create connectors when the ROI is there, which leaves many tools out, and speed to market can be slow. Airbyte has created a modular open source framework that lets the community quickly build reliable data connectors (see the sketch below).
2. As Airbyte starts to monetize, it has some innovative methods: for example, a developer from the open source community who creates and maintains a connector could receive a small percentage of the revenue associated with that connector.
3. Data governance and logging will become increasingly important in the coming years.

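Part of what makes the framework modular is a simple contract: a connector is just a program that writes JSON messages to stdout. A heavily simplified sketch of the record-emitting half of a source, based on the general shape of the Airbyte protocol (the stream name and data are invented, and a real connector also implements the spec/check/discover commands):

    import json
    import sys
    import time

    def emit_record(stream: str, data: dict) -> None:
        # One Airbyte-style RECORD message per line on stdout.
        message = {
            "type": "RECORD",
            "record": {
                "stream": stream,
                "data": data,
                "emitted_at": int(time.time() * 1000),
            },
        }
        sys.stdout.write(json.dumps(message) + "\n")

    # A toy "read": anything that can print lines like this can act as a source.
    for user_id in range(3):
        emit_record("users", {"id": user_id, "name": f"user-{user_id}"})
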
This episode features Gleb Mezhanskiy, co-founder & CEO @ Datafold. During our discussion we talk all about data observability and how to improve your data quality. Before Datafold, Gleb was a founding member of the data teams at Lyft and Autodesk, where he built sophisticated data platforms and developed tooling to improve productivity and data quality.

Top 3 value bombs:
1. The foundation of any data observability platform is the data catalog.
2. Data observability becomes increasingly difficult as your number of data sets grows if you do not define a process to track and monitor your data.
3. Do not surprise your report consumers: with the right data observability process and regression testing, you can know how your metrics will change in prod before you deploy.

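A minimal sketch of the regression-testing idea: compare a metric between the current production table and the version a pending change would produce, and fail the check before deployment. This is a generic pandas illustration, not Datafold's actual product, and the file names are hypothetical:

    import pandas as pd  # pip install pandas

    # Hypothetical extracts of the same report, built from prod and from the new code.
    prod = pd.read_parquet("daily_revenue_prod.parquet")
    staged = pd.read_parquet("daily_revenue_staged.parquet")

    # Compare the metric per day and flag drift beyond a 1% tolerance.
    diff = prod.merge(staged, on="day", suffixes=("_prod", "_staged"))
    diff["delta"] = (diff["revenue_staged"] - diff["revenue_prod"]).abs()
    regressions = diff[diff["delta"] > 0.01 * diff["revenue_prod"].abs()]

    if not regressions.empty:
        # Fail the deployment check instead of surprising report consumers.
        raise SystemExit(f"Metric drift on {len(regressions)} day(s):\n{regressions}")
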
This episode features Arjun Narayan, co-founder & CEO @ Materialize. During our discussion we talk all about transforming streaming data, the do's and don'ts, and how Materialize is changing the streaming landscape.

Top 3 value bombs:
1. When making schema changes, organizations should strive to create only forward-compatible changes. Consumers can then keep consuming the data model without breaking; they may simply not see the newly added column.
2. Materialized computations are bound to change in the future, whether due to bugs or changed requirements. Kafka lets you replay all your previous messages to rebuild the calculation.
3. The cloud is still young; over the coming years we will see many more technologies built specifically with a cloud focus.

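Materialize speaks the PostgreSQL wire protocol, so a standard Postgres driver can define and query incrementally maintained views. A rough sketch with psycopg2; the connection details are placeholders, the "orders" source is assumed to already exist, and source-creation syntax varies by Materialize version, so only the view is shown:

    import psycopg2  # pip install psycopg2-binary

    # Materialize accepts ordinary Postgres connections (placeholder host).
    conn = psycopg2.connect(host="materialize.example.com", port=6875,
                            user="materialize", dbname="materialize")
    conn.autocommit = True
    cur = conn.cursor()

    # Define a view that Materialize keeps up to date as new events arrive,
    # instead of recomputing the aggregate on every query.
    cur.execute("""
        CREATE MATERIALIZED VIEW revenue_by_region AS
        SELECT region, sum(amount) AS revenue
        FROM orders
        GROUP BY region
    """)

    # Reading the view returns the current, already-computed answer.
    cur.execute("SELECT * FROM revenue_by_region")
    print(cur.fetchall())
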
This episode features Jean-Yves Stephan, co-founder & CEO @ Data Mechanics (recently acquired by Spot by NetApp). During our discussion we talk about optimizing Spark to run in the cloud at low cost.

Top 3 value bombs:
1. Running Spark CAN be expensive, but smart automation (i.e. tuning node type, memory, and CPU) can reduce your current operating costs by 50-75%.
2. Spot instances can lower your costs by utilizing unused instance capacity.
3. Serverless architectures and containers allow for more flexibility in deployment models and scalability.

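A minimal sketch of the kind of tuning discussed, using pyspark: right-sizing executors and enabling dynamic allocation so the cluster shrinks when idle. The specific values are illustrative placeholders, not recommendations:

    from pyspark.sql import SparkSession  # pip install pyspark

    # master("local[2]") keeps this runnable on a laptop; in production the
    # master comes from your cluster manager via spark-submit.
    spark = (
        SparkSession.builder.appName("cost-tuned-job")
        .master("local[2]")
        .config("spark.executor.memory", "4g")  # placeholder sizing
        .config("spark.executor.cores", "2")
        .config("spark.dynamicAllocation.enabled", "true")
        .config("spark.dynamicAllocation.minExecutors", "1")
        .config("spark.dynamicAllocation.maxExecutors", "20")
        .getOrCreate()
    )

    print(spark.sparkContext.getConf().get("spark.dynamicAllocation.enabled"))
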
This episode features Josh Benamram, co-founder of Databand. Databand is a company that helps engineering teams achieve better observability and control over their tech stack.

Top 3 value bombs:
1. When observing our data, we should be looking at both the data and the pipelines.
2. Don't wait for an incorrect metric to show up in a board meeting to make data quality a priority.
3. Clear SLAs on what data quality means across the organization are essential.

Travis welcomes Saket Saurabh to the podcast, who provides a window into the world of data management and the self-service options that are democratizing it. Co-founder and CEO of Nexla, Saket has a passion for data and infrastructure and for improving its flow among partners, customers, and vendors. Nexla automates various data engineering tasks, intelligently creates an abstraction over data, and enables collaboration among people at different skill levels. Named a 2021 Gartner Cool Vendor, Nexla is a leader in data preparation, integration, and tracking.

Top 3 value bombs:
1. Data architectures overall need to be more abstract to enable future flexibility.
2. The first stumbling block for most organizations is not knowing where their data is located.
3. ETL is dead. The ELT model has become central, while streaming and real-time use cases are becoming prevalent.

In this episode, we speak with Rob Hedgpeth, director of developer relations at MariaDB. We explore all things MariaDB: its capabilities and when you should consider it for your next project.

Top 3 value bombs:
1. MariaDB follows a shared-nothing architecture and supports distributed SQL for unlimited on-demand scaling.
2. MariaDB can handle many types of storage (i.e. document store, graph, and spatial).
3. When deciding on your next relational database, don't just look at the options within your cloud service provider; include database-as-a-service offerings in your analysis (i.e. SkySQL, MariaDB's commercial product).

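A minimal sketch of connecting with MariaDB's official Python connector; the host and credentials are placeholders:

    import mariadb  # pip install mariadb

    # Connect to a MariaDB server or SkySQL instance (placeholder credentials).
    conn = mariadb.connect(
        host="db.example.com",
        port=3306,
        user="app",
        password="secret",
        database="inventory",
    )

    cur = conn.cursor()
    cur.execute("SELECT VERSION()")
    print(cur.fetchone()[0])
    conn.close()
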
In this episode, we speak with Lior Gavish, co-founder of Monte Carlo, to explore all things data quality. Monte Carlo is a data lineage and observability tool that lowers your data downtime.

Top 3 value bombs:
1. Data products should be thought of in their entirety, from the source to the consumer.
2. No single data stakeholder can solve data quality issues; it takes collaboration among data engineers, the business, and data consumers, plus software to help automate aspects of cataloging and capturing meaningful metadata.
3. Good data quality processes should alert you to anomalies in your metrics before your data consumers do.

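A minimal sketch of the alert-before-your-consumers idea: flag a table whose latest daily row count drifts far from its recent history. This is a generic z-score illustration, not Monte Carlo's actual detection logic, and the numbers are made up:

    import statistics

    # Hypothetical daily row counts for a table, most recent last.
    history = [10120, 9985, 10240, 10075, 9990, 10110, 4200]
    baseline, latest = history[:-1], history[-1]

    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    z = (latest - mean) / stdev

    # Alert before a consumer notices the broken dashboard.
    if abs(z) > 3:
        print(f"Anomaly: today's row count {latest} is {z:.1f} sigma from the norm")
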