Data Engineering Podcast
Author: Tobias Macey
© 2023 Boundless Notions, LLC.
Description
This show goes behind the scenes of the tools, techniques, and difficulties associated with the discipline of data engineering. Databases, workflows, automation, and data manipulation are just some of the topics that you will find here.
Summary
If your business metrics looked weird tomorrow, would you know about it first? Anomaly detection is focused on identifying those outliers for you, so that you are the first to know when a business-critical dashboard isn't right. Unfortunately, it can often be complex or expensive to incorporate anomaly detection into your data platform. Andrew Maguire got tired of solving that problem for each of the different roles he has ended up in, so he created the open source Anomstack project. In this episode he shares what it is, how it works, and how you can start using it today to get notified when the critical metrics in your business aren't quite right.
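To make the episode's topic concrete, here is a minimal, standalone sketch of metric-level outlier detection using PyOD (one of the libraries linked in the show notes below). The metric values, contamination rate, and model choice are illustrative assumptions, not Anomstack's actual configuration; Anomstack automates the query-train-score-alert loop around this kind of core.

```python
# A toy metric-level anomaly detector, assuming PyOD and NumPy are
# installed (pip install pyod numpy). Values are made up for illustration.
import numpy as np
from pyod.models.iforest import IForest

# A daily business metric: steady around 100, with one obvious outlier.
metric = np.array([102, 98, 101, 99, 103, 97, 100, 250, 101, 99], dtype=float)

# PyOD models expect a 2D feature matrix; use the raw value as the feature.
X = metric.reshape(-1, 1)

detector = IForest(contamination=0.1, random_state=42)
detector.fit(X)

# labels_: 0 = inlier, 1 = outlier; decision_scores_: higher = more anomalous.
for day, (value, label, score) in enumerate(
    zip(metric, detector.labels_, detector.decision_scores_)
):
    print(f"day={day} value={value:.0f} score={score:.3f} "
          f"{'ANOMALY' if label else 'ok'}")
```

In a real deployment the detector would be retrained on a schedule and the flagged rows routed to an alerting channel, which is the part Anomstack packages up.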
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
You shouldn't have to throw away the database to build with fast-changing data. You should be able to keep the familiarity of SQL and the proven architecture of cloud warehouses, but swap the decades-old batch computation model for an efficient incremental engine to get complex queries that are always up-to-date. With Materialize, you can! It’s the only true SQL streaming database built from the ground up to meet the needs of modern data products. Whether it’s real-time dashboarding and analytics, personalization and segmentation or automation and alerting, Materialize gives you the ability to work with fresh, correct, and scalable results — all in a familiar SQL interface. Go to dataengineeringpodcast.com/materialize (https://www.dataengineeringpodcast.com/materialize) today to get 2 weeks free!
Introducing RudderStack Profiles. RudderStack Profiles takes the SaaS guesswork and SQL grunt work out of building complete customer profiles so you can quickly ship actionable, enriched data to every downstream team. You specify the customer traits, then Profiles runs the joins and computations for you to create complete customer profiles. Get all of the details and try the new product today at dataengineeringpodcast.com/rudderstack (https://www.dataengineeringpodcast.com/rudderstack)
Data projects are notoriously complex. With multiple stakeholders to manage across varying backgrounds and toolchains, even simple reports can become unwieldy to maintain. Miro is your single pane of glass where everyone can discover, track, and collaborate on your organization's data. I especially like the ability to combine your technical diagrams with data documentation and dependency mapping, allowing your data engineers and data consumers to communicate seamlessly about your projects. Find simplicity in your most complex projects with Miro. Your first three Miro boards are free when you sign up today at dataengineeringpodcast.com/miro (https://www.dataengineeringpodcast.com/miro). That’s three free boards at dataengineeringpodcast.com/miro (https://www.dataengineeringpodcast.com/miro).
Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst powers petabyte-scale SQL analytics fast, at a fraction of the cost of traditional methods, so that you can meet all your data needs ranging from AI to data applications to complete analytics. Trusted by teams of all sizes, including Comcast and Doordash, Starburst is a data lake analytics platform that delivers the adaptability and flexibility a lakehouse ecosystem promises. And Starburst does all of this on an open architecture with first-class support for Apache Iceberg, Delta Lake and Hudi, so you always maintain ownership of your data. Want to see Starburst in action? Go to dataengineeringpodcast.com/starburst (https://www.dataengineeringpodcast.com/starburst) and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino.
Your host is Tobias Macey and today I'm interviewing Andrew Maguire about his work on the Anomstack project and how you can use it to run your own anomaly detection for your metrics
Interview
Introduction
How did you get involved in the area of data management?
Can you describe what Anomstack is and the story behind it?
What are your goals for this project?
What other tools/products might teams be evaluating while they consider Anomstack?
In the context of Anomstack, what constitutes a "metric"?
What are some examples of useful metrics that a data team might want to monitor?
You put in a lot of work to make Anomstack as easy as possible to get started with. How did this focus on ease of adoption influence the way that you approached the overall design of the project?
What are the core capabilities and constraints that you selected to provide the focus and architecture of the project?
Can you describe how Anomstack is implemented?
How have the design and goals of the project changed since you first started working on it?
What are the steps to getting Anomstack running and integrated as part of the operational fabric of a data platform?
What are the sharp edges that are still present in the system?
What are the interfaces that are available for teams to customize or enhance the capabilities of Anomstack?
What are the most interesting, innovative, or unexpected ways that you have seen Anomstack used?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on Anomstack?
When is Anomstack the wrong choice?
What do you have planned for the future of Anomstack?
Contact Info
LinkedIn (https://www.linkedin.com/in/andrewm4894/)
Twitter (https://twitter.com/@andrewm4894)
GitHub (http://github.com/andrewm4894)
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements
Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ (https://www.pythonpodcast.com) covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast (https://www.themachinelearningpodcast.com) helps you go from idea to production with machine learning.
Visit the site (https://www.dataengineeringpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes.
If you've learned something or tried out a project from the show, then tell us about it! Email hosts@dataengineeringpodcast.com (mailto:hosts@dataengineeringpodcast.com) with your story.
To help other people find the show, please leave a review on Apple Podcasts (https://podcasts.apple.com/us/podcast/data-engineering-podcast/id1193040557) and tell your friends and co-workers.
Links
Anomstack Github repo (http://github.com/andrewm4894/anomstack)
Airflow Anomaly Detection Provider Github repo (https://github.com/andrewm4894/airflow-provider-anomaly-detection)
Netdata (https://www.netdata.cloud/)
Metric Tree (https://www.datacouncil.ai/talks/designing-and-building-metric-trees)
Semantic Layer (https://en.wikipedia.org/wiki/Semantic_layer)
Prometheus (https://prometheus.io/)
Anodot (https://www.anodot.com/)
Chaos Genius (https://www.chaosgenius.io/)
Metaplane (https://www.metaplane.dev/)
Anomalo (https://www.anomalo.com/)
PyOD (https://pyod.readthedocs.io/)
Airflow (https://airflow.apache.org/)
DuckDB (https://duckdb.org/)
Anomstack Gallery (https://github.com/andrewm4894/anomstack/tree/main/gallery)
Dagster (https://dagster.io/)
InfluxDB (https://www.influxdata.com/)
TimeGPT (https://docs.nixtla.io/docs/timegpt_quickstart)
Prophet (https://facebook.github.io/prophet/)
GreyKite (https://linkedin.github.io/greykite/)
OpenLineage (https://openlineage.io/)
The intro and outro music is from The Hug (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Love_death_and_a_drunken_monkey/04_-_The_Hug) by The Freak Fandango Orchestra (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/) / CC BY-SA (http://creativecommons.org/licenses/by-sa/3.0/)
Summary
The first step of data pipelines is to move the data to a place where you can process and prepare it for its eventual purpose. Data transfer systems are a critical component of data enablement, and building them to support large volumes of information is a complex endeavor. Andrei Tserakhau has dedicated his career to this problem, and in this episode he shares the lessons that he has learned and the work he is doing on his most recent data transfer system at DoubleCloud.
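For readers new to change-data capture, the pattern at the center of this conversation, here is a rough sketch of a CDC consumer reading a Postgres logical replication stream with psycopg2. The DSN and slot name are hypothetical, and it assumes a server configured with wal_level=logical and a slot created with the wal2json output plugin; this illustrates the pattern, not DoubleCloud's implementation.

```python
# Toy CDC consumer, assuming psycopg2 and a Postgres server with
# wal_level=logical plus a wal2json replication slot, created via:
#   SELECT pg_create_logical_replication_slot('demo_slot', 'wal2json');
import psycopg2
import psycopg2.extras

conn = psycopg2.connect(
    "dbname=appdb user=replicator",  # hypothetical connection string
    connection_factory=psycopg2.extras.LogicalReplicationConnection,
)
cur = conn.cursor()
cur.start_replication(slot_name="demo_slot", decode=True)

def handle_change(msg):
    # msg.payload is a JSON description of each insert/update/delete.
    print(msg.payload)
    # Acknowledge progress so the server can recycle WAL behind us.
    msg.cursor.send_feedback(flush_lsn=msg.data_start)

cur.consume_stream(handle_change)  # blocks, invoking the callback per change
```

A production transfer system adds the hard parts the episode digs into: batching, schema mapping, delivery guarantees, and backpressure.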
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
Introducing RudderStack Profiles. RudderStack Profiles takes the SaaS guesswork and SQL grunt work out of building complete customer profiles so you can quickly ship actionable, enriched data to every downstream team. You specify the customer traits, then Profiles runs the joins and computations for you to create complete customer profiles. Get all of the details and try the new product today at dataengineeringpodcast.com/rudderstack (https://www.dataengineeringpodcast.com/rudderstack)
You shouldn't have to throw away the database to build with fast-changing data. You should be able to keep the familiarity of SQL and the proven architecture of cloud warehouses, but swap the decades-old batch computation model for an efficient incremental engine to get complex queries that are always up-to-date. With Materialize, you can! It’s the only true SQL streaming database built from the ground up to meet the needs of modern data products. Whether it’s real-time dashboarding and analytics, personalization and segmentation or automation and alerting, Materialize gives you the ability to work with fresh, correct, and scalable results — all in a familiar SQL interface. Go to dataengineeringpodcast.com/materialize (https://www.dataengineeringpodcast.com/materialize) today to get 2 weeks free!
This episode is brought to you by Datafold – a testing automation platform for data engineers that finds data quality issues for every part of your data workflow, from migration to deployment. Datafold has recently launched a 3-in-1 product experience to support accelerated data migrations. With Datafold, you can seamlessly plan, translate, and validate data across systems, massively accelerating your migration project. Datafold leverages cross-database diffing to compare tables across environments in seconds, column-level lineage for smarter migration planning, and a SQL translator to make moving your SQL scripts easier. Learn more about Datafold by visiting dataengineeringpodcast.com/datafold (https://www.dataengineeringpodcast.com/datafold) today!
Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst powers petabyte-scale SQL analytics fast, at a fraction of the cost of traditional methods, so that you can meet all your data needs ranging from AI to data applications to complete analytics. Trusted by teams of all sizes, including Comcast and Doordash, Starburst is a data lake analytics platform that delivers the adaptability and flexibility a lakehouse ecosystem promises. And Starburst does all of this on an open architecture with first-class support for Apache Iceberg, Delta Lake and Hudi, so you always maintain ownership of your data. Want to see Starburst in action? Go to dataengineeringpodcast.com/starburst (https://www.dataengineeringpodcast.com/starburst) and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino.
Your host is Tobias Macey and today I'm interviewing Andrei Tserakhau about operationalizing high bandwidth and low-latency change-data capture
Interview
Introduction
How did you get involved in the area of data management?
Your most recent project involves operationalizing a generalized data transfer service. What was the original problem that you were trying to solve?
What were the shortcomings of other options in the ecosystem that led you to building a new system?
What was the design of your initial solution to the problem?
What are the sharp edges that you had to deal with to operate and use that initial implementation?
What were the limitations of the system as you started to scale it?
Can you describe the current architecture of your data transfer platform?
What are the capabilities and constraints that you are optimizing for?
As you move beyond the initial use case that started you down this path, what are the complexities involved in generalizing to add new functionality or integrate with additional platforms?
What are the most interesting, innovative, or unexpected ways that you have seen your data transfer service used?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on the data transfer system?
When is DoubleCloud Data Transfer the wrong choice?
What do you have planned for the future of DoubleCloud Data Transfer?
Contact Info
LinkedIn (https://www.linkedin.com/in/andrei-tserakhau/)
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements
Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ (https://www.pythonpodcast.com) covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast (https://www.themachinelearningpodcast.com) helps you go from idea to production with machine learning.
Visit the site (https://www.dataengineeringpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes.
If you've learned something or tried out a project from the show, then tell us about it! Email hosts@dataengineeringpodcast.com (mailto:hosts@dataengineeringpodcast.com) with your story.
To help other people find the show, please leave a review on Apple Podcasts (https://podcasts.apple.com/us/podcast/data-engineering-podcast/id1193040557) and tell your friends and co-workers.
Links
DoubleCloud (https://double.cloud/)
Kafka (https://kafka.apache.org/)
MapReduce (https://en.wikipedia.org/wiki/MapReduce)
Change Data Capture (https://en.wikipedia.org/wiki/Change_data_capture)
Clickhouse (https://clickhouse.com/)
Podcast Episode (https://www.dataengineeringpodcast.com/clickhouse-data-warehouse-episode-88/)
Iceberg (https://iceberg.apache.org/)
Podcast Episode (https://www.dataengineeringpodcast.com/iceberg-with-ryan-blue-episode-52/)
Delta Lake (https://delta.io/)
Podcast Episode (https://www.dataengineeringpodcast.com/delta-lake-data-lake-episode-85/)
dbt (https://www.getdbt.com/)
OpenMetadata (https://open-metadata.org/)
Podcast Episode (https://www.dataengineeringpodcast.com/openmetadata-universal-metadata-layer-episode-237/)
The intro and outro music is from The Hug (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Love_death_and_a_drunken_monkey/04_-_The_Hug) by The Freak Fandango Orchestra (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/) / CC BY-SA (http://creativecommons.org/licenses/by-sa/3.0/)
Speaker - Andrei Tserakhau, DoubleCloud Tech Lead. He has over 10 years of IT engineering experience, and for the last 4 years he has been working on distributed systems with a focus on data delivery systems.
Summary
Building a data platform that is enjoyable and accessible for all of its end users is a substantial challenge. One of the core complexities that needs to be addressed is the fractal set of integrations that need to be managed across the individual components. In this episode Tobias Macey shares his thoughts on the challenges that he is facing as he prepares to build the next set of architectural layers for his data platform to enable a larger audience to start accessing the data being managed by his team.
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
Introducing RudderStack Profiles. RudderStack Profiles takes the SaaS guesswork and SQL grunt work out of building complete customer profiles so you can quickly ship actionable, enriched data to every downstream team. You specify the customer traits, then Profiles runs the joins and computations for you to create complete customer profiles. Get all of the details and try the new product today at dataengineeringpodcast.com/rudderstack (https://www.dataengineeringpodcast.com/rudderstack)
You shouldn't have to throw away the database to build with fast-changing data. You should be able to keep the familiarity of SQL and the proven architecture of cloud warehouses, but swap the decades-old batch computation model for an efficient incremental engine to get complex queries that are always up-to-date. With Materialize, you can! It’s the only true SQL streaming database built from the ground up to meet the needs of modern data products. Whether it’s real-time dashboarding and analytics, personalization and segmentation or automation and alerting, Materialize gives you the ability to work with fresh, correct, and scalable results — all in a familiar SQL interface. Go to dataengineeringpodcast.com/materialize (https://www.dataengineeringpodcast.com/materialize) today to get 2 weeks free!
Developing event-driven pipelines is going to be a lot easier - Meet Functions! Memphis functions enable developers and data engineers to build an organizational toolbox of functions to process, transform, and enrich ingested events “on the fly” in a serverless manner using AWS Lambda syntax, without boilerplate, orchestration, error handling, and infrastructure in almost any language, including Go, Python, JS, .NET, Java, SQL, and more. Go to dataengineeringpodcast.com/memphis (https://www.dataengineeringpodcast.com/memphis) today to get started!
Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst powers petabyte-scale SQL analytics fast, at a fraction of the cost of traditional methods, so that you can meet all your data needs ranging from AI to data applications to complete analytics. Trusted by teams of all sizes, including Comcast and Doordash, Starburst is a data lake analytics platform that delivers the adaptability and flexibility a lakehouse ecosystem promises. And Starburst does all of this on an open architecture with first-class support for Apache Iceberg, Delta Lake and Hudi, so you always maintain ownership of your data. Want to see Starburst in action? Go to dataengineeringpodcast.com/starburst (https://www.dataengineeringpodcast.com/starburst) and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino.
Your host is Tobias Macey and today I'll be sharing an update on my own journey of building a data platform, with a particular focus on the challenges of tool integration and maintaining a single source of truth
Interview
Introduction
How did you get involved in the area of data management?
data sharing
weight of history
existing integrations with dbt
switching cost for e.g. SQLMesh
de facto standard of Airflow
Single source of truth
permissions management across application layers
Database engine
Storage layer in a lakehouse
Presentation/access layer (BI)
Data flows
dbt -> table level lineage
orchestration engine -> pipeline flows
task based vs. asset based (see the sketch after this outline)
Metadata platform as the logical place for horizontal view
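To ground the "task based vs. asset based" note above: task-based orchestrators (Airflow's classic model) schedule opaque steps in order, while asset-based orchestrators such as Dagster declare the data artifacts themselves and derive execution order and lineage from their dependencies. A minimal, hypothetical Dagster sketch:

```python
# Asset-based orchestration in Dagster (pip install dagster). The asset
# names are hypothetical; Dagster infers that orders_cleaned depends on
# raw_orders from the parameter name, giving lineage for free.
from dagster import Definitions, asset

@asset
def raw_orders() -> list[dict]:
    # Stand-in for an extract step (e.g. an Airbyte sync or a SQL query).
    return [{"id": 1, "total": 20.0}, {"id": 2, "total": -5.0}]

@asset
def orders_cleaned(raw_orders: list[dict]) -> list[dict]:
    # Stand-in for a dbt-style transformation.
    return [order for order in raw_orders if order["total"] >= 0]

defs = Definitions(assets=[raw_orders, orders_cleaned])
```

Declared dependencies like these are what a metadata platform can stitch together with dbt's table-level lineage into the horizontal view mentioned above.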
Contact Info
LinkedIn (https://linkedin.com/in/tmacey)
Website (https://www.dataengineeringpodcast.com)
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements
Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ (https://www.pythonpodcast.com) covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast (https://www.themachinelearningpodcast.com) helps you go from idea to production with machine learning.
Visit the site (https://www.dataengineeringpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes.
If you've learned something or tried out a project from the show, then tell us about it! Email hosts@dataengineeringpodcast.com (mailto:hosts@dataengineeringpodcast.com) with your story.
To help other people find the show, please leave a review on Apple Podcasts (https://podcasts.apple.com/us/podcast/data-engineering-podcast/id1193040557) and tell your friends and co-workers.
Links
Monologue Episode On Data Platform Design (https://www.dataengineeringpodcast.com/data-platform-design-episode-268)
Monologue Episode On Leaky Abstractions (https://www.dataengineeringpodcast.com/abstractions-and-technical-debt-episode-374)
Airbyte (https://airbyte.com/)
Podcast Episode (https://www.dataengineeringpodcast.com/airbyte-open-source-data-integration-episode-173/)
Trino (https://trino.io/)
Dagster (https://dagster.io/)
dbt (https://www.getdbt.com/)
Snowflake (https://www.snowflake.com/en/)
BigQuery (https://cloud.google.com/bigquery)
OpenMetadata (https://open-metadata.org/)
OpenLineage (https://openlineage.io/)
Data Platform Shadow IT Episode (https://www.dataengineeringpodcast.com/shadow-it-data-analytics-episode-121)
Preset (https://preset.io/)
LightDash (https://www.lightdash.com/)
Podcast Episode (https://www.dataengineeringpodcast.com/lightdash-exploratory-business-intelligence-episode-232/)
SQLMesh (https://sqlmesh.readthedocs.io/)
Podcast Episode (https://www.dataengineeringpodcast.com/sqlmesh-open-source-dataops-episode-380)
Airflow (https://airflow.apache.org/)
Spark (https://spark.apache.org/)
Flink (https://flink.apache.org/)
Tabular (https://tabular.io/)
Iceberg (https://iceberg.apache.org/)
Open Policy Agent (https://www.openpolicyagent.org/)
The intro and outro music is from The Hug (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Love_death_and_a_drunken_monkey/04_-_The_Hug) by The Freak Fandango Orchestra (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/) / CC BY-SA (http://creativecommons.org/licenses/by-sa/3.0/)
Summary
The dbt project has become overwhelmingly popular across analytics and data engineering teams. While it is easy to adopt, there are many potential pitfalls. Dustin Dorsey and Cameron Cyr co-authored a practical guide to building your dbt project. In this episode they share their hard-won wisdom about how to build and scale your dbt projects.
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
Data projects are notoriously complex. With multiple stakeholders to manage across varying backgrounds and toolchains, even simple reports can become unwieldy to maintain. Miro is your single pane of glass where everyone can discover, track, and collaborate on your organization's data. I especially like the ability to combine your technical diagrams with data documentation and dependency mapping, allowing your data engineers and data consumers to communicate seamlessly about your projects. Find simplicity in your most complex projects with Miro. Your first three Miro boards are free when you sign up today at dataengineeringpodcast.com/miro (https://www.dataengineeringpodcast.com/miro). That’s three free boards at dataengineeringpodcast.com/miro (https://www.dataengineeringpodcast.com/miro).
Introducing RudderStack Profiles. RudderStack Profiles takes the SaaS guesswork and SQL grunt work out of building complete customer profiles so you can quickly ship actionable, enriched data to every downstream team. You specify the customer traits, then Profiles runs the joins and computations for you to create complete customer profiles. Get all of the details and try the new product today at dataengineeringpodcast.com/rudderstack (https://www.dataengineeringpodcast.com/rudderstack)
You shouldn't have to throw away the database to build with fast-changing data. You should be able to keep the familiarity of SQL and the proven architecture of cloud warehouses, but swap the decades-old batch computation model for an efficient incremental engine to get complex queries that are always up-to-date. With Materialize, you can! It’s the only true SQL streaming database built from the ground up to meet the needs of modern data products. Whether it’s real-time dashboarding and analytics, personalization and segmentation or automation and alerting, Materialize gives you the ability to work with fresh, correct, and scalable results — all in a familiar SQL interface. Go to dataengineeringpodcast.com/materialize (https://www.dataengineeringpodcast.com/materialize) today to get 2 weeks free!
Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst powers petabyte-scale SQL analytics fast, at a fraction of the cost of traditional methods, so that you can meet all your data needs ranging from AI to data applications to complete analytics. Trusted by teams of all sizes, including Comcast and Doordash, Starburst is a data lake analytics platform that delivers the adaptability and flexibility a lakehouse ecosystem promises. And Starburst does all of this on an open architecture with first-class support for Apache Iceberg, Delta Lake and Hudi, so you always maintain ownership of your data. Want to see Starburst in action? Go to dataengineeringpodcast.com/starburst (https://www.dataengineeringpodcast.com/starburst) and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino.
Your host is Tobias Macey and today I'm interviewing Dustin Dorsey and Cameron Cyr about how to design your dbt projects
Interview
Introduction
How did you get involved in the area of data management?
What was your path to adoption of dbt?
What did you use prior to its existence?
When/why/how did you start using it?
What are some of the common challenges that teams experience when getting started with dbt?
How does prior experience in analytics and/or software engineering impact those outcomes?
You recently wrote a book to give a crash course in best practices for dbt. What motivated you to invest that time and effort?
What new lessons did you learn about dbt in the process of writing the book?
The introduction of dbt is largely responsible for catalyzing the growth of "analytics engineering". As practitioners in the space, what do you see as the net result of that trend?
What are the lessons that we all need to invest in independent of the tool?
For someone starting a new dbt project today, can you talk through the decisions that will be most critical for ensuring future success?
As dbt projects scale, what are the elements of technical debt that are most likely to slow down engineers?
What are the capabilities in the dbt framework that can be used to mitigate the effects of that debt?
What tools or processes outside of dbt can help alleviate the incidental complexity of a large dbt project?
What are the most interesting, innovative, or unexpected ways that you have seen dbt used?
What are the most interesting, unexpected, or challenging lessons that you have learned while working with dbt? (as engineers and/or as authors)
What is on your personal wish-list for the future of dbt (or its competition?)?
Contact Info
Dustin
LinkedIn (https://www.linkedin.com/in/dustindorsey/)
Cameron
LinkedIn (https://www.linkedin.com/in/cameron-cyr/)
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements
Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ (https://www.pythonpodcast.com) covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast (https://www.themachinelearningpodcast.com) helps you go from idea to production with machine learning.
Visit the site (https://www.dataengineeringpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes.
If you've learned something or tried out a project from the show, then tell us about it! Email hosts@dataengineeringpodcast.com (mailto:hosts@dataengineeringpodcast.com) with your story.
To help other people find the show, please leave a review on Apple Podcasts (https://podcasts.apple.com/us/podcast/data-engineering-podcast/id1193040557) and tell your friends and co-workers.
Links
Biobot Analytics (https://biobot.io/)
Breezeway (https://www.breezeway.io/)
dbt (https://www.getdbt.com/)
Podcast Episode (https://www.dataengineeringpodcast.com/dbt-data-analytics-episode-81/)
Synapse Analytics (https://azure.microsoft.com/en-us/products/synapse-analytics/)
Snowflake (https://www.snowflake.com/en/)
Podcast Episode (https://www.dataengineeringpodcast.com/snowflakedb-cloud-data-warehouse-episode-110/)
Fivetran (https://www.fivetran.com/)
Podcast Episode (https://www.dataengineeringpodcast.com/fivetran-data-replication-episode-93/)
Analytics Power Hour (https://analyticshour.io/)
DDL == Data Definition Language (https://en.wikipedia.org/wiki/Data_definition_language)
DML == Data Manipulation Language (https://en.wikipedia.org/wiki/Data_manipulation_language)
dbt codegen (https://github.com/dbt-labs/dbt-codegen)
Unlocking dbt (https://amzn.to/49BhACq) book (affiliate link)
dbt Mesh (https://www.getdbt.com/product/dbt-mesh)
dbt Semantic Layer (https://www.getdbt.com/product/semantic-layer)
GitHub Actions (https://github.com/features/actions)
Metaplane (https://www.metaplane.dev/)
Podcast Episode (https://www.dataengineeringpodcast.com/metaplane-data-observability-platform-episode-253/)
DataTune Conference (https://www.datatuneconf.com/)
The intro and outro music is from The Hug (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Love_death_and_a_drunken_monkey/04_-_The_Hug) by The Freak Fandango Orchestra (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/) / CC BY-SA (http://creativecommons.org/licenses/by-sa/3.0/)
Summary
Software development involves an interesting balance of creativity and repetition of patterns. Generative AI has accelerated the ability of developer tools to provide useful suggestions that speed up the work of engineers. Tabnine is one of the main platforms offering an AI-powered assistant for software engineers. In this episode Eran Yahav shares the journey that he has taken in building this product and the ways that it enhances the ability of humans to get their work done, and when the humans have to adapt to the tool.
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
Introducing RudderStack Profiles. RudderStack Profiles takes the SaaS guesswork and SQL grunt work out of building complete customer profiles so you can quickly ship actionable, enriched data to every downstream team. You specify the customer traits, then Profiles runs the joins and computations for you to create complete customer profiles. Get all of the details and try the new product today at dataengineeringpodcast.com/rudderstack (https://www.dataengineeringpodcast.com/rudderstack)
This episode is brought to you by Datafold – a testing automation platform for data engineers that finds data quality issues before the code and data are deployed to production. Datafold leverages data-diffing to compare production and development environments and column-level lineage to show you the exact impact of every code change on data, metrics, and BI tools, keeping your team productive and stakeholders happy. Datafold integrates with dbt, the modern data stack, and seamlessly plugs in your data CI for team-wide and automated testing. If you are migrating to a modern data stack, Datafold can also help you automate data and code validation to speed up the migration. Learn more about Datafold by visiting dataengineeringpodcast.com/datafold (https://www.dataengineeringpodcast.com/datafold)
Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst powers petabyte-scale SQL analytics fast, at a fraction of the cost of traditional methods, so that you can meet all your data needs ranging from AI to data applications to complete analytics. Trusted by teams of all sizes, including Comcast and Doordash, Starburst is a data lake analytics platform that delivers the adaptability and flexibility a lakehouse ecosystem promises. And Starburst does all of this on an open architecture with first-class support for Apache Iceberg, Delta Lake and Hudi, so you always maintain ownership of your data. Want to see Starburst in action? Go to dataengineeringpodcast.com/starburst (https://www.dataengineeringpodcast.com/starburst) and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino.
You shouldn't have to throw away the database to build with fast-changing data. You should be able to keep the familiarity of SQL and the proven architecture of cloud warehouses, but swap the decades-old batch computation model for an efficient incremental engine to get complex queries that are always up-to-date. With Materialize, you can! It’s the only true SQL streaming database built from the ground up to meet the needs of modern data products. Whether it’s real-time dashboarding and analytics, personalization and segmentation or automation and alerting, Materialize gives you the ability to work with fresh, correct, and scalable results — all in a familiar SQL interface. Go to dataengineeringpodcast.com/materialize (https://www.dataengineeringpodcast.com/materialize) today to get 2 weeks free!
Your host is Tobias Macey and today I'm interviewing Eran Yahav about building an AI-powered developer assistant at Tabnine
Interview
Introduction
How did you get involved in machine learning?
Can you describe what Tabnine is and the story behind it?
What are the individual and organizational motivations for using AI to generate code?
What are the real-world limitations of generative AI for creating software? (e.g. size/complexity of the outputs, naming conventions, etc.)
What are the elements of skepticism/oversight that developers need to exercise while using a system like Tabnine?
What are some of the primary ways that developers interact with Tabnine during their development workflow?
Are there any particular styles of software for which an AI is more appropriate/capable? (e.g. webapps vs. data pipelines vs. exploratory analysis, etc.)
For natural languages there is a strong bias toward English in the current generation of LLMs. How does that translate into computer languages? (e.g. Python, Java, C++, etc.)
Can you describe the structure and implementation of Tabnine?
Do you rely primarily on a single core model, or do you have multiple models with subspecialization?
How have the design and goals of the product changed since you first started working on it?
What are the biggest challenges in building a custom LLM for code?
What are the opportunities for specialization of the model architecture given the highly structured nature of the problem domain?
For users of Tabnine, how do you assess/monitor the accuracy of recommendations?
What are the feedback and reinforcement mechanisms for the model(s)?
What are the most interesting, innovative, or unexpected ways that you have seen Tabnine's LLM-powered coding assistant used?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on AI assisted development at Tabnine?
When is an AI developer assistant the wrong choice?
What do you have planned for the future of Tabnine?
Contact Info
LinkedIn (https://www.linkedin.com/in/eranyahav/?originalSubdomain=il)
Website (https://csaws.cs.technion.ac.il/~yahave/)
Parting Question
From your perspective, what is the biggest barrier to adoption of machine learning today?
Closing Announcements
Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ (https://www.pythonpodcast.com) covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast (https://www.themachinelearningpodcast.com) helps you go from idea to production with machine learning.
Visit the site (https://www.dataengineeringpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes.
If you've learned something or tried out a project from the show, then tell us about it! Email hosts@dataengineeringpodcast.com (mailto:hosts@dataengineeringpodcast.com) with your story.
To help other people find the show, please leave a review on Apple Podcasts (https://podcasts.apple.com/us/podcast/data-engineering-podcast/id1193040557) and tell your friends and co-workers.
Links
Tabnine (https://www.tabnine.com/)
Technion University (https://www.technion.ac.il/en/home-2/)
Program Synthesis (https://en.wikipedia.org/wiki/Program_synthesis)
Context Stuffing (http://gptprompts.wikidot.com/context-stuffing)
Elixir (https://elixir-lang.org/)
Dependency Injection (https://en.wikipedia.org/wiki/Dependency_injection)
COBOL (https://en.wikipedia.org/wiki/COBOL)
Verilog (https://en.wikipedia.org/wiki/Verilog)
MidJourney (https://www.midjourney.com/home)
The intro and outro music is from Hitman's Lovesong feat. Paola Graziano (https://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Tales_Of_A_Dead_Fish/Hitmans_Lovesong/) by The Freak Fandango Orchestra (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/)/CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0/)
Summary
Databases are the core of most applications, but they are often treated as inscrutable black boxes. When an application is slow, there is a good probability that the database needs some attention. In this episode Lukas Fittl shares some hard-won wisdom about the causes and solutions of many performance bottlenecks and the work that he is doing to shine some light on PostgreSQL to make it easier to understand how to keep it running smoothly.
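As a concrete example of the kind of first step discussed in this conversation, you can ask Postgres to explain how it executes a slow query. A small sketch using psycopg2, with a hypothetical DSN and table:

```python
# Fetch a query plan from Postgres with psycopg2. Note that EXPLAIN ANALYZE
# actually executes the statement, so use it carefully on writes.
import psycopg2

conn = psycopg2.connect("dbname=appdb user=app")  # hypothetical DSN
with conn.cursor() as cur:
    cur.execute(
        "EXPLAIN (ANALYZE, BUFFERS) "
        "SELECT * FROM orders WHERE customer_id = %s",
        (42,),
    )
    for (line,) in cur.fetchall():
        # Watch for Seq Scans on large tables and bad row estimates.
        print(line)
conn.close()
```

Tools like pganalyze (linked below) automate collecting and interpreting exactly this kind of plan and statistics data.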
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
Introducing RudderStack Profiles. RudderStack Profiles takes the SaaS guesswork and SQL grunt work out of building complete customer profiles so you can quickly ship actionable, enriched data to every downstream team. You specify the customer traits, then Profiles runs the joins and computations for you to create complete customer profiles. Get all of the details and try the new product today at dataengineeringpodcast.com/rudderstack (https://www.dataengineeringpodcast.com/rudderstack)
You shouldn't have to throw away the database to build with fast-changing data. You should be able to keep the familiarity of SQL and the proven architecture of cloud warehouses, but swap the decades-old batch computation model for an efficient incremental engine to get complex queries that are always up-to-date. With Materialize, you can! It’s the only true SQL streaming database built from the ground up to meet the needs of modern data products. Whether it’s real-time dashboarding and analytics, personalization and segmentation or automation and alerting, Materialize gives you the ability to work with fresh, correct, and scalable results — all in a familiar SQL interface. Go to dataengineeringpodcast.com/materialize (https://www.dataengineeringpodcast.com/materialize) today to get 2 weeks free!
Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst powers petabyte-scale SQL analytics fast, at a fraction of the cost of traditional methods, so that you can meet all your data needs ranging from AI to data applications to complete analytics. Trusted by teams of all sizes, including Comcast and Doordash, Starburst is a data lake analytics platform that delivers the adaptability and flexibility a lakehouse ecosystem promises. And Starburst does all of this on an open architecture with first-class support for Apache Iceberg, Delta Lake and Hudi, so you always maintain ownership of your data. Want to see Starburst in action? Go to dataengineeringpodcast.com/starburst (https://www.dataengineeringpodcast.com/starburst) and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino.
This episode is brought to you by Datafold – a testing automation platform for data engineers that finds data quality issues before the code and data are deployed to production. Datafold leverages data-diffing to compare production and development environments and column-level lineage to show you the exact impact of every code change on data, metrics, and BI tools, keeping your team productive and stakeholders happy. Datafold integrates with dbt, the modern data stack, and seamlessly plugs in your data CI for team-wide and automated testing. If you are migrating to a modern data stack, Datafold can also help you automate data and code validation to speed up the migration. Learn more about Datafold by visiting dataengineeringpodcast.com/datafold (https://www.dataengineeringpodcast.com/datafold)
Your host is Tobias Macey and today I'm interviewing Lukas Fittl about optimizing your database performance and tips for tuning Postgres
Interview
Introduction
How did you get involved in the area of data management?
What are the different ways that database performance problems impact the business?
What are the most common contributors to performance issues?
What are the useful signals that indicate performance challenges in the database?
For a given symptom, what are the steps that you recommend for determining the proximate cause?
What are the potential negative impacts to be aware of when tuning the configuration of your database?
How does the database engine influence the methods used to identify and resolve performance challenges?
Most of the database engines that are in common use today have been around for decades. How have the lessons learned from running these systems over the years influenced the ways to think about designing new engines or evolving the ones we have today?
What are the most interesting, innovative, or unexpected ways that you have seen to address database performance?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on databases?
What are your goals for the future of database engines?
Contact Info
LinkedIn (https://www.linkedin.com/in/lfittl/)
@LukasFittl (https://twitter.com/LukasFittl) on Twitter
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements
Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ (https://www.pythonpodcast.com) covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast (https://www.themachinelearningpodcast.com) helps you go from idea to production with machine learning.
Visit the site (https://www.dataengineeringpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes.
If you've learned something or tried out a project from the show, then tell us about it! Email hosts@dataengineeringpodcast.com (mailto:hosts@dataengineeringpodcast.com) with your story.
To help other people find the show, please leave a review on Apple Podcasts (https://podcasts.apple.com/us/podcast/data-engineering-podcast/id1193040557) and tell your friends and co-workers.
Links
pganalyze (https://pganalyze.com/)
Citus Data (https://www.citusdata.com/)
Podcast Episode (https://www.dataengineeringpodcast.com/citus-data-with-ozgun-erdogan-and-craig-kerstiens-episode-13/)
ORM == Object Relational Mapper (https://en.wikipedia.org/wiki/Object%E2%80%93relational_mapping)
N+1 Query (https://docs.sentry.io/product/issues/issue-details/performance-issues/n-one-queries/)
Autovacuum (https://www.postgresql.org/docs/current/routine-vacuuming.html#AUTOVACUUM)
Write-ahead Log (https://en.wikipedia.org/wiki/Write-ahead_logging)
pg_stat_io (https://pgpedia.info/p/pg_stat_io.html)
random_page_cost (https://postgresqlco.nf/doc/en/param/random_page_cost/)
pgvector (https://github.com/pgvector/pgvector)
Vector Database (https://en.wikipedia.org/wiki/Vector_database)
Ottertune (https://ottertune.com/)
Podcast Episode (https://www.dataengineeringpodcast.com/ottertune-database-performance-optimization-episode-197/)
Citus Extension (https://github.com/citusdata/citus)
Hydra (https://github.com/hydradatabase/hydra)
Clickhouse (https://clickhouse.tech/)
Podcast Episode (https://www.dataengineeringpodcast.com/clickhouse-data-warehouse-episode-88/)
MyISAM (https://en.wikipedia.org/wiki/MyISAM)
MyRocks (http://myrocks.io/)
InnoDB (https://en.wikipedia.org/wiki/InnoDB)
Great Expectations (https://greatexpectations.io/)
Podcast Episode (https://www.dataengineeringpodcast.com/great-expectations-data-contracts-episode-352)
OpenTelemetry (https://opentelemetry.io/)
The intro and outro music is from The Hug (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Love_death_and_a_drunken_monkey/04_-_The_Hug) by The Freak Fandango Orchestra (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/) / CC BY-SA (http://creativecommons.org/licenses/by-sa/3.0/)
Summary
Databases are the core of most applications, whether transactional or analytical. In recent years the selection of database products has exploded, making the critical decision of which engine(s) to use even more difficult. In this episode Tanya Bragin shares her experiences as a product manager for two major vendors and the lessons that she has learned about how teams should approach the process of tool selection.
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
Introducing RudderStack Profiles. RudderStack Profiles takes the SaaS guesswork and SQL grunt work out of building complete customer profiles so you can quickly ship actionable, enriched data to every downstream team. You specify the customer traits, then Profiles runs the joins and computations for you to create complete customer profiles. Get all of the details and try the new product today at dataengineeringpodcast.com/rudderstack (https://www.dataengineeringpodcast.com/rudderstack)
You shouldn't have to throw away the database to build with fast-changing data. You should be able to keep the familiarity of SQL and the proven architecture of cloud warehouses, but swap the decades-old batch computation model for an efficient incremental engine to get complex queries that are always up-to-date. With Materialize, you can! It’s the only true SQL streaming database built from the ground up to meet the needs of modern data products. Whether it’s real-time dashboarding and analytics, personalization and segmentation or automation and alerting, Materialize gives you the ability to work with fresh, correct, and scalable results — all in a familiar SQL interface. Go to dataengineeringpodcast.com/materialize (https://www.dataengineeringpodcast.com/materialize) today to get 2 weeks free!
This episode is brought to you by Datafold – a testing automation platform for data engineers that finds data quality issues before the code and data are deployed to production. Datafold leverages data-diffing to compare production and development environments and column-level lineage to show you the exact impact of every code change on data, metrics, and BI tools, keeping your team productive and stakeholders happy. Datafold integrates with dbt, the modern data stack, and seamlessly plugs in your data CI for team-wide and automated testing. If you are migrating to a modern data stack, Datafold can also help you automate data and code validation to speed up the migration. Learn more about Datafold by visiting dataengineeringpodcast.com/datafold (https://www.dataengineeringpodcast.com/datafold)
Data projects are notoriously complex. With multiple stakeholders to manage across varying backgrounds and toolchains, even simple reports can become unwieldy to maintain. Miro is your single pane of glass where everyone can discover, track, and collaborate on your organization's data. I especially like the ability to combine your technical diagrams with data documentation and dependency mapping, allowing your data engineers and data consumers to communicate seamlessly about your projects. Find simplicity in your most complex projects with Miro. Your first three Miro boards are free when you sign up today at dataengineeringpodcast.com/miro (https://www.dataengineeringpodcast.com/miro). That’s three free boards at dataengineeringpodcast.com/miro (https://www.dataengineeringpodcast.com/miro).
Your host is Tobias Macey and today I'm interviewing Tanya Bragin about her views on the database products market
Interview
Introduction
How did you get involved in the area of data management?
What are the aspects of the database market that keep you interested as a VP of product?
How have your experiences at Elastic informed your current work at Clickhouse?
What are the main product categories for databases today?
What are the industry trends that have the most impact on the development and growth of different product categories?
Which categories do you see growing the fastest?
When a team is selecting a database technology for a given task, what are the types of questions that they should be asking?
Transactional engines like Postgres, SQL Server, Oracle, etc. were long used as analytical databases as well. What is driving the broad adoption of columnar stores as a separate environment from transactional systems?
What are the inefficiencies/complexities that this introduces?
How can the database engine used for analytical systems work more closely with the transactional systems?
When building analytical systems there are numerous moving parts with intricate dependencies. What is the role of the database in simplifying observability of these applications?
What are the most interesting, innovative, or unexpected ways that you have seen Clickhouse used?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on database products?
What are your predictions for the future of the database market?
Contact Info
LinkedIn (https://www.linkedin.com/in/tbragin/)
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements
Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ (https://www.pythonpodcast.com) covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast (https://www.themachinelearningpodcast.com) helps you go from idea to production with machine learning.
Visit the site (https://www.dataengineeringpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes.
If you've learned something or tried out a project from the show, then tell us about it! Email hosts@dataengineeringpodcast.com (mailto:hosts@dataengineeringpodcast.com) with your story.
To help other people find the show, please leave a review on Apple Podcasts (https://podcasts.apple.com/us/podcast/data-engineering-podcast/id1193040557) and tell your friends and co-workers.
Links
Clickhouse (https://clickhouse.com/)
Podcast Episode (https://www.dataengineeringpodcast.com/clickhouse-data-warehouse-episode-88/)
Elastic (https://www.elastic.co/)
OLAP (https://en.wikipedia.org/wiki/Online_analytical_processing)
OLTP (https://en.wikipedia.org/wiki/Online_transaction_processing)
Graph Database (https://en.wikipedia.org/wiki/Graph_database)
Vector Database (https://en.wikipedia.org/wiki/Vector_database)
Trino (https://trino.io/)
Presto (https://prestodb.io/)
Foreign data wrapper (https://wiki.postgresql.org/wiki/Foreign_data_wrappers)
dbt (https://www.getdbt.com/)
Podcast Episode (https://www.dataengineeringpodcast.com/dbt-data-analytics-episode-81/)
OpenTelemetry (https://opentelemetry.io/)
Iceberg (https://iceberg.apache.org/)
Podcast Episode (https://www.dataengineeringpodcast.com/tabular-iceberg-lakehouse-tables-episode-363)
Parquet (https://parquet.apache.org/)
The intro and outro music is from The Hug (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Love_death_and_a_drunken_monkey/04_-_The_Hug) by The Freak Fandango Orchestra (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/) / CC BY-SA (http://creativecommons.org/licenses/by-sa/3.0/)
Summary
The primary application of data has moved beyond analytics. With the broader audience comes the need to present data in a more approachable format. This has led to the broad adoption of data products being the delivery mechanism for information. In this episode Ranjith Raghunath shares his thoughts on how to build a strategy for the development, delivery, and evolution of data products.
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
Introducing RudderStack Profiles. RudderStack Profiles takes the SaaS guesswork and SQL grunt work out of building complete customer profiles so you can quickly ship actionable, enriched data to every downstream team. You specify the customer traits, then Profiles runs the joins and computations for you to create complete customer profiles. Get all of the details and try the new product today at dataengineeringpodcast.com/rudderstack (https://www.dataengineeringpodcast.com/rudderstack)
You shouldn't have to throw away the database to build with fast-changing data. You should be able to keep the familiarity of SQL and the proven architecture of cloud warehouses, but swap the decades-old batch computation model for an efficient incremental engine to get complex queries that are always up-to-date. With Materialize, you can! It’s the only true SQL streaming database built from the ground up to meet the needs of modern data products. Whether it’s real-time dashboarding and analytics, personalization and segmentation or automation and alerting, Materialize gives you the ability to work with fresh, correct, and scalable results — all in a familiar SQL interface. Go to dataengineeringpodcast.com/materialize (https://www.dataengineeringpodcast.com/materialize) today to get 2 weeks free!
As more people start using AI for projects, two things are clear: It’s a rapidly advancing field, but it’s tough to navigate. How can you get the best results for your use case? Instead of being subjected to a bunch of buzzword bingo, hear directly from pioneers in the developer and data science space on how they use graph tech to build AI-powered apps. Attend the dev and ML talks at NODES 2023, a free online conference on October 26 featuring some of the brightest minds in tech. Check out the agenda and register today at Neo4j.com/NODES (https://Neo4j.com/NODES).
This episode is brought to you by Datafold – a testing automation platform for data engineers that finds data quality issues before the code and data are deployed to production. Datafold leverages data-diffing to compare production and development environments and column-level lineage to show you the exact impact of every code change on data, metrics, and BI tools, keeping your team productive and stakeholders happy. Datafold integrates with dbt, the modern data stack, and seamlessly plugs in your data CI for team-wide and automated testing. If you are migrating to a modern data stack, Datafold can also help you automate data and code validation to speed up the migration. Learn more about Datafold by visiting dataengineeringpodcast.com/datafold (https://www.dataengineeringpodcast.com/datafold)
Your host is Tobias Macey and today I'm interviewing Ranjith Raghunath about tactical elements of a data product strategy
Interview
Introduction
How did you get involved in the area of data management?
Can you describe what is encompassed by the idea of a data product strategy?
Which roles in an organization need to be involved in the planning and implementation of that strategy?
order of operations:
strategy -> platform design -> implementation/adoption
platform implementation -> product strategy -> interface development
managing grain of data in products
team organization to support product development/deployment
customer communications - what questions to ask? requirements gathering, helping to understand "the art of the possible"
What are the most interesting, innovative, or unexpected ways that you have seen organizations approach data product strategies?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on defining and implementing data product strategies?
When is a data product strategy overkill?
What are some additional resources that you recommend for listeners to direct their thinking and learning about data product strategy?
Contact Info
LinkedIn (https://www.linkedin.com/in/ranjith-raghunath/)
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements
Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ (https://www.pythonpodcast.com) covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast (https://www.themachinelearningpodcast.com) helps you go from idea to production with machine learning.
Visit the site (https://www.dataengineeringpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes.
If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com (mailto:hosts@dataengineeringpodcast.com) with your story.
To help other people find the show please leave a review on Apple Podcasts (https://podcasts.apple.com/us/podcast/data-engineering-podcast/id1193040557) and tell your friends and co-workers
Links
CXData Labs (https://www.cxdatalabs.com/)
Dimensional Modeling (https://en.wikipedia.org/wiki/Dimensional_modeling)
The intro and outro music is from The Hug (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Love_death_and_a_drunken_monkey/04_-_The_Hug) by The Freak Fandango Orchestra (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/) / CC BY-SA (http://creativecommons.org/licenses/by-sa/3.0/)
Summary
Building streaming applications has gotten substantially easier over the past several years. Despite this, it is still operationally challenging to deploy and maintain your own stream processing infrastructure. Decodable was built with a mission of eliminating all of the painful aspects of developing and deploying stream processing systems for engineering teams. In this episode Eric Sammer discusses why more companies are including real-time capabilities in their products and the ways that Decodable makes it faster and easier.
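For a concrete sense of the SQL-over-streams model discussed in this episode, here is a minimal sketch using the open source Apache Flink Table API that Decodable builds on. This is illustrative Flink usage, not Decodable's own API, and the table definitions are invented for the example:

```python
from pyflink.table import EnvironmentSettings, TableEnvironment

# A streaming TableEnvironment runs SQL continuously over unbounded inputs
env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

# An illustrative source that fabricates order events at a steady rate
env.execute_sql("""
    CREATE TABLE orders (
        order_id BIGINT,
        amount DOUBLE
    ) WITH (
        'connector' = 'datagen',
        'rows-per-second' = '5'
    )
""")

# A console sink, standing in for Kafka, Redpanda, Kinesis, etc.
env.execute_sql("""
    CREATE TABLE large_orders (
        order_id BIGINT,
        amount DOUBLE
    ) WITH ('connector' = 'print')
""")

# The transformation itself: a continuous query that emits matches as they arrive
env.execute_sql("""
    INSERT INTO large_orders
    SELECT order_id, amount FROM orders WHERE amount > 100
""").wait()
```

Unlike a batch job, the INSERT never terminates on its own; operating that long-running process is exactly the burden a managed platform takes on.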
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
Introducing RudderStack Profiles. RudderStack Profiles takes the SaaS guesswork and SQL grunt work out of building complete customer profiles so you can quickly ship actionable, enriched data to every downstream team. You specify the customer traits, then Profiles runs the joins and computations for you to create complete customer profiles. Get all of the details and try the new product today at dataengineeringpodcast.com/rudderstack (https://www.dataengineeringpodcast.com/rudderstack)
This episode is brought to you by Datafold – a testing automation platform for data engineers that finds data quality issues before the code and data are deployed to production. Datafold leverages data-diffing to compare production and development environments and column-level lineage to show you the exact impact of every code change on data, metrics, and BI tools, keeping your team productive and stakeholders happy. Datafold integrates with dbt, the modern data stack, and seamlessly plugs in your data CI for team-wide and automated testing. If you are migrating to a modern data stack, Datafold can also help you automate data and code validation to speed up the migration. Learn more about Datafold by visiting dataengineeringpodcast.com/datafold (https://www.dataengineeringpodcast.com/datafold)
You shouldn't have to throw away the database to build with fast-changing data. You should be able to keep the familiarity of SQL and the proven architecture of cloud warehouses, but swap the decades-old batch computation model for an efficient incremental engine to get complex queries that are always up-to-date. With Materialize, you can! It’s the only true SQL streaming database built from the ground up to meet the needs of modern data products. Whether it’s real-time dashboarding and analytics, personalization and segmentation or automation and alerting, Materialize gives you the ability to work with fresh, correct, and scalable results — all in a familiar SQL interface. Go to dataengineeringpodcast.com/materialize (https://www.dataengineeringpodcast.com/materialize) today to get 2 weeks free!
As more people start using AI for projects, two things are clear: It’s a rapidly advancing field, but it’s tough to navigate. How can you get the best results for your use case? Instead of being subjected to a bunch of buzzword bingo, hear directly from pioneers in the developer and data science space on how they use graph tech to build AI-powered apps. Attend the dev and ML talks at NODES 2023, a free online conference on October 26 featuring some of the brightest minds in tech. Check out the agenda and register today at Neo4j.com/NODES (https://Neo4j.com/NODES).
Your host is Tobias Macey and today I'm interviewing Eric Sammer about starting your stream processing journey with Decodable
Interview
Introduction
How did you get involved in the area of data management?
Can you describe what Decodable is and the story behind it?
What are the notable changes to the Decodable platform since we last spoke? (October 2021)
What are the industry shifts that have influenced the product direction?
What are the problems that customers are trying to solve when they come to Decodable?
When you launched, your focus was on SQL transformations of streaming data. What was the process for adding full Java support in addition to SQL?
What are the developer experience challenges that are particular to working with streaming data?
How have you worked to address that in the Decodable platform and interfaces?
As you evolve the technical and product direction, what is your heuristic for balancing the unification of interfaces and system integration against the ability to swap different components or interfaces as new technologies are introduced?
What are the most interesting, innovative, or unexpected ways that you have seen Decodable used?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on Decodable?
When is Decodable the wrong choice?
What do you have planned for the future of Decodable?
Contact Info
esammer (https://github.com/esammer) on GitHub
LinkedIn (https://www.linkedin.com/in/esammer/)
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements
Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ (https://www.pythonpodcast.com) covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast (https://www.themachinelearningpodcast.com) helps you go from idea to production with machine learning.
Visit the site (https://www.dataengineeringpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes.
If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com (mailto:hosts@dataengineeringpodcast.com) with your story.
To help other people find the show please leave a review on Apple Podcasts (https://podcasts.apple.com/us/podcast/data-engineering-podcast/id1193040557) and tell your friends and co-workers
Links
Decodable (https://www.decodable.co/)
Podcast Episode (https://www.dataengineeringpodcast.com/decodable-streaming-data-pipelines-sql-episode-233/)
Understanding the Apache Flink Journey (https://www.decodable.co/blog/understanding-the-apache-flink-journey?utm_source=podcast&utm_medium=paid&utm_campaign=data_engineering_podcast&utm_content=understanding_the_flink_journey)
Flink (https://flink.apache.org/)
Podcast Episode (https://www.dataengineeringpodcast.com/apache-flink-with-fabian-hueske-episode-57/)
Debezium (https://debezium.io/)
Podcast Episode (https://www.dataengineeringpodcast.com/debezium-change-data-capture-episode-114/)
Kafka (https://kafka.apache.org/)
Redpanda (https://redpanda.com/)
Podcast Episode (https://www.dataengineeringpodcast.com/vectorized-red-panda-streaming-data-episode-152/)
Kinesis (https://aws.amazon.com/kinesis/)
PostgreSQL (https://www.postgresql.org/)
Podcast Episode (https://www.dataengineeringpodcast.com/postgresql-with-jonathan-katz-episode-42/)
Snowflake (https://www.snowflake.com/en/)
Podcast Episode (https://www.dataengineeringpodcast.com/snowflakedb-cloud-data-warehouse-episode-110/)
Databricks (https://www.databricks.com/)
Startree (https://startree.ai/)
Pinot (https://pinot.apache.org/)
Podcast Episode (https://www.dataengineeringpodcast.com/pinot-embedded-analytics-episode-273/)
Rockset (https://rockset.com/)
Podcast Episode (https://www.dataengineeringpodcast.com/rockset-serverless-analytics-episode-101/)
Druid (https://druid.apache.org/)
InfluxDB (https://www.influxdata.com/)
Samza (https://samza.apache.org/)
Storm (https://storm.apache.org/)
Pulsar (https://pulsar.apache.org/)
Podcast Episode (https://www.dataengineeringpodcast.com/pulsar-fast-and-scalable-messaging-with-rajan-dhabalia-and-matteo-merli-episode-17)
ksqlDB (https://ksqldb.io/)
Podcast Episode (https://www.dataengineeringpodcast.com/ksqldb-kafka-stream-processing-episode-122/)
dbt (https://www.getdbt.com/)
GitHub Actions (https://github.com/features/actions)
Airbyte (https://airbyte.com/)
Singer (https://www.singer.io/)
Splunk (https://www.splunk.com/)
Outbox Pattern (https://debezium.io/blog/2019/02/19/reliable-microservices-data-exchange-with-the-outbox-pattern/)
The intro and outro music is from The Hug (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Love_death_and_a_drunken_monkey/04_-_The_Hug) by The Freak Fandango Orchestra (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/) / CC BY-SA (http://creativecommons.org/licenses/by-sa/3.0/)
Summary
The insurance industry is notoriously opaque and hard to navigate. Max Cho found that fact frustrating enough that he decided to build a business around making policy selection more navigable. In this episode he shares his journey of data collection and analysis and the challenges of automating an intentionally manual industry.
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
Introducing RudderStack Profiles. RudderStack Profiles takes the SaaS guesswork and SQL grunt work out of building complete customer profiles so you can quickly ship actionable, enriched data to every downstream team. You specify the customer traits, then Profiles runs the joins and computations for you to create complete customer profiles. Get all of the details and try the new product today at dataengineeringpodcast.com/rudderstack (https://www.dataengineeringpodcast.com/rudderstack)
This episode is brought to you by Datafold – a testing automation platform for data engineers that finds data quality issues before the code and data are deployed to production. Datafold leverages data-diffing to compare production and development environments and column-level lineage to show you the exact impact of every code change on data, metrics, and BI tools, keeping your team productive and stakeholders happy. Datafold integrates with dbt, the modern data stack, and seamlessly plugs in your data CI for team-wide and automated testing. If you are migrating to a modern data stack, Datafold can also help you automate data and code validation to speed up the migration. Learn more about Datafold by visiting dataengineeringpodcast.com/datafold (https://www.dataengineeringpodcast.com/datafold)
As more people start using AI for projects, two things are clear: It’s a rapidly advancing field, but it’s tough to navigate. How can you get the best results for your use case? Instead of being subjected to a bunch of buzzword bingo, hear directly from pioneers in the developer and data science space on how they use graph tech to build AI-powered apps. Attend the dev and ML talks at NODES 2023, a free online conference on October 26 featuring some of the brightest minds in tech. Check out the agenda and register today at Neo4j.com/NODES (https://Neo4j.com/NODES).
You shouldn't have to throw away the database to build with fast-changing data. You should be able to keep the familiarity of SQL and the proven architecture of cloud warehouses, but swap the decades-old batch computation model for an efficient incremental engine to get complex queries that are always up-to-date. With Materialize, you can! It’s the only true SQL streaming database built from the ground up to meet the needs of modern data products. Whether it’s real-time dashboarding and analytics, personalization and segmentation or automation and alerting, Materialize gives you the ability to work with fresh, correct, and scalable results — all in a familiar SQL interface. Go to dataengineeringpodcast.com/materialize (https://www.dataengineeringpodcast.com/materialize) today to get 2 weeks free!
Your host is Tobias Macey and today I'm interviewing Max Cho about the wild world of insurance companies and the challenges of collecting quality data for this opaque industry
Interview
Introduction
How did you get involved in the area of data management?
Can you describe what CoverageCat is and the story behind it?
What are the different sources of data that you work with?
What are the most challenging aspects of collecting that data?
Can you describe the formats and characteristics (the 3 Vs: volume, velocity, and variety) of that data?
What are some of the ways that the operational model of insurance companies has contributed to the industry's opacity from a data perspective?
Can you describe how you have architected your data platform?
How have the design and goals changed since you first started working on it?
What are you optimizing for in your selection and implementation process?
What are the sharp edges/weak points that you worry about in your existing data flows?
How do you guard against those flaws in your day-to-day operations?
What are the most interesting, innovative, or unexpected ways that you have seen your data sets used?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on insurance industry data?
When is a purely statistical view of insurance the wrong approach?
What do you have planned for the future of CoverageCat's data stack?
Contact Info
LinkedIn (https://www.linkedin.com/in/maxrcho/)
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements
Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ (https://www.pythonpodcast.com) covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast (https://www.themachinelearningpodcast.com) helps you go from idea to production with machine learning.
Visit the site (https://www.dataengineeringpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes.
If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com (mailto:hosts@dataengineeringpodcast.com) with your story.
To help other people find the show please leave a review on Apple Podcasts (https://podcasts.apple.com/us/podcast/data-engineering-podcast/id1193040557) and tell your friends and co-workers
Links
CoverageCat (https://www.coveragecat.com/)
Actuarial Model (https://en.wikipedia.org/wiki/Actuarial_science)
The intro and outro music is from The Hug (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Love_death_and_a_drunken_monkey/04_-_The_Hug) by The Freak Fandango Orchestra (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/) / CC BY-SA (http://creativecommons.org/licenses/by-sa/3.0/)
Summary
Artificial intelligence applications require substantial amounts of high-quality data, which is provided through ETL pipelines. Now that AI has reached the level of sophistication seen in the various generative models, it is being used to build new ETL workflows. In this episode Jay Mishra shares his experiences and insights building ETL pipelines with the help of generative AI.
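As a rough sketch of one pattern from this conversation, using a generative model to draft transformation code while keeping a human in the review loop, the following uses the OpenAI Python client; the schemas, prompt, and model choice are illustrative assumptions rather than a prescribed workflow:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical source and target schemas for the mapping we want drafted
schema = """
source: raw_orders(order_id int, cust text, amt_usd text, created text)
target: orders(order_id int, customer_name text, amount numeric, created_at timestamp)
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You write portable ANSI SQL only."},
        {"role": "user", "content": f"Write an INSERT ... SELECT that maps:\n{schema}"},
    ],
)

# The draft still gets reviewed and tested before it goes anywhere near production
print(response.choices[0].message.content)
```

The emitted SQL is a starting point for review, which matches the episode's framing of AI as an assistant in the ETL process rather than a replacement for the engineer.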
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
Introducing RudderStack Profiles. RudderStack Profiles takes the SaaS guesswork and SQL grunt work out of building complete customer profiles so you can quickly ship actionable, enriched data to every downstream team. You specify the customer traits, then Profiles runs the joins and computations for you to create complete customer profiles. Get all of the details and try the new product today at dataengineeringpodcast.com/rudderstack (https://www.dataengineeringpodcast.com/rudderstack)
This episode is brought to you by Datafold – a testing automation platform for data engineers that finds data quality issues before the code and data are deployed to production. Datafold leverages data-diffing to compare production and development environments and column-level lineage to show you the exact impact of every code change on data, metrics, and BI tools, keeping your team productive and stakeholders happy. Datafold integrates with dbt, the modern data stack, and seamlessly plugs in your data CI for team-wide and automated testing. If you are migrating to a modern data stack, Datafold can also help you automate data and code validation to speed up the migration. Learn more about Datafold by visiting dataengineeringpodcast.com/datafold (https://www.dataengineeringpodcast.com/datafold)
You shouldn't have to throw away the database to build with fast-changing data. You should be able to keep the familiarity of SQL and the proven architecture of cloud warehouses, but swap the decades-old batch computation model for an efficient incremental engine to get complex queries that are always up-to-date. With Materialize, you can! It’s the only true SQL streaming database built from the ground up to meet the needs of modern data products. Whether it’s real-time dashboarding and analytics, personalization and segmentation or automation and alerting, Materialize gives you the ability to work with fresh, correct, and scalable results — all in a familiar SQL interface. Go to dataengineeringpodcast.com/materialize (https://www.dataengineeringpodcast.com/materialize) today to get 2 weeks free!
As more people start using AI for projects, two things are clear: It’s a rapidly advancing field, but it’s tough to navigate. How can you get the best results for your use case? Instead of being subjected to a bunch of buzzword bingo, hear directly from pioneers in the developer and data science space on how they use graph tech to build AI-powered apps. Attend the dev and ML talks at NODES 2023, a free online conference on October 26 featuring some of the brightest minds in tech. Check out the agenda and register at Neo4j.com/NODES (https://neo4j.com/nodes).
Your host is Tobias Macey and today I'm interviewing Jay Mishra about the applications for generative AI in the ETL process
Interview
Introduction
How did you get involved in the area of data management?
What are the different aspects/types of ETL that you are seeing generative AI applied to?
What kind of impact are you seeing in terms of time spent/quality of output/etc.?
What kinds of projects are most likely to benefit from the application of generative AI?
Can you describe what a typical workflow of using AI to build ETL workflows looks like?
What are some of the types of errors that you are likely to experience from the AI?
Once the pipeline is defined, what does the ongoing maintenance look like?
Is the AI required to operate within the pipeline in perpetuity?
For individuals/teams/organizations who are experimenting with AI in their data engineering workflows, what are the concerns/questions that they are trying to address?
What are the most interesting, innovative, or unexpected ways that you have seen generative AI used in ETL workflows?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on ETL and generative AI?
When is AI the wrong choice for ETL applications?
What are your predictions for future applications of AI in ETL and other data engineering practices?
Contact Info
LinkedIn (https://www.linkedin.com/in/jaymishra/)
@MishraJay (https://twitter.com/MishraJay) on Twitter
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements
Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ (https://www.pythonpodcast.com) covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast (https://www.themachinelearningpodcast.com) helps you go from idea to production with machine learning.
Visit the site (https://www.dataengineeringpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes.
If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com (mailto:hosts@dataengineeringpodcast.com) with your story.
To help other people find the show please leave a review on Apple Podcasts (https://podcasts.apple.com/us/podcast/data-engineering-podcast/id1193040557) and tell your friends and co-workers
Links
Astera (https://www.astera.com/)
Data Vault (https://en.wikipedia.org/wiki/Data_vault_modeling)
Star Schema (https://en.wikipedia.org/wiki/Star_schema)
OpenAI (https://openai.com/)
GPT == Generative Pre-trained Transformer (https://en.wikipedia.org/wiki/Generative_pre-trained_transformer)
Entity Resolution (https://en.wikipedia.org/wiki/Record_linkage)
LLAMA (https://en.wikipedia.org/wiki/LLaMA)
The intro and outro music is from The Hug (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Love_death_and_a_drunken_monkey/04_-_The_Hug) by The Freak Fandango Orchestra (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/) / CC BY-SA (http://creativecommons.org/licenses/by-sa/3.0/)
Summary
The rapid growth of machine learning, especially large language models, has led to a commensurate growth in the need to store and compare vectors. In this episode Louis Brandy discusses the applications for vector search capabilities both in and outside of AI, as well as the challenges of maintaining real-time indexes of vector data.
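As background for the discussion, vector search reduces to finding the stored vectors nearest to a query vector. A brute-force sketch shows the core idea; the specialized real-time indexes covered in this episode exist precisely because this exact scan becomes too slow at scale (the sizes below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for stored embeddings: 10,000 vectors of dimension 384, unit-normalized
index = rng.normal(size=(10_000, 384))
index /= np.linalg.norm(index, axis=1, keepdims=True)

def search(query: np.ndarray, k: int = 5) -> np.ndarray:
    """Exact k-nearest-neighbor search by cosine similarity."""
    q = query / np.linalg.norm(query)
    scores = index @ q                 # dot product == cosine similarity on unit vectors
    return np.argsort(-scores)[:k]     # indices of the k closest stored vectors

print(search(rng.normal(size=384)))
```

Every query touches every stored vector, which is why approximate indexes trade a little recall for orders of magnitude in speed, and why keeping those indexes fresh under streaming updates is the hard part discussed here.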
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
Introducing RudderStack Profiles. RudderStack Profiles takes the SaaS guesswork and SQL grunt work out of building complete customer profiles so you can quickly ship actionable, enriched data to every downstream team. You specify the customer traits, then Profiles runs the joins and computations for you to create complete customer profiles. Get all of the details and try the new product today at dataengineeringpodcast.com/rudderstack (https://www.dataengineeringpodcast.com/rudderstack)
This episode is brought to you by Datafold – a testing automation platform for data engineers that finds data quality issues before the code and data are deployed to production. Datafold leverages data-diffing to compare production and development environments and column-level lineage to show you the exact impact of every code change on data, metrics, and BI tools, keeping your team productive and stakeholders happy. Datafold integrates with dbt, the modern data stack, and seamlessly plugs in your data CI for team-wide and automated testing. If you are migrating to a modern data stack, Datafold can also help you automate data and code validation to speed up the migration. Learn more about Datafold by visiting dataengineeringpodcast.com/datafold (https://www.dataengineeringpodcast.com/datafold)
You shouldn't have to throw away the database to build with fast-changing data. You should be able to keep the familiarity of SQL and the proven architecture of cloud warehouses, but swap the decades-old batch computation model for an efficient incremental engine to get complex queries that are always up-to-date. With Materialize, you can! It’s the only true SQL streaming database built from the ground up to meet the needs of modern data products. Whether it’s real-time dashboarding and analytics, personalization and segmentation or automation and alerting, Materialize gives you the ability to work with fresh, correct, and scalable results — all in a familiar SQL interface. Go to dataengineeringpodcast.com/materialize (https://www.dataengineeringpodcast.com/materialize) today to get 2 weeks free!
If you’re a data person, you probably have to jump between different tools to run queries, build visualizations, write Python, and send around a lot of spreadsheets and CSV files. Hex brings everything together. Its powerful notebook UI lets you analyze data in SQL, Python, or no-code, in any combination, and work together with live multiplayer and version control. And now, Hex’s magical AI tools can generate queries and code, create visualizations, and even kickstart a whole analysis for you – all from natural language prompts. It’s like having an analytics co-pilot built right into where you’re already doing your work. Then, when you’re ready to share, you can use Hex’s drag-and-drop app builder to configure beautiful reports or dashboards that anyone can use. Join the hundreds of data teams like Notion, AllTrails, Loom, Mixpanel and Algolia using Hex every day to make their work more impactful. Sign up today at dataengineeringpodcast.com/hex (https://www.dataengineeringpodcast.com/hex) to get a 30-day free trial of the Hex Team plan!
Your host is Tobias Macey and today I'm interviewing Louis Brandy about building vector indexes in real-time for analytics and AI applications
Interview
Introduction
How did you get involved in the area of data management?
Can you describe what vector search is and how it differs from other search technologies?
What are the technical challenges related to providing vector search?
What are the applications for vector search that merit the added complexity?
Vector databases have been gaining a lot of attention recently with the proliferation of LLM applications. Is a dedicated database technology required to support vector indexes/vector search queries?
What are the use cases for native vector data types that are separate from AI?
With the increasing usage of vectors for data and AI/ML applications, who do you typically see as the owner of that problem space? (e.g. data engineers, ML engineers, data scientists, etc.)
For teams who are investing in vector search, what are the architectural considerations that they need to be aware of?
How does it impact the data pipeline strategies/topologies used?
What are the complexities that need to be addressed when updating vector data in a real-time/streaming fashion?
How does that influence the client strategies that are querying that data?
What are the most interesting, innovative, or unexpected ways that you have seen vector search used?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on vector search applications?
When is vector search the wrong choice?
What do you see as future potential applications for vector indexes/vector search?
Contact Info
LinkedIn (https://www.linkedin.com/in/lbrandy/)
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements
Thank you for listening! Don't forget to check out our other shows. The Machine Learning Podcast (https://www.themachinelearningpodcast.com) helps you go from idea to production with machine learning. Podcast.__init__ (https://www.pythonpodcast.com) covers the Python language, its community, and the innovative ways it is being used.
Visit the site (https://www.dataengineeringpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes.
If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com (mailto:hosts@dataengineeringpodcast.com) with your story.
To help other people find the show please leave a review on Apple Podcasts (https://podcasts.apple.com/us/podcast/data-engineering-podcast/id1193040557) and tell your friends and co-workers
Links
Rockset (https://rockset.com/)
Podcast Episode (https://www.dataengineeringpodcast.com/rockset-serverless-analytics-episode-101/)
Vector Index (https://www.datastax.com/guides/what-is-a-vector-index)
Vector Search (https://www.datastax.com/guides/what-is-vector-search)
Rockset Implementation Explanation (https://rockset.com/videos/vector-search-architecture/)
Vector Space (https://en.wikipedia.org/wiki/Vector_space)
Euclidean Distance (https://en.wikipedia.org/wiki/Euclidean_distance)
OLAP == Online Analytical Processing (https://en.wikipedia.org/wiki/Online_analytical_processing)
OLTP == Online Transaction Processing (https://en.wikipedia.org/wiki/Online_transaction_processing)
The intro and outro music is from The Hug (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Love_death_and_a_drunken_monkey/04_-_The_Hug) by The Freak Fandango Orchestra (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/) / CC BY-SA (http://creativecommons.org/licenses/by-sa/3.0/)
Summary
A significant amount of time in data engineering is dedicated to building connections and semantic meaning around pieces of information. Linked data technologies provide a means of tightly coupling metadata with raw information. In this episode Brian Platz explains how JSON-LD can be used as a shared representation of linked data for building semantic data products.
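For a concrete sense of the format, here is a minimal JSON-LD record assembled in Python using schema.org vocabulary; the identifiers are illustrative:

```python
import json

# The @context maps plain keys onto shared vocabulary URIs, so independently
# produced records can refer to the same entities unambiguously.
record = {
    "@context": {
        "name": "https://schema.org/name",
        "manufacturer": "https://schema.org/manufacturer",
    },
    "@id": "https://example.com/products/42",        # hypothetical identifier
    "@type": "https://schema.org/Product",
    "name": "Widget",
    "manufacturer": {"@id": "https://example.com/orgs/acme"},  # a link, not a copy
}

print(json.dumps(record, indent=2))
```

Because the manufacturer field is a reference rather than an embedded value, records from different producers can be joined into a graph, which is the linkage behavior explored in the interview.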
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
This episode is brought to you by Datafold – a testing automation platform for data engineers that finds data quality issues before the code and data are deployed to production. Datafold leverages data-diffing to compare production and development environments and column-level lineage to show you the exact impact of every code change on data, metrics, and BI tools, keeping your team productive and stakeholders happy. Datafold integrates with dbt, the modern data stack, and seamlessly plugs in your data CI for team-wide and automated testing. If you are migrating to a modern data stack, Datafold can also help you automate data and code validation to speed up the migration. Learn more about Datafold by visiting dataengineeringpodcast.com/datafold (https://www.dataengineeringpodcast.com/datafold)
Introducing RudderStack Profiles. RudderStack Profiles takes the SaaS guesswork and SQL grunt work out of building complete customer profiles so you can quickly ship actionable, enriched data to every downstream team. You specify the customer traits, then Profiles runs the joins and computations for you to create complete customer profiles. Get all of the details and try the new product today at dataengineeringpodcast.com/rudderstack (https://www.dataengineeringpodcast.com/rudderstack)
You shouldn't have to throw away the database to build with fast-changing data. You should be able to keep the familiarity of SQL and the proven architecture of cloud warehouses, but swap the decades-old batch computation model for an efficient incremental engine to get complex queries that are always up-to-date. With Materialize, you can! It’s the only true SQL streaming database built from the ground up to meet the needs of modern data products. Whether it’s real-time dashboarding and analytics, personalization and segmentation or automation and alerting, Materialize gives you the ability to work with fresh, correct, and scalable results — all in a familiar SQL interface. Go to dataengineeringpodcast.com/materialize (https://www.dataengineeringpodcast.com/materialize) today to get 2 weeks free!
If you’re a data person, you probably have to jump between different tools to run queries, build visualizations, write Python, and send around a lot of spreadsheets and CSV files. Hex brings everything together. Its powerful notebook UI lets you analyze data in SQL, Python, or no-code, in any combination, and work together with live multiplayer and version control. And now, Hex’s magical AI tools can generate queries and code, create visualizations, and even kickstart a whole analysis for you – all from natural language prompts. It’s like having an analytics co-pilot built right into where you’re already doing your work. Then, when you’re ready to share, you can use Hex’s drag-and-drop app builder to configure beautiful reports or dashboards that anyone can use. Join the hundreds of data teams like Notion, AllTrails, Loom, Mixpanel and Algolia using Hex every day to make their work more impactful. Sign up today at dataengineeringpodcast.com/hex (https://www.dataengineeringpodcast.com/hex) to get a 30-day free trial of the Hex Team plan!
Your host is Tobias Macey and today I'm interviewing Brian Platz about using JSON-LD for building linked-data products
Interview
Introduction
How did you get involved in the area of data management?
Can you describe what the term "linked data product" means and some examples of when you might build one?
What is the overlap between knowledge graphs and "linked data products"?
What is JSON-LD?
What are the domains in which it is typically used?
How does it assist in developing linked data products?
What are the characteristics that distinguish a knowledge graph from a linked data product?
What are the layers/stages of applications and data that can/should incorporate JSON-LD as the representation for records and events?
What is the level of native support/compatibility that you see for JSON-LD in data systems?
What are the modeling exercises that are necessary to ensure useful and appropriate linkages of different records within and between products and organizations?
Can you describe the workflow for building autonomous linkages across data assets that are modeled as JSON-LD?
What are the most interesting, innovative, or unexpected ways that you have seen JSON-LD used for data workflows?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on linked data products?
When is JSON-LD the wrong choice?
What are the future directions that you would like to see for JSON-LD and linked data in the data ecosystem?
Contact Info
LinkedIn (https://www.linkedin.com/in/brianplatz/)
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements
Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ (https://www.pythonpodcast.com) covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast (https://www.themachinelearningpodcast.com) helps you go from idea to production with machine learning.
Visit the site (https://www.dataengineeringpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes.
If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com (mailto:hosts@dataengineeringpodcast.com) with your story.
To help other people find the show please leave a review on Apple Podcasts (https://podcasts.apple.com/us/podcast/data-engineering-podcast/id1193040557) and tell your friends and co-workers
Links
Fluree (https://flur.ee/)
JSON-LD (https://json-ld.org/)
Knowledge Graph (https://en.wikipedia.org/wiki/Knowledge_graph)
Adjacency List (https://en.wikipedia.org/wiki/Adjacency_list)
RDF == Resource Description Framework (https://www.w3.org/RDF/)
Semantic Web (https://en.wikipedia.org/wiki/Semantic_Web)
Open Graph (https://ogp.me/)
Schema.org (https://schema.org/)
RDF Triple (https://en.wikipedia.org/wiki/Semantic_triple)
IDMP == Identification of Medicinal Products (https://www.fda.gov/industry/fda-data-standards-advisory-board/identification-medicinal-products-idmp)
FIBO == Financial Industry Business Ontology (https://spec.edmcouncil.org/fibo/)
OWL Standard (https://www.w3.org/OWL/)
NP-Hard (https://en.wikipedia.org/wiki/NP-hardness)
Forward-Chaining Rules (https://en.wikipedia.org/wiki/Forward_chaining)
SHACL == Shapes Constraint Language (https://www.w3.org/TR/shacl/)
Zero Knowledge Cryptography (https://en.wikipedia.org/wiki/Zero-knowledge_proof)
Turtle Serialization (https://www.w3.org/TR/turtle/)
The intro and outro music is from The Hug (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Love_death_and_a_drunken_monkey/04_-_The_Hug) by The Freak Fandango Orchestra (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/) / CC BY-SA (http://creativecommons.org/licenses/by-sa/3.0/)
Summary
Data systems are inherently complex and often require integration of multiple technologies. Orchestrators are centralized utilities that control the execution and sequencing of interdependent operations. This offers a single location for managing visibility and error handling so that data platform engineers can keep that complexity in check. In this episode Nick Schrock, creator of Dagster, shares his perspective on the state of data orchestration technology and how to apply it in your own environment.
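For a flavor of the software-defined assets model that comes up in this conversation, here is a minimal Dagster sketch; the asset names and logic are invented for illustration:

```python
from dagster import Definitions, asset

@asset
def raw_users():
    """Upstream asset; in practice this would pull from a source system."""
    return [{"id": 1, "active": True}, {"id": 2, "active": False}]

@asset
def active_users(raw_users):
    """Downstream asset; Dagster infers the dependency from the parameter name."""
    return [u for u in raw_users if u["active"]]

defs = Definitions(assets=[raw_users, active_users])
```

Because the dependency graph is declared in code, the orchestrator knows active_users must materialize after raw_users, and can surface lineage, scheduling, and failure handling around that graph.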
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
Introducing RudderStack Profiles. RudderStack Profiles takes the SaaS guesswork and SQL grunt work out of building complete customer profiles so you can quickly ship actionable, enriched data to every downstream team. You specify the customer traits, then Profiles runs the joins and computations for you to create complete customer profiles. Get all of the details and try the new product today at dataengineeringpodcast.com/rudderstack (https://www.dataengineeringpodcast.com/rudderstack)
This episode is brought to you by Datafold – a testing automation platform for data engineers that finds data quality issues before the code and data are deployed to production. Datafold leverages data-diffing to compare production and development environments and column-level lineage to show you the exact impact of every code change on data, metrics, and BI tools, keeping your team productive and stakeholders happy. Datafold integrates with dbt, the modern data stack, and seamlessly plugs in your data CI for team-wide and automated testing. If you are migrating to a modern data stack, Datafold can also help you automate data and code validation to speed up the migration. Learn more about Datafold by visiting dataengineeringpodcast.com/datafold (https://www.dataengineeringpodcast.com/datafold)
You shouldn't have to throw away the database to build with fast-changing data. You should be able to keep the familiarity of SQL and the proven architecture of cloud warehouses, but swap the decades-old batch computation model for an efficient incremental engine to get complex queries that are always up-to-date. With Materialize, you can! It’s the only true SQL streaming database built from the ground up to meet the needs of modern data products. Whether it’s real-time dashboarding and analytics, personalization and segmentation or automation and alerting, Materialize gives you the ability to work with fresh, correct, and scalable results — all in a familiar SQL interface. Go to dataengineeringpodcast.com/materialize (https://www.dataengineeringpodcast.com/materialize) today to get 2 weeks free!
Your host is Tobias Macey and today I'm welcoming back Nick Schrock to talk about the state of the ecosystem for data orchestration
Interview
Introduction
How did you get involved in the area of data management?
Can you start by defining what data orchestration is and how it differs from other types of orchestration systems? (e.g. container orchestration, generalized workflow orchestration, etc.)
What are the misconceptions about the applications of/need for/cost to implement data orchestration?
How do those challenges of customer education change across roles/personas?
Because of the multi-faceted nature of data in an organization, how does that influence the capabilities and interfaces that are needed in an orchestration engine?
You have been working on Dagster for five years now. How have the requirements/adoption/application for orchestrators changed in that time?
One of the challenges for any orchestration engine is to balance the need for robust and extensible core capabilities with a rich suite of integrations to the broader data ecosystem. What are the factors that you have seen make the most influence in driving adoption of a given engine?
What are the most interesting, innovative, or unexpected ways that you have seen data orchestration implemented and/or used?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on data orchestration?
When is a data orchestrator the wrong choice?
What do you have planned for the future of orchestration with Dagster?
Contact Info
@schrockn (https://twitter.com/schrockn) on Twitter
LinkedIn (https://www.linkedin.com/in/schrockn)
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements
Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ (https://www.pythonpodcast.com) covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast (https://www.themachinelearningpodcast.com) helps you go from idea to production with machine learning.
Visit the site (https://www.dataengineeringpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes.
If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com (mailto:hosts@dataengineeringpodcast.com) with your story.
To help other people find the show please leave a review on Apple Podcasts (https://podcasts.apple.com/us/podcast/data-engineering-podcast/id1193040557) and tell your friends and co-workers
Links
Dagster (https://dagster.io/)
GraphQL (https://graphql.org/)
K8s == Kubernetes (https://kubernetes.io/)
Airbyte (https://airbyte.com/)
Podcast Episode (https://www.dataengineeringpodcast.com/airbyte-open-source-data-integration-episode-173/)
Hightouch (https://hightouch.com/)
Podcast Episode (https://www.dataengineeringpodcast.com/hightouch-customer-data-warehouse-episode-168/)
Airflow (https://airflow.apache.org/)
Prefect (https://www.prefect.io)
Flyte (https://flyte.org/)
Podcast Episode (https://www.dataengineeringpodcast.com/flyte-data-orchestration-machine-learning-episode-291/)
dbt (https://www.getdbt.com/)
Podcast Episode (https://www.dataengineeringpodcast.com/dbt-data-analytics-episode-81/)
DAG == Directed Acyclic Graph (https://en.wikipedia.org/wiki/Directed_acyclic_graph)
Temporal (https://temporal.io/)
Software Defined Assets (https://docs.dagster.io/concepts/assets/software-defined-assets)
DataForm (https://dataform.co/)
Gradient Flow State Of Orchestration Report 2022 (https://gradientflow.com/2022-workflow-orchestration-survey/)
MLOps Is 98% Data Engineering (https://mlops.community/mlops-is-mostly-data-engineering/)
DataHub (https://datahubproject.io/)
Podcast Episode (https://www.dataengineeringpodcast.com/datahub-metadata-management-episode-147/)
OpenMetadata (https://open-metadata.org/)
Podcast Episode (https://www.dataengineeringpodcast.com/openmetadata-universal-metadata-layer-episode-237/)
Atlan (https://atlan.com/)
Podcast Episode (https://www.dataengineeringpodcast.com/atlan-data-team-collaboration-episode-179/)
The intro and outro music is from The Hug (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Love_death_and_a_drunken_monkey/04_-_The_Hug) by The Freak Fandango Orchestra (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/) / CC BY-SA (http://creativecommons.org/licenses/by-sa/3.0/)
Summary
Cloud data warehouses and the introduction of the ELT paradigm have led to the creation of multiple options for flexible data integration, with a roughly equal distribution of commercial and open source options. The challenge is that most of those options are complex to operate and exist in their own silo. The dlt project was created to eliminate that overhead and bring data integration under your full control as a library component of your overall data system. In this episode Adrian Brudaru explains how it works, the benefits that it provides over other data integration solutions, and how you can start building pipelines today.
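To give a sense of the library-first approach discussed here, a working dlt pipeline fits in a few lines; the data, names, and choice of DuckDB as a local destination are illustrative:

```python
import dlt

# Any iterable of dicts can be loaded; dlt infers the schema and evolves it
data = [
    {"id": 1, "name": "alice"},
    {"id": 2, "name": "bob", "meta": {"plan": "pro"}},  # nested data gets normalized
]

pipeline = dlt.pipeline(
    pipeline_name="quickstart",
    destination="duckdb",   # swap for a warehouse destination in production
    dataset_name="demo",
)

load_info = pipeline.run(data, table_name="users")
print(load_info)
```

There is no separate service to deploy: the pipeline is just Python, which is why it slots into Airflow, Dagster, or a plain script equally well.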
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
Introducing RudderStack Profiles. RudderStack Profiles takes the SaaS guesswork and SQL grunt work out of building complete customer profiles so you can quickly ship actionable, enriched data to every downstream team. You specify the customer traits, then Profiles runs the joins and computations for you to create complete customer profiles. Get all of the details and try the new product today at dataengineeringpodcast.com/rudderstack (https://www.dataengineeringpodcast.com/rudderstack)
You shouldn't have to throw away the database to build with fast-changing data. You should be able to keep the familiarity of SQL and the proven architecture of cloud warehouses, but swap the decades-old batch computation model for an efficient incremental engine to get complex queries that are always up-to-date. With Materialize, you can! It’s the only true SQL streaming database built from the ground up to meet the needs of modern data products. Whether it’s real-time dashboarding and analytics, personalization and segmentation or automation and alerting, Materialize gives you the ability to work with fresh, correct, and scalable results — all in a familiar SQL interface. Go to dataengineeringpodcast.com/materialize (https://www.dataengineeringpodcast.com/materialize) today to get 2 weeks free!
This episode is brought to you by Datafold – a testing automation platform for data engineers that finds data quality issues before the code and data are deployed to production. Datafold leverages data-diffing to compare production and development environments and column-level lineage to show you the exact impact of every code change on data, metrics, and BI tools, keeping your team productive and stakeholders happy. Datafold integrates with dbt, the modern data stack, and seamlessly plugs in your data CI for team-wide and automated testing. If you are migrating to a modern data stack, Datafold can also help you automate data and code validation to speed up the migration. Learn more about Datafold by visiting dataengineeringpodcast.com/datafold (https://www.dataengineeringpodcast.com/datafold)
Your host is Tobias Macey and today I'm interviewing Adrian Brudaru about dlt, an open source python library for data loading
Interview
Introduction
How did you get involved in the area of data management?
Can you describe what dlt is and the story behind it?
What is the problem you want to solve with dlt?
Who is the target audience?
The obvious comparison is with systems like Singer/Meltano/Airbyte in the open source space, or Fivetran/Matillion/etc. in the commercial space. What are the complexities or limitations of those tools that leave an opening for dlt?
Can you describe how dlt is implemented?
What are the benefits of building it in Python?
How have the design and goals of the project changed since you first started working on it?
How does that language choice influence the performance and scaling characteristics?
What problems do users solve with dlt?
What are the interfaces available for extending/customizing/integrating with dlt?
Can you talk through the process of adding a new source/destination?
What is the workflow for someone building a pipeline with dlt?
How does the experience scale when supporting multiple connections?
Given the limited scope of extract and load, and the composable design of dlt, it seems like a purpose-built companion to dbt (down to the naming). What are the benefits of using those tools in combination?
What are the most interesting, innovative, or unexpected ways that you have seen dlt used?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on dlt?
When is dlt the wrong choice?
What do you have planned for the future of dlt?
Contact Info
LinkedIn (https://www.linkedin.com/in/data-team/?originalSubdomain=de)
Join our community to discuss further (https://join.slack.com/t/dlthub-community/shared_invite/zt-1slox199h-HAE7EQoXmstkP_bTqal65g)
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements
Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ (https://www.pythonpodcast.com) covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast (https://www.themachinelearningpodcast.com) helps you go from idea to production with machine learning.
Visit the site (https://www.dataengineeringpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes.
If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com (mailto:hosts@dataengineeringpodcast.com) with your story.
To help other people find the show please leave a review on Apple Podcasts (https://podcasts.apple.com/us/podcast/data-engineering-podcast/id1193040557) and tell your friends and co-workers
Links
dlt (https://dlthub.com/)
Harness Success Story (https://dlthub.com/success-stories/harness/)
Our guiding product principles (https://dlthub.com/product/)
Ecosystem support (https://dlthub.com/docs/dlt-ecosystem)
From basic to complex, dlt has many capabilities (https://dlthub.com/docs/getting-started/build-a-data-pipeline)
Singer (https://www.singer.io/)
Airbyte (https://airbyte.com/)
Podcast Episode (https://www.dataengineeringpodcast.com/airbyte-open-source-data-integration-episode-173/)
Meltano (https://meltano.com/)
Podcast Episode (https://www.dataengineeringpodcast.com/meltano-data-integration-episode-141/)
Matillion (https://www.matillion.com/)
Podcast Episode (https://www.dataengineeringpodcast.com/matillion-cloud-data-integration-episode-286/)
Fivetran (https://www.fivetran.com/)
Podcast Episode (https://www.dataengineeringpodcast.com/fivetran-data-replication-episode-93/)
DuckDB (https://duckdb.org/)
Podcast Episode (https://www.dataengineeringpodcast.com/duckdb-in-process-olap-database-episode-270/)
OpenAPI (https://www.openapis.org/)
Data Mesh (https://martinfowler.com/articles/data-monolith-to-mesh.html)
Podcast Episode (https://www.dataengineeringpodcast.com/data-mesh-revisited-episode-250/)
SQLMesh (https://sqlmesh.com/)
Podcast Episode (https://www.dataengineeringpodcast.com/sqlmesh-open-source-dataops-episode-380)
Airflow (https://airflow.apache.org/)
Dagster (https://dagster.io/)
Podcast Episode (https://www.dataengineeringpodcast.com/dagster-data-platform-big-complexity-episode-239/)
Prefect (https://www.prefect.io/)
Podcast Episode (https://www.dataengineeringpodcast.com/prefect-workflow-engine-episode-86/)
Alto (https://github.com/z3z1ma/alto)
The intro and outro music is from The Hug (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Love_death_and_a_drunken_monkey/04_-_The_Hug) by The Freak Fandango Orchestra (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/) / CC BY-SA (http://creativecommons.org/licenses/by-sa/3.0/)
Summary
Data persistence is one of the most challenging aspects of computer systems. In the era of the cloud most developers rely on hosted services to manage their databases, but what if you are a cloud service? In this episode Vignesh Ravichandran explains how his team at Cloudflare provides PostgreSQL as a service to their developers for low-latency and high-uptime services at global scale. This is an interesting and insightful look at pragmatic engineering for reliability and scale.
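As a small taste of the kind of operational check a platform like this automates, the sketch below asks a Postgres standby how far it lags the primary using standard built-in functions; the connection details are hypothetical:

```python
import psycopg2

# Connect to a replica (in a real platform this would go through the proxy/pooler tier)
conn = psycopg2.connect(host="replica.internal", dbname="app", user="monitor")

with conn, conn.cursor() as cur:
    # pg_last_xact_replay_timestamp() is NULL on a primary and the last replayed
    # commit time on a standby, so this interval approximates replication lag.
    cur.execute("SELECT now() - pg_last_xact_replay_timestamp() AS replication_lag")
    print(cur.fetchone()[0])
```

Automating checks like this across many clusters, and acting on them with tools like Patroni and pg_rewind, is the substance of the platform work described in the episode.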
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
Introducing RudderStack Profiles. RudderStack Profiles takes the SaaS guesswork and SQL grunt work out of building complete customer profiles so you can quickly ship actionable, enriched data to every downstream team. You specify the customer traits, then Profiles runs the joins and computations for you to create complete customer profiles. Get all of the details and try the new product today at dataengineeringpodcast.com/rudderstack (https://www.dataengineeringpodcast.com/rudderstack)
This episode is brought to you by Datafold – a testing automation platform for data engineers that finds data quality issues before the code and data are deployed to production. Datafold leverages data-diffing to compare production and development environments and column-level lineage to show you the exact impact of every code change on data, metrics, and BI tools, keeping your team productive and stakeholders happy. Datafold integrates with dbt, the modern data stack, and seamlessly plugs in your data CI for team-wide and automated testing. If you are migrating to a modern data stack, Datafold can also help you automate data and code validation to speed up the migration. Learn more about Datafold by visiting dataengineeringpodcast.com/datafold (https://www.dataengineeringpodcast.com/datafold)
You shouldn't have to throw away the database to build with fast-changing data. You should be able to keep the familiarity of SQL and the proven architecture of cloud warehouses, but swap the decades-old batch computation model for an efficient incremental engine to get complex queries that are always up-to-date. With Materialize, you can! It’s the only true SQL streaming database built from the ground up to meet the needs of modern data products. Whether it’s real-time dashboarding and analytics, personalization and segmentation or automation and alerting, Materialize gives you the ability to work with fresh, correct, and scalable results — all in a familiar SQL interface. Go to dataengineeringpodcast.com/materialize (https://www.dataengineeringpodcast.com/materialize) today to get 2 weeks free!
Your host is Tobias Macey and today I'm interviewing Vignesh Ravichandran about building an internal database as a service platform at Cloudflare
Interview
Introduction
How did you get involved in the area of data management?
Can you start by describing the different database workloads that you have at Cloudflare?
What are the different methods that you have used for managing database instances?
What are the requirements and constraints that you had to account for in designing your current system?
Why Postgres?
What optimizations have you made for Postgres?
What simplifications have you gained from not supporting multiple engines?
What limitations in Postgres make multi-tenancy challenging?
What is your scale of operation (data volume, request rate)?
What are the most interesting, innovative, or unexpected ways that you have seen your DBaaS used?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on your internal database platform?
When is an internal database as a service the wrong choice?
What do you have planned for the future of Postgres hosting at Cloudflare?
Contact Info
LinkedIn (https://www.linkedin.com/in/vigneshravichandran28/)
Website (https://viggy28.dev/)
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements
Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ (https://www.pythonpodcast.com) covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast (https://www.themachinelearningpodcast.com) helps you go from idea to production with machine learning.
Visit the site (https://www.dataengineeringpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes.
If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com (mailto:hosts@dataengineeringpodcast.com) with your story.
To help other people find the show please leave a review on Apple Podcasts (https://podcasts.apple.com/us/podcast/data-engineering-podcast/id1193040557) and tell your friends and co-workers
Links
Cloudflare (https://www.cloudflare.com/)
PostgreSQL (https://www.postgresql.org/)
Podcast Episode (https://www.dataengineeringpodcast.com/postgresql-with-jonathan-katz-episode-42/)
IP Address Data Type in Postgres (https://www.postgresql.org/docs/current/datatype-net-types.html)
CockroachDB (https://www.cockroachlabs.com/)
Podcast Episode (https://www.dataengineeringpodcast.com/cockroachdb-with-peter-mattis-episode-35/)
Citus (https://www.citusdata.com/)
Podcast Episode (https://www.dataengineeringpodcast.com/citus-data-with-ozgun-erdogan-and-craig-kerstiens-episode-13/)
Yugabyte (https://www.yugabyte.com/)
Podcast Episode (https://www.dataengineeringpodcast.com/yugabytedb-planet-scale-sql-episode-115/)
Stolon (https://github.com/sorintlab/stolon)
pg_rewind (https://www.postgresql.org/docs/current/app-pgrewind.html)
PGBouncer (https://www.pgbouncer.org/)
HAProxy Presentation (https://www.youtube.com/watch?v=HIOo4j-Tiq4)
Etcd (https://etcd.io/)
Patroni (https://patroni.readthedocs.io/en/latest/)
pg_upgrade (https://www.postgresql.org/docs/current/pgupgrade.html)
Edge Computing (https://en.wikipedia.org/wiki/Edge_computing)
The intro and outro music is from The Hug (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Love_death_and_a_drunken_monkey/04_-_The_Hug) by The Freak Fandango Orchestra (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/) / CC BY-SA (http://creativecommons.org/licenses/by-sa/3.0/)
Summary
Generative AI has unlocked a massive opportunity for content creation. There is also an unfulfilled need for experts to be able to share their knowledge and build communities. Illumidesk was built to take advantage of this intersection. In this episode Greg Werner explains how they are using generative AI as an assistive tool for creating educational material, as well as building a data-driven experience for learners.
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
Introducing RudderStack Profiles. RudderStack Profiles takes the SaaS guesswork and SQL grunt work out of building complete customer profiles so you can quickly ship actionable, enriched data to every downstream team. You specify the customer traits, then Profiles runs the joins and computations for you to create complete customer profiles. Get all of the details and try the new product today at dataengineeringpodcast.com/rudderstack (https://www.dataengineeringpodcast.com/rudderstack)
This episode is brought to you by Datafold – a testing automation platform for data engineers that finds data quality issues before the code and data are deployed to production. Datafold leverages data-diffing to compare production and development environments and column-level lineage to show you the exact impact of every code change on data, metrics, and BI tools, keeping your team productive and stakeholders happy. Datafold integrates with dbt, the modern data stack, and seamlessly plugs into your data CI for team-wide and automated testing. If you are migrating to a modern data stack, Datafold can also help you automate data and code validation to speed up the migration. Learn more about Datafold by visiting dataengineeringpodcast.com/datafold (https://www.dataengineeringpodcast.com/datafold)
You shouldn't have to throw away the database to build with fast-changing data. You should be able to keep the familiarity of SQL and the proven architecture of cloud warehouses, but swap the decades-old batch computation model for an efficient incremental engine to get complex queries that are always up-to-date. With Materialize, you can! It’s the only true SQL streaming database built from the ground up to meet the needs of modern data products. Whether it’s real-time dashboarding and analytics, personalization and segmentation or automation and alerting, Materialize gives you the ability to work with fresh, correct, and scalable results — all in a familiar SQL interface. Go to dataengineeringpodcast.com/materialize (https://www.dataengineeringpodcast.com/materialize) today to get 2 weeks free!
Your host is Tobias Macey and today I'm interviewing Greg Werner about building IllumiDesk, a data-driven and AI-powered online learning platform
Interview
Introduction
How did you get involved in the area of data management?
Can you describe what Illumidesk is and the story behind it?
What are the challenges that educators and content creators face in developing and maintaining digital course materials for their target audiences?
How are you leaning on data integrations and AI to reduce the initial time investment required to deliver courseware?
What are the opportunities for collecting and collating learner interactions with the course materials to provide feedback to the instructors?
What are some of the ways that you are incorporating pedagogical strategies into the measurement and evaluation methods that you use for reports?
What are the different categories of insights that you need to provide across the different stakeholders/personas who are interacting with the platform and learning content?
Can you describe how you have architected the Illumidesk platform?
How have the design and goals shifted since you first began working on it?
What are the strategies that you have used to allow for evolution and adaptation of the system in order to keep pace with the ecosystem of generative AI capabilities?
What are the failure modes of the content generation that you need to account for?
What are the most interesting, innovative, or unexpected ways that you have seen Illumidesk used?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on Illumidesk?
When is Illumidesk the wrong choice?
What do you have planned for the future of Illumidesk?
Contact Info
LinkedIn (https://www.linkedin.com/in/wernergreg/)
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements
Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ (https://www.pythonpodcast.com) covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast (https://www.themachinelearningpodcast.com) helps you go from idea to production with machine learning.
Visit the site (https://www.dataengineeringpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes.
If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com (mailto:hosts@dataengineeringpodcast.com) with your story.
To help other people find the show please leave a review on Apple Podcasts (https://podcasts.apple.com/us/podcast/data-engineering-podcast/id1193040557) and tell your friends and co-workers
Links
Illumidesk (https://www.illumidesk.com/)
Generative AI (https://en.wikipedia.org/wiki/Generative_artificial_intelligence)
Vector Database (https://www.pinecone.io/learn/vector-database/)
LTI == Learning Tools Interoperability (https://en.wikipedia.org/wiki/Learning_Tools_Interoperability)
SCORM (https://scorm.com/scorm-explained/)
xAPI == Experience API (https://xapi.com/overview/)
Prompt Engineering (https://en.wikipedia.org/wiki/Prompt_engineering)
GPT-4 (https://en.wikipedia.org/wiki/GPT-4)
LLaMA (https://en.wikipedia.org/wiki/LLaMA)
Anthropic (https://www.anthropic.com/)
FastAPI (https://fastapi.tiangolo.com/)
LangChain (https://www.langchain.com/)
Celery (https://docs.celeryq.dev/en/stable/)
The intro and outro music is from The Hug (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Love_death_and_a_drunken_monkey/04_-_The_Hug) by The Freak Fandango Orchestra (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/) / CC BY-SA (http://creativecommons.org/licenses/by-sa/3.0/)
Summary
Data pipelines are the core of every data product, ML model, and business intelligence dashboard. If you're not careful you will end up spending all of your time on maintenance and fire-fighting. The folks at Rivery distilled the seven principles of modern data pipelines that will help you stay out of trouble and be productive with your data. In this episode Ariel Pohoryles explains what they are and how they work together to increase your chances of success.
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
Introducing RudderStack Profiles. RudderStack Profiles takes the SaaS guesswork and SQL grunt work out of building complete customer profiles so you can quickly ship actionable, enriched data to every downstream team. You specify the customer traits, then Profiles runs the joins and computations for you to create complete customer profiles. Get all of the details and try the new product today at dataengineeringpodcast.com/rudderstack (https://www.dataengineeringpodcast.com/rudderstack)
This episode is brought to you by Datafold – a testing automation platform for data engineers that finds data quality issues before the code and data are deployed to production. Datafold leverages data-diffing to compare production and development environments and column-level lineage to show you the exact impact of every code change on data, metrics, and BI tools, keeping your team productive and stakeholders happy. Datafold integrates with dbt, the modern data stack, and seamlessly plugs into your data CI for team-wide and automated testing. If you are migrating to a modern data stack, Datafold can also help you automate data and code validation to speed up the migration. Learn more about Datafold by visiting dataengineeringpodcast.com/datafold (https://www.dataengineeringpodcast.com/datafold)
Your host is Tobias Macey and today I'm interviewing Ariel Pohoryles about the seven principles of modern data pipelines
Interview
Introduction
How did you get involved in the area of data management?
Can you start by defining what you mean by a "modern" data pipeline?
At Rivery you published a white paper identifying seven principles of modern data pipelines:
Zero infrastructure management
ELT-first mindset
Speaks SQL and Python
Dynamic multi-storage layers
Reverse ETL & operational analytics
Full transparency
Faster time to value
What are the applications of data that you focused on while identifying these principles?
How does the application of these principles influence the ability of organizations and their data teams to encourage and keep pace with the use of data in the business?
What are the technical components of a pipeline infrastructure that are necessary to support a "modern" workflow?
How do the technologies involved impact the organizational involvement with how data is applied throughout the business?
When using managed services, what are the ways that the pricing model acts to encourage/discourage experimentation/exploration with data?
What are the most interesting, innovative, or unexpected ways that you have seen these seven principles implemented/applied?
What are the most interesting, unexpected, or challenging lessons that you have learned while working with customers to adapt to these principles?
What are the cases where some/all of these principles are undesirable/impractical to implement?
What are the opportunities for further advancement/sophistication in the ways that teams work with and gain value from data?
Contact Info
LinkedIn (https://www.linkedin.com/in/ariel-pohoryles-88695622/)
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements
Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ (https://www.pythonpodcast.com) covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast (https://www.themachinelearningpodcast.com) helps you go from idea to production with machine learning.
Visit the site (https://www.dataengineeringpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes.
If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com (mailto:hosts@dataengineeringpodcast.com) with your story.
To help other people find the show please leave a review on Apple Podcasts (https://podcasts.apple.com/us/podcast/data-engineering-podcast/id1193040557) and tell your friends and co-workers
Links
Rivery (https://rivery.io/)
7 Principles Of The Modern Data Pipeline (https://rivery.io/downloads/7-principles-modern-data-pipeline-lp/)
ELT (https://en.wikipedia.org/wiki/Extract,_load,_transform)
Reverse ETL (https://rivery.io/blog/what-is-reverse-etl-guide-for-data-teams/)
Martech Landscape (https://chiefmartec.com/2023/05/2023-marketing-technology-landscape-supergraphic-11038-solutions-searchable-on-martechmap-com/)
Data Lakehouse (https://www.forbes.com/sites/bernardmarr/2022/01/18/what-is-a-data-lakehouse-a-super-simple-explanation-for-anyone/?sh=54d5c4916088)
Databricks (https://www.databricks.com/)
Snowflake (https://www.snowflake.com/en/)
The intro and outro music is from The Hug (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Love_death_and_a_drunken_monkey/04_-_The_Hug) by The Freak Fandango Orchestra (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/) / CC BY-SA (http://creativecommons.org/licenses/by-sa/3.0/)
Summary
As businesses increasingly invest in technology and talent focused on data engineering and analytics, they want to know whether they are benefiting. So how do you calculate the return on investment for data? In this episode Barr Moses and Anna Filippova explore that question and provide useful exercises to start answering that in your company.
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
Introducing RudderStack Profiles. RudderStack Profiles takes the SaaS guesswork and SQL grunt work out of building complete customer profiles so you can quickly ship actionable, enriched data to every downstream team. You specify the customer traits, then Profiles runs the joins and computations for you to create complete customer profiles. Get all of the details and try the new product today at dataengineeringpodcast.com/rudderstack (https://www.dataengineeringpodcast.com/rudderstack)
Your host is Tobias Macey and today I'm interviewing Barr Moses and Anna Filippova about how and whether to measure the ROI of your data team
Interview
Introduction
How did you get involved in the area of data management?
What are the typical motivations for measuring and tracking the ROI for a data team?
Who is responsible for collecting that information?
How is that information used and by whom?
What are some of the downsides/risks of tracking this metric? (law of unintended consequences)
What are the inputs to the number that constitutes the "investment"? (infrastructure, payroll of the employees on the team, time spent working with other teams)
What are the aspects of data work and its impact on the business that complicate a calculation of the "return" that is generated?
How should teams think about measuring data team ROI?
What are some concrete ROI metrics data teams can use?
What level of detail is useful? What dimensions should be used for segmenting the calculations?
How can visibility into this ROI metric be best used to inform the priorities and project scopes of the team?
With so many tools in the modern data stack today, what is the role of technology in helping drive or measure this impact?
How do your respective solutions, Monte Carlo and dbt, help teams measure and scale data value?
With generative AI on the upswing of the hype cycle, what are the impacts that you see it having on data teams?
What are the unrealistic expectations that it will produce?
How can it speed up time to delivery?
What are the most interesting, innovative, or unexpected ways that you have seen data team ROI calculated and/or used?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on measuring the ROI of data teams?
When is measuring ROI the wrong choice?
Contact Info
Barr
LinkedIn (https://www.linkedin.com/in/barrmoses/)
Anna
LinkedIn (https://www.linkedin.com/in/annafilippova)
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements
Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ (https://www.pythonpodcast.com) covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast (https://www.themachinelearningpodcast.com) helps you go from idea to production with machine learning.
Visit the site (https://www.dataengineeringpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes.
If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com (mailto:hosts@dataengineeringpodcast.com) with your story.
To help other people find the show please leave a review on Apple Podcasts (https://podcasts.apple.com/us/podcast/data-engineering-podcast/id1193040557) and tell your friends and co-workers
Links
Monte Carlo (https://www.montecarlodata.com/)
Podcast Episode (https://www.dataengineeringpodcast.com/monte-carlo-observability-data-quality-episode-155)
dbt (https://www.getdbt.com/)
Podcast Episode (https://www.dataengineeringpodcast.com/dbt-data-analytics-episode-81)
JetBlue Snowflake Con Presentation (https://www.snowflake.com/webinar/thought-leadership/jet-blue-and-monte-carlos/)
Generative AI (https://generativeai.net/)
Large Language Models (https://en.wikipedia.org/wiki/Large_language_model)
The intro and outro music is from The Hug (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Love_death_and_a_drunken_monkey/04_-_The_Hug) by The Freak Fandango Orchestra (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/) / CC BY-SA (http://creativecommons.org/licenses/by-sa/3.0/)
Summary
All software systems are in a constant state of evolution. This makes it impossible to select a truly future-proof technology stack for your data platform, making an eventual migration inevitable. In this episode Gleb Mezhanskiy and Rob Goretsky share their experiences leading various data platform migrations, and the hard-won lessons that they learned so that you don't have to.
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
Introducing RudderStack Profiles. RudderStack Profiles takes the SaaS guesswork and SQL grunt work out of building complete customer profiles so you can quickly ship actionable, enriched data to every downstream team. You specify the customer traits, then Profiles runs the joins and computations for you to create complete customer profiles. Get all of the details and try the new product today at dataengineeringpodcast.com/rudderstack (https://www.dataengineeringpodcast.com/rudderstack)
Modern data teams are using Hex to 10x their data impact. Hex combines a notebook-style UI with an interactive report builder. This allows data teams to both dive deep to find insights and then share their work in an easy-to-read format to the whole org. In Hex you can use SQL, Python, R, and no-code visualization together to explore, transform, and model data. Hex also has AI built directly into the workflow to help you generate, edit, explain and document your code. The best data teams in the world such as the ones at Notion, AngelList, and Anthropic use Hex for ad hoc investigations, creating machine learning models, and building operational dashboards for the rest of their company. Hex makes it easy for data analysts and data scientists to collaborate and produce work that has an impact. Make your data team unstoppable with Hex. Sign up today at dataengineeringpodcast.com/hex (https://www.dataengineeringpodcast.com/hex) to get a 30-day free trial for your team!
Your host is Tobias Macey and today I'm interviewing Gleb Mezhanskiy and Rob Goretsky about when and how to think about migrating your data stack
Interview
Introduction
How did you get involved in the area of data management?
A migration can be anything from a minor task to a major undertaking. Can you start by describing what constitutes a migration for the purposes of this conversation?
Is it possible to completely avoid having to invest in a migration?
What are the signals that point to the need for a migration?
What are some of the sources of cost that need to be accounted for when considering a migration? (both in terms of doing one, and the costs of not doing one)
What are some signals that a migration is not the right solution for a perceived problem?
Once the decision has been made that a migration is necessary, what are the questions that the team should be asking to determine the technologies to move to and the sequencing of execution?
What are the preceding tasks that should be completed before starting the migration to ensure there is no breakage downstream of the changing component(s)?
What are some of the ways that a migration effort might fail?
What are the major pitfalls that teams need to be aware of as they work through a data platform migration?
What are the opportunities for automation during the migration process?
What are the most interesting, innovative, or unexpected ways that you have seen teams approach a platform migration?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on data platform migrations?
What are some ways that the technologies and patterns that we use can be evolved to reduce the cost/impact/need for migrations?
Contact Info
Gleb
LinkedIn (https://www.linkedin.com/in/glebmezh/)
@glebmm (https://twitter.com/glebmm) on Twitter
Rob
LinkedIn (https://www.linkedin.com/in/robertgoretsky/)
RobGoretsky (https://github.com/RobGoretsky) on GitHub
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements
Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ (https://www.pythonpodcast.com) covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast (https://www.themachinelearningpodcast.com) helps you go from idea to production with machine learning.
Visit the site (https://www.dataengineeringpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes.
If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com (mailto:hosts@dataengineeringpodcast.com) with your story.
To help other people find the show please leave a review on Apple Podcasts (https://podcasts.apple.com/us/podcast/data-engineering-podcast/id1193040557) and tell your friends and co-workers
Links
Datafold (https://www.datafold.com/)
Podcast Episode (https://www.dataengineeringpodcast.com/datafold-proactive-data-quality-episode-205/)
Informatica (https://www.informatica.com/)
Airflow (https://airflow.apache.org/)
Snowflake (https://www.snowflake.com/en/)
Podcast Episode (https://www.dataengineeringpodcast.com/snowflakedb-cloud-data-warehouse-episode-110/)
Redshift (https://aws.amazon.com/redshift/)
Eventbrite (https://www.eventbrite.com/)
Teradata (https://www.teradata.com/)
BigQuery (https://cloud.google.com/bigquery)
Trino (https://trino.io/)
EMR == Elastic MapReduce (https://aws.amazon.com/emr/)
Shadow IT (https://en.wikipedia.org/wiki/Shadow_IT)
Podcast Episode (https://www.dataengineeringpodcast.com/shadow-it-data-analytics-episode-121)
Mode Analytics (https://mode.com/)
Looker (https://cloud.google.com/looker/)
Sunk Cost Fallacy (https://en.wikipedia.org/wiki/Sunk_cost)
data-diff (https://github.com/datafold/data-diff)
Podcast Episode (https://www.dataengineeringpodcast.com/data-diff-open-source-data-integration-validation-episode-303/)
SQLGlot (https://github.com/tobymao/sqlglot)
Dagster (https://dagster.io/)
dbt (https://www.getdbt.com/)
The intro and outro music is from The Hug (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Love_death_and_a_drunken_monkey/04_-_The_Hug) by The Freak Fandango Orchestra (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/) / CC BY-SA (http://creativecommons.org/licenses/by-sa/3.0/)