A Data Catalog For Your PyData Projects
One of the biggest pain points when working with data is dealing with the boilerplate code required to load it into a usable format. Intake encapsulates all of that and puts it behind a single API. In this episode Martin Durant explains how to use Intake data catalogs for encapsulating source information, how it simplifies data science workflows, and how to incorporate it into your projects. It is a lightweight way to enable collaboration between data engineers and data scientists in the PyData ecosystem.
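As a rough illustration of the idea, a data engineer describes a source once in a YAML catalog, and data scientists load it by name without worrying about file formats or paths. This is a minimal sketch; the source name, description, and path below are hypothetical:

```yaml
# Hypothetical Intake catalog file (catalog.yml)
sources:
  daily_measurements:
    description: Daily sensor measurements, one CSV file per day
    driver: csv
    args:
      urlpath: 'data/measurements_*.csv'
```

A data scientist could then load the data with something like `intake.open_catalog('catalog.yml').daily_measurements.read()`, getting back a Pandas DataFrame without ever touching the loading boilerplate.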
- Hello and welcome to Podcast.__init__, the podcast about Python and the people who make it great.
- When you’re ready to launch your next app or want to try a project you hear about on the show, you’ll need somewhere to deploy it, so take a look at our friends over at Linode. With 200 Gbit/s private networking, scalable shared block storage, node balancers, and a 40 Gbit/s public network, all controlled by a brand new API you’ve got everything you need to scale up. And for your tasks that need fast computation, such as training machine learning models, they just launched dedicated CPU instances. Go to pythonpodcast.com/linode to get a $20 credit and launch a new server in under a minute. And don’t forget to thank them for their continued support of this show!
- You listen to this show to learn and stay up to date with the ways that Python is being used, including the latest in machine learning and data analysis. For even more opportunities to meet, listen, and learn from your peers you don’t want to miss out on this year’s conference season. We have partnered with organizations such as O’Reilly Media, Dataversity, and the Open Data Science Conference. Go to pythonpodcast.com/conferences to learn more and take advantage of our partner discounts when you register.
- Visit the site to subscribe to the show, sign up for the newsletter, and read the show notes. And if you have any questions, comments, or suggestions I would love to hear them. You can reach me on Twitter at @Podcast__init__ or email email@example.com
- To help other people find the show please leave a review on iTunes and tell your friends and co-workers
- Join the community in the new Zulip chat workspace at pythonpodcast.com/chat
- Your host as usual is Tobias Macey and today I’m interviewing Martin Durant about Intake, a lightweight package for finding, investigating, loading and disseminating data
- How did you get introduced to Python?
- Can you start by explaining what Intake is and the story behind its creation?
- Can you describe the workflows for using Intake, both from the data scientist and the data engineer perspective?
- One of the persistent challenges in working with data is that of cataloging and discovery of what already exists. In what ways does Intake address that problem?
- Does it have any facilities for capturing and exposing data lineage?
- For someone who needs to customize their usage of Intake, what are the extension points and what is involved in building a plugin?
- Can you describe how Intake is implemented and how it has evolved since it first started?
- What are some of the most challenging, complex, or novel aspects of the Intake implementation?
- Intake focuses primarily on integrating with the PyData ecosystem (e.g. NumPy, Pandas, SciPy, etc.). What are some other communities that are, or could be, benefiting from the work being done on Intake?
- What are some of the assumptions that are baked into Intake that would need to be modified to make it more broadly applicable?
- What are some of the assumptions that were made going into this project that have needed to be reconsidered after digging deeper into the problem space?
- What are some of the most interesting/unexpected/innovative ways that you have seen Intake leveraged?
- What are your plans for the future of Intake?
Keep In Touch
- Ubersuggest SEO tool
- fastparquet
- Space Telescope Science Institute
- Quilt Data
- Data Retriever
- Apache Spark
- Dat Project – distributed peer-to-peer data sharing