
Preamble
This is a cross-over episode from our new show The Machine Learning Podcast, the show about going from idea to production with machine learning.
Summary
The majority of machine learning projects that you read about or work on are built around batch processes. The model is trained, validated, and deployed, with each step treated as a discrete, isolated task. Unfortunately, the real world is rarely static, leading to concept drift and model failures. River is a framework for building streaming machine learning projects that can constantly adapt to new information. In this episode Max Halford explains how the project works, why you might (or might not) want to consider streaming ML, and how to get started building with River.
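The core difference between batch and streaming ML described above can be sketched in a few lines of plain Python. This is an illustrative toy in the spirit of River's `learn_one`/`predict_one` style and its dictionary-based feature representation, not River's actual implementation; the class and threshold values are invented for the example.

```python
import math

class OnlineLogisticRegression:
    """Toy online learner, illustrative of the streaming paradigm only."""

    def __init__(self, lr=0.1):
        self.lr = lr
        self.weights = {}  # feature name -> weight, a dict much like River uses

    def predict_proba_one(self, x):
        # Score a single observation (a dict of feature name -> value).
        score = sum(self.weights.get(k, 0.0) * v for k, v in x.items())
        return 1.0 / (1.0 + math.exp(-score))

    def learn_one(self, x, y):
        # One gradient step per observation: the model updates as each
        # example arrives, which is what lets it track concept drift
        # instead of waiting for a full retraining cycle.
        error = self.predict_proba_one(x) - y
        for k, v in x.items():
            self.weights[k] = self.weights.get(k, 0.0) - self.lr * error * v

# Data arrives one observation at a time rather than as a fixed dataset.
stream = [({"size": 1.0}, 1), ({"size": -1.0}, 0), ({"size": 0.8}, 1)]
model = OnlineLogisticRegression()
for x, y in stream:
    model.learn_one(x, y)
```

The train/predict loop never terminates in a real deployment: prediction and learning interleave on the same stream, which is the property the interview contrasts with the batch train-validate-deploy cycle.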
Announcements
Hello and welcome to the Machine Learning Podcast, the podcast about machine learning and how to bring it from idea to delivery.
Building good ML models is hard, but testing them properly is even harder. At Deepchecks, they built an open-source testing framework that follows best practices, ensuring that your models behave as expected. Get started quickly using their built-in library of checks for testing and validating your model’s behavior and performance, and extend it to meet your specific needs as your model evolves. Accelerate your machine learning projects by building trust in your models and automating the testing that you used to do manually. Go to themachinelearningpodcast.com/deepchecks today to get started!
Your host is Tobias Macey and today I’m interviewing Max Halford about River, a Python toolkit for streaming and online machine learning
Interview
Introduction
How did you get involved in machine learning?
Can you describe what River is and the story behind it?
What is "online" machine learning?
What are the practical differences with batch ML?
Why is batch learning so predominant?
What are the cases where someone would want/need to use online or streaming ML?
The prevailing pattern for batch ML model lifecycles is to train, deploy, monitor, repeat. What does the ongoing maintenance for a streaming ML model look like?
Concept drift is typically due to a discrepancy between the data used to train a model and the actual data being observed. How does the use of online learning affect the incidence of drift?
Can you describe how the River framework is implemented?
How have the design and goals of the project changed since you started working on it?
How do the internal representations of the model differ from batch learning to allow for incremental updates to the model state?
In the documentation you note the use of Python dictionaries for state management and the flexibility offered by that choice. What are the benefits and potential pitfalls of that decision?
Can you describe the process of using River to design, implement, and validate a streaming ML model?
What are the operational requirements for deploying and serving the model once it has been developed?
What are some of the challenges that users of River might run into if they are coming from a batch learning background?
What are the most interesting, innovative, or unexpected ways that you have seen River used?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on River?
When is River the wrong choice?
What do you have planned for the future of River?
Contact Info
Email
@halford_max on Twitter
MaxHalford on GitHub
Parting Question
From your perspective, what is the biggest barrier to adoption of machine learning today?
Closing Announcements
Thank you for listening! Don’t forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.
Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
If you’ve learned something or tried out a project from the show then tell us about it! Email hosts@themachinelearningpodcast.com with your story.
To help other people find the show please leave a review on iTunes and tell your friends and co-workers.
Links
River
scikit-multiflow
Federated Machine Learning
Hogwild! Google Paper
Chip Huyen concept drift blog post
Dan Crankshaw Berkeley Clipper MLOps
Robustness Principle
NY Taxi Dataset
RiverTorch
River Public Roadmap
Beaver tool for deploying online models
Prodigy ML human in the loop labeling
The intro and outro music is from Hitman’s Lovesong feat. Paola Graziano by The Freak Fandango Orchestra/CC BY-SA 3.0
Preamble
This is a cross-over episode from our new show The Machine Learning Podcast, the show about going from idea to production with machine learning.
Summary
Deep learning is a revolutionary category of machine learning that accelerates our ability to build powerful inference models. Along with that power comes a great deal of complexity in determining what neural architectures are best suited to a given task, engineering features, scaling computation, etc. Predibase is building on the successes of the Ludwig framework for declarative deep learning and Horovod for horizontally distributing model training. In this episode CTO and co-founder of Predibase, Travis Addair, explains how they are reducing the burden of model development even further with their managed service for declarative and low-code ML and how they are integrating with the growing ecosystem of solutions for the full ML lifecycle.
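The "declarative ML" idea discussed in this episode can be sketched in plain Python: the user states *what* to predict from *which* inputs, and the framework decides *how*. The config shape below loosely mirrors Ludwig's input/output feature declarations, but the dispatcher, encoder, and decoder names are invented for illustration and are not Predibase's or Ludwig's actual API.

```python
# A declarative spec: the user names features and types, nothing else.
config = {
    "input_features": [{"name": "review_text", "type": "text"}],
    "output_features": [{"name": "sentiment", "type": "category"}],
}

def pick_architecture(config):
    """Map declared feature types to (hypothetical) default components,
    the way a declarative framework chooses an architecture for you."""
    encoders = {"text": "embed+rnn", "number": "dense", "image": "cnn"}
    decoders = {"category": "softmax", "number": "regressor"}
    return {
        "encoders": {f["name"]: encoders[f["type"]]
                     for f in config["input_features"]},
        "decoders": {f["name"]: decoders[f["type"]]
                     for f in config["output_features"]},
    }

plan = pick_architecture(config)
```

The design choice this illustrates: because the spec carries no training logic, the platform is free to swap in better defaults (or autoML search) without the user changing their config.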
Announcements
Hello and welcome to Podcast.__init__, the podcast about Python and the people who make it great!
When you’re ready to launch your next app or want to try a project you hear about on the show, you’ll need somewhere to deploy it, so take a look at our friends over at Linode. With their managed Kubernetes platform it’s easy to get started with the next generation of deployment and scaling, powered by the battle tested Linode platform, including simple pricing, node balancers, 40Gbit networking, dedicated CPU and GPU instances, and worldwide data centers. And now you can launch a managed MySQL, Postgres, or Mongo database cluster in minutes to keep your critical data safe with automated backups and failover. Go to pythonpodcast.com/linode and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
Your host is Tobias Macey and today I’m interviewing Travis Addair about Predibase, a low-code platform for building ML models in a declarative format
Interview
Introduction
How did you get involved in machine learning?
Can you describe what Predibase is and the story behind it?
Who is your target audience and how does that focus influence your user experience and feature development priorities?
How would you describe the semantic differences between your chosen terminology of "declarative ML" and the "autoML" nomenclature that many projects and products have adopted?
Another platform that launched recently with a promise of "declarative ML" is Continual. How would you characterize your relative strengths?
Can you describe how the Predibase platform is implemented?
How have the design and goals of the product changed as you worked through the initial implementation and started working with early customers?
The operational aspects of the ML lifecycle are still fairly nascent. How have you thought about the boundaries for your product to avoid getting drawn into scope creep while providing a happy path to delivery?
Ludwig is a core element of your platform. What are the other capabilities that you are layering around and on top of it to build a differentiated product?
In addition to the existing interfaces for Ludwig you created a new language in the form of PQL. What was the motivation for that decision?
How did you approach the semantic and syntactic design of the dialect?
What is your vision for PQL in the space of "declarative ML" that you are working to define?
Can you describe the available workflows for an individual or team that is using Predibase for prototyping and validating an ML model?
Once a model has been deemed satisfactory, what is the path to production?
How are you approaching governance and sustainability of Ludwig and Horovod while balancing your reliance on them in Predibase?
What are some of the notable investments/improvements that you have made in Ludwig during your work of building Predibase?
What are the most interesting, innovative, or unexpected ways that you have seen Predibase used?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on Predibase?
When is Predibase the wrong choice?
What do you have planned for the future of Predibase?
Contact Info
LinkedIn
tgaddair on GitHub
@travisaddair on Twitter
Parting Question
From your perspective, what is the biggest barrier to adoption of machine learning today?
Closing Announcements
Thank you for listening! Don’t forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. The Machine Learning Podcast helps you go from idea to production with machine learning.
Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
If you’ve learned something or tried out a project from the show then tell us about it! Email hosts@podcastinit.com with your story.
To help other people find the show please leave a review on iTunes and tell your friends and co-workers.
Links
Predibase
Horovod
Ludwig
Podcast.__init__ Episode
Support Vector Machine
Hadoop
Tensorflow
Uber Michelangelo
AutoML
Spark ML Lib
Deep Learning
PyTorch
Continual
Data Engineering Podcast Episode
Overton
Kubernetes
Ray
Nvidia Triton
Whylogs
Data Engineering Podcast Episode
Weights and Biases
MLFlow
Comet
Confusion Matrices
dbt
Data Engineering Podcast Episode
Torchscript
Self-supervised Learning
The intro and outro music is from Hitman’s Lovesong feat. Paola Graziano by The Freak Fandango Orchestra/CC BY-SA 3.0
Preamble
This is a cross-over episode from our new show The Machine Learning Podcast, the show about going from idea to production with machine learning.
Summary
Machine learning has the potential to transform industries and revolutionize business capabilities, but only if the models are reliable and robust. Because of the fundamental probabilistic nature of machine learning techniques it can be challenging to test and validate the generated models. The team at Deepchecks understands the widespread need to easily and repeatably check and verify the outputs of machine learning models and the complexity involved in making it a reality. In this episode Shir Chorev and Philip Tannor explain how they are addressing the problem with their open source deepchecks library and how you can start using it today to build trust in your machine learning applications.
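To make the idea of "checking" a model concrete, here is a minimal sketch of the kind of data-drift test a library like Deepchecks automates. The function name, threshold, and drift metric are invented for this example; Deepchecks' own checks are far more rigorous.

```python
def feature_drift_check(train_values, prod_values, threshold=0.25):
    """Flag a feature whose mean shifts by more than `threshold`
    relative to the training spread (a crude drift signal)."""
    train_mean = sum(train_values) / len(train_values)
    prod_mean = sum(prod_values) / len(prod_values)
    spread = (max(train_values) - min(train_values)) or 1.0  # guard constants
    drift = abs(prod_mean - train_mean) / spread
    return {"drift": drift, "passed": drift <= threshold}

# A production sample close to training passes; a shifted one fails.
ok = feature_drift_check([1.0, 2.0, 3.0], [1.1, 2.2, 2.9])
shifted = feature_drift_check([1.0, 2.0, 3.0], [3.0, 3.5, 4.0])
```

The value of packaging such checks in a suite, as discussed in the interview, is that each check returns a structured pass/fail result that can gate a deployment pipeline rather than relying on ad-hoc notebook inspection.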
Announcements
Hello and welcome to the Machine Learning Podcast, the podcast about machine learning and how to bring it from idea to delivery.
Do you wish you could use artificial intelligence to drive your business the way Big Tech does, but don’t have a money printer? Graft is a cloud-native platform that aims to make the AI of the 1% accessible to the 99%. Wield the most advanced techniques for unlocking the value of data, including text, images, video, audio, and graphs. No machine learning skills required, no team to hire, and no infrastructure to build or maintain. For more information on Graft or to schedule a demo, visit themachinelearningpodcast.com/graft today and tell them Tobias sent you.
Predibase is a low-code ML platform without low-code limits. Built on top of our open source foundations of Ludwig and Horovod, our platform allows you to train state-of-the-art ML and deep learning models on your datasets at scale. Our platform works on text, images, tabular, audio and multi-modal data using our novel compositional model architecture. We allow users to operationalize models on top of the modern data stack, through REST and PQL – an extension of SQL that puts predictive power in the hands of data practitioners. Go to themachinelearningpodcast.com/predibase today to learn more and try it out!
Data powers machine learning, but poor data quality is the largest impediment to effective ML today. Galileo is a collaborative data bench for data scientists building Natural Language Processing (NLP) models to programmatically inspect, fix and track their data across the ML workflow (pre-training, post-training and post-production) – no more excel sheets or ad-hoc python scripts. Get meaningful gains in your model performance fast, dramatically reduce data labeling and procurement costs, while seeing 10x faster ML iterations. Galileo is offering listeners a free 30 day trial and a 30% discount on the product thereafter. This offer is available until Aug 31, so go to themachinelearningpodcast.com/galileo and request a demo today!
Your host is Tobias Macey and today I’m interviewing Shir Chorev and Philip Tannor about Deepchecks, a Python package for comprehensively validating your machine learning models and data with minimal effort.
Interview
Introduction
How did you get involved in machine learning?
Can you describe what Deepchecks is and the story behind it?
Who is the target audience for the project?
What are the biggest challenges that these users face in bringing ML models from concept to production and how does DeepChecks address those problems?
In the absence of DeepChecks, how are practitioners solving the problems of model validation and comparison across iterations?
What are some of the other tools in this ecosystem and what are the differentiating features of DeepChecks?
What are some examples of the kinds of tests that are useful for understanding the "correctness" of models?
What are the methods by which ML engineers/data scientists/domain experts can define what "correctness" means in a given model or subject area?
In software engineering the categories of tests are tiered as unit -> integration -> end-to-end. What are the relevant categories of tests that need to be built for validating the behavior of machine learning models?
How do model monitoring utilities overlap with the kinds of tests that you are building with deepchecks?
Can you describe how the DeepChecks package is implemented?
How have the design and goals of the project changed or evolved from when you started working on it?
What are the assumptions that you have built up from your own experiences that have been challenged by your early users and design partners?
Can you describe the workflow for an individual or team using DeepChecks as part of their model training and deployment lifecycle?
Test engineering is a deep discipline in its own right. How have you approached the user experience and API design to reduce the overhead for ML practitioners to adopt good practices?
What are the interfaces available for creating reusable tests and composing test suites together?
What are the additional services/capabilities that you are providing in your commercial offering?
How are you managing the governance and sustainability of the OSS project and balancing that against the needs/priorities of the business?
What are the most interesting, innovative, or unexpected ways that you have seen DeepChecks used?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on DeepChecks?
When is DeepChecks the wrong choice?
What do you have planned for the future of DeepChecks?
Contact Info
Shir
LinkedIn
shir22 on GitHub
Philip
LinkedIn
@philiptannor on Twitter
Parting Question
From your perspective, what is the biggest barrier to adoption of machine learning today?
Closing Announcements
Thank you for listening! Don’t forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.
Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
If you’ve learned something or tried out a project from the show then tell us about it! Email hosts@themachinelearningpodcast.com with your story.
To help other people find the show please leave a review on iTunes and tell your friends and co-workers.
Links
DeepChecks
Random Forest
Talpiot Program
SHAP
Podcast.__init__ Episode
Airflow
Great Expectations
Data Engineering Podcast Episode
The intro and outro music is from Hitman’s Lovesong feat. Paola Graziano by The Freak Fandango Orchestra/CC BY-SA 3.0
Preamble
This is a cross-over episode from our new show The Machine Learning Podcast, the show about going from idea to production with machine learning.
Summary
Building an ML model is getting easier than ever, but it is still a challenge to get that model in front of the people that you built it for. Baseten is a platform that helps you quickly generate a full stack application powered by your model. You can easily create a web interface and APIs powered by the model you created, or a pre-trained model from their library. In this episode Tuhin Srivastava, co-founder of Baseten, explains how the platform empowers data scientists and ML engineers to get their work in production without having to negotiate for help from their application development colleagues.
Announcements
Hello and welcome to the Machine Learning Podcast, the podcast about machine learning and how to bring it from idea to delivery.
When you’re ready to launch your next app or want to try a project you hear about on the show, you’ll need somewhere to deploy it, so take a look at our friends over at Linode. With their managed Kubernetes platform it’s easy to get started with the next generation of deployment and scaling, powered by the battle tested Linode platform, including simple pricing, node balancers, 40Gbit networking, dedicated CPU and GPU instances, and worldwide data centers. And now you can launch a managed MySQL, Postgres, or Mongo database cluster in minutes to keep your critical data safe with automated backups and failover. Go to pythonpodcast.com/linode and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
Your host is Tobias Macey and today I’m interviewing Tuhin Srivastava about Baseten, an ML Application Builder for data science and machine learning teams
Interview
Introduction
How did you get involved in machine learning?
Can you describe what Baseten is and the story behind it?
Who are the target users for Baseten and what problems are you solving for them?
What are some of the typical technical requirements for an application that is powered by a machine learning model?
In the absence of Baseten, what are some of the common utilities/patterns that teams might rely on?
What kinds of challenges do teams run into when serving a model in the context of an application?
There are a number of projects that aim to reduce the overhead of turning a model into a usable product (e.g. Streamlit, Hex, etc.). What is your assessment of the current ecosystem for lowering the barrier to product development for ML and data science teams?
Can you describe how the Baseten platform is designed?
How have the design and goals of the project changed or evolved since you started working on it?
How do you handle sandboxing of arbitrary user-managed code to ensure security and stability of the platform?
How did you approach the system design to allow for mapping application development paradigms into a structure that was accessible to ML professionals?
Can you describe the workflow for building an ML powered application?
What types of models do you support? (e.g. NLP, computer vision, timeseries, deep neural nets vs. linear regression, etc.)
How do the monitoring requirements shift for these different model types?
What other challenges are presented by these different model types?
What are the limitations in size/complexity/operational requirements that you have to impose to ensure a stable platform?
What is the process for deploying model updates?
For organizations that are relying on Baseten as a prototyping platform, what are the options for taking a successful application and handing it off to a product team for further customization?
What are the most interesting, innovative, or unexpected ways that you have seen Baseten used?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on Baseten?
When is Baseten the wrong choice?
What do you have planned for the future of Baseten?
Contact Info
@tuhinone on Twitter
LinkedIn
Parting Question
From your perspective, what is the biggest barrier to adoption of machine learning today?
Closing Announcements
Thank you for listening! Don’t forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.
Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
If you’ve learned something or tried out a project from the show then tell us about it! Email hosts@themachinelearningpodcast.com with your story.
To help other people find the show please leave a review on iTunes and tell your friends and co-workers.
Links
Baseten
Gumroad
scikit-learn
Tensorflow
Keras
Streamlit
Podcast.__init__ Episode
Retool
Hex
Podcast.__init__ Episode
Kubernetes
React Monaco
Huggingface
Airtable
Dall-E 2
GPT-3
Weights and Biases
The intro and outro music is from Hitman’s Lovesong feat. Paola Graziano by The Freak Fandango Orchestra/CC BY-SA 3.0
Summary
Starting a new project is always exciting and full of possibility, until you have to set up all of the repetitive boilerplate. Fortunately there are useful project templates that eliminate that drudgery. PyScaffold goes above and beyond simple template repositories, and gives you a toolkit for different application types that are packed with best practices to make your life easier. In this episode Florian Wilhelm shares the story behind PyScaffold, how the templates are designed to reduce friction when getting a new project off the ground, and how you can extend it to suit your needs. Stop wasting time with boring boilerplate and get straight to the fun part with PyScaffold!
Announcements
Hello and welcome to Podcast.__init__, the podcast about Python and the people who make it great!
When you’re ready to launch your next app or want to try a project you hear about on the show, you’ll need somewhere to deploy it, so take a look at our friends over at Linode. With their managed Kubernetes platform it’s easy to get started with the next generation of deployment and scaling, powered by the battle tested Linode platform, including simple pricing, node balancers, 40Gbit networking, dedicated CPU and GPU instances, and worldwide data centers. And now you can launch a managed MySQL, Postgres, or Mongo database cluster in minutes to keep your critical data safe with automated backups and failover. Go to pythonpodcast.com/linode and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
Your host as usual is Tobias Macey and today I’m interviewing Florian Wilhelm about PyScaffold, a Python project template generator with batteries included
Interview
Introductions
How did you get introduced to Python?
Can you describe what PyScaffold is and the story behind it?
What is the main goal of the project?
There are a huge number of templates and starter projects available (both in Python and other languages). What are the aspects of PyScaffold that might encourage someone to adopt it?
What are the different types/categories of applications that you are focused on supporting with the scaffolding?
For each category, what is your selection process for which dependencies to include?
How do you approach the work of keeping the various components up to date with community "best practices"?
Can you describe how PyScaffold is implemented?
How have the design and goals of the project changed since you first started it?
What is the user experience for someone bootstrapping a project with PyScaffold?
How can you adapt an existing project into the structure of a pyscaffold template?
Are there any facilities for updating a project started with PyScaffold to include patches/changes in the source template?
What are the most interesting, innovative, or unexpected ways that you have seen PyScaffold used?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on PyScaffold?
When is PyScaffold the wrong choice?
What do you have planned for the future of PyScaffold?
Keep In Touch
Website
LinkedIn
FlorianWilhelm on GitHub
@florianwilhelm on Twitter
Picks
Tobias
Daredevil TV series
Florian
The Peripheral
Closing Announcements
Thank you for listening! Don’t forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. The Machine Learning Podcast helps you go from idea to production with machine learning.
Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
If you’ve learned something or tried out a project from the show then tell us about it! Email hosts@podcastinit.com with your story.
To help other people find the show please leave a review on iTunes and tell your friends and co-workers.
Links
PyScaffold
Innovex
SAP
Cookiecutter
Pytest
Podcast Episode
Sphinx
pre-commit
Podcast Episode
Black
Flake8
Podcast Episode
Poetry
Setuptools
mkdocs
ReStructured Text
Markdown
Setuptools-SCM
Hatch
Flit
Versioneer
Gource git visualization
MyPy Compiler
Rust Cargo
The intro and outro music is from Requiem for a Fish by The Freak Fandango Orchestra / CC BY-SA
Summary
Application configuration is a deceptively complex problem. Everyone who is building a project that gets used more than once will end up needing to add configuration to control aspects of behavior or manage connections to other systems and services. At first glance it seems simple, but it can quickly become unwieldy. Bruno Rocha created Dynaconf in an effort to provide a simple interface with powerful capabilities for managing settings across environments with a set of strong opinions. In this episode he shares the story behind the project, how its design allows for adapting to various applications, and how you can start using it today for your own projects.
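The layered-precedence idea at the heart of settings frameworks like Dynaconf (environment variable beats settings file beats hard-coded default) can be sketched in plain Python. The function, prefix, and keys below are invented for illustration and are not Dynaconf's API.

```python
import os

def resolve_setting(key, defaults, file_settings, env_prefix="MYAPP_"):
    """Resolve one setting with the precedence: env var > file > default."""
    env_key = env_prefix + key.upper()
    if env_key in os.environ:        # highest precedence: the environment
        return os.environ[env_key]
    if key in file_settings:         # then the loaded settings file
        return file_settings[key]
    return defaults[key]             # finally, hard-coded defaults

defaults = {"db_host": "localhost", "debug": "false"}
file_settings = {"db_host": "db.internal"}   # pretend this came from a TOML file

host = resolve_setting("db_host", defaults, file_settings)   # file wins
os.environ["MYAPP_DEBUG"] = "true"
debug = resolve_setting("debug", defaults, file_settings)    # env wins
```

Hand-rolled versions of this logic are exactly where the "unwieldy" problems discussed in the interview creep in: type casting, per-environment files, and secrets each add another layer to reconcile.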
Announcements
Hello and welcome to Podcast.__init__, the podcast about Python and the people who make it great!
When you’re ready to launch your next app or want to try a project you hear about on the show, you’ll need somewhere to deploy it, so take a look at our friends over at Linode. With their managed Kubernetes platform it’s easy to get started with the next generation of deployment and scaling, powered by the battle tested Linode platform, including simple pricing, node balancers, 40Gbit networking, dedicated CPU and GPU instances, and worldwide data centers. And now you can launch a managed MySQL, Postgres, or Mongo database cluster in minutes to keep your critical data safe with automated backups and failover. Go to pythonpodcast.com/linode and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
Your host as usual is Tobias Macey and today I’m interviewing Bruno Rocha about Dynaconf, a powerful and flexible framework for managing your application’s configuration settings
Interview
Introductions
How did you get introduced to Python?
Can you describe what Dynaconf is and the story behind it?
What are your main goals for Dynaconf?
What kinds of projects (e.g. web, devops, ML, etc.) are you focused on supporting with Dynaconf?
Settings management is a deceptively complex and detailed aspect of software engineering, with a lot of conflicting opinions about the "right way". What are the design philosophies that you lean on for Dynaconf?
Many engineers end up building their own frameworks for managing settings as their use cases and environments get increasingly complicated. What are some of the ways that those efforts can go wrong or become unmaintainable?
Can you describe how Dynaconf is implemented?
How have the design and goals of the project evolved since you first started it?
What is the workflow for getting started with Dynaconf on a new project?
How does the usage scale with the complexity of the host project?
What are some strategies that you recommend for integrating Dynaconf into an existing project that already has complex requirements for settings across multiple environments?
Secrets management is one of the most frequently under- or over-engineered aspects of application configuration. What are some of the ways that you have worked to strike a balance of making the "right way" easy?
What are some of the more advanced or under-utilized capabilities of Dynaconf?
What are the most interesting, innovative, or unexpected ways that you have seen Dynaconf used?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on Dynaconf?
When is Dynaconf the wrong choice?
What do you have planned for the future of Dynaconf?
Keep In Touch
rochacbruno on GitHub
@rochacbruno on Twitter
Website
LinkedIn
Picks
Tobias
SOPS
Bruno
Severance tv series
Learn Rust
Closing Announcements
Thank you for listening! Don’t forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. The Machine Learning Podcast helps you go from idea to production with machine learning.
Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
If you’ve learned something or tried out a project from the show then tell us about it! Email hosts@podcastinit.com with your story.
To help other people find the show please leave a review on iTunes and tell your friends and co-workers.
Links
Dynaconf
Dynaconf GitHub Org
Ansible
Bash
Perl
12 Factor Applications
TOML
Hashicorp Vault
Pydantic
Airflow
Hydroconf
The intro and outro music is from Requiem for a Fish by The Freak Fandango Orchestra / CC BY-SA
Summary
Software is eating the world, but that code has to have hardware to execute the instructions. Most people, and many software engineers, don’t have a proper understanding of how that hardware functions. Charles Petzold wrote the book "Code: The Hidden Language of Computer Hardware and Software" to make this a less opaque subject. In this episode he discusses what motivated him to revise that work in the second edition and the additional details that he packed in to explore the functioning of the CPU.
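A central theme of "Code" is that all of a CPU's logic can be composed from one primitive gate. As a small taste of that idea, here is a sketch (in Python rather than relays or transistors) deriving familiar gates and a one-bit adder from NAND alone.

```python
def NAND(a, b):
    """The single primitive: output 0 only when both inputs are 1."""
    return 0 if (a and b) else 1

# Every other gate can be built from NAND.
def NOT(a):
    return NAND(a, a)

def AND(a, b):
    return NOT(NAND(a, b))

def OR(a, b):
    return NAND(NOT(a), NOT(b))

def XOR(a, b):
    return AND(OR(a, b), NAND(a, b))

def half_adder(a, b):
    """One bit of binary addition: the seed of a CPU's arithmetic unit.
    Returns (sum, carry)."""
    return XOR(a, b), AND(a, b)
```

Chaining half adders into full adders, and full adders into a multi-bit adder, is the same compositional climb the book walks the reader through from telegraph relays up to a working processor.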
Announcements
Hello and welcome to Podcast.__init__, the podcast about Python and the people who make it great!
When you’re ready to launch your next app or want to try a project you hear about on the show, you’ll need somewhere to deploy it, so take a look at our friends over at Linode. With their managed Kubernetes platform it’s easy to get started with the next generation of deployment and scaling, powered by the battle tested Linode platform, including simple pricing, node balancers, 40Gbit networking, dedicated CPU and GPU instances, and worldwide data centers. And now you can launch a managed MySQL, Postgres, or Mongo database cluster in minutes to keep your critical data safe with automated backups and failover. Go to pythonpodcast.com/linode and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
Your host as usual is Tobias Macey and today I’m interviewing Charles Petzold about his work on the second edition of Code: The Hidden Language of Computer Hardware and Software
Interview
Introductions
How did you get introduced to Python?
Can you start by describing the focus and goal of "Code" and the story behind it?
Who is the target audience for the book?
The sequencing of the topics parallels the curriculum of a computer engineering course of study. Why do you think that it is useful/important for a general audience to understand the electrical engineering principles that underlie modern computers?
What was your process for determining how to segment the information that you wanted to address in the book to balance the pacing of the reader with the density of the information?
Technical books are notoriously challenging to write due to the constantly changing subject matter. What are some of the ways that the first edition of "Code" was becoming outdated?
What are the most notable changes in the foundational elements of computing that have happened in the time since the first edition was published?
One of the concepts that I have found most helpful as a software engineer is that of "mechanical sympathy". What are some of the ways that a better understanding of computer hardware and electrical signal processing can influence and improve the way that an engineer writes code?
What are some of the insights that you gained about your own use of computers and software while working on this book?
What are the most interesting, unexpected, or challenging lessons that you have learned while writing "Code" and revising it for the second edition?
Once the reader has finished with your book, what are some of the other references/resources that you recommend?
Keep In Touch
Website
Picks
Tobias
The Imitation Game movie
Charles
The Annotated Turing book by Charles Petzold
Confidence Man: The Making of Donald Trump and the Breaking of America by Maggie Haberman
Links
Code: The Hidden Language of Computer Hardware and Software
Fortran
PL/I
BASIC
C#
Z80
Intel 8080
PC Magazine
Assembly Language
Logic Gates
C Language
ASCII == American Standard Code for Information Interchange
SkiaSharp
Algol
Code first edition bibliography
The intro and outro music is from Requiem for a Fish by The Freak Fandango Orchestra / CC BY-SA
Summary
The generation, distribution, and consumption of energy is one of the most critical pieces of infrastructure for the modern world. With the rise of renewable energy there is an accompanying need for systems that can respond in real-time to the availability and demand for electricity. FlexMeasures is an open source energy management system that is designed to integrate a variety of inputs and intelligently allocate energy resources to reduce waste in your home or grid. In this episode Nicolas Höning explains how the project is implemented, how it is being used in his startup Seita, and how you can try it out for your own energy needs.
Announcements
Hello and welcome to Podcast.__init__, the podcast about Python and the people who make it great!
When you’re ready to launch your next app or want to try a project you hear about on the show, you’ll need somewhere to deploy it, so take a look at our friends over at Linode. With their managed Kubernetes platform it’s easy to get started with the next generation of deployment and scaling, powered by the battle tested Linode platform, including simple pricing, node balancers, 40Gbit networking, dedicated CPU and GPU instances, and worldwide data centers. And now you can launch a managed MySQL, Postgres, or Mongo database cluster in minutes to keep your critical data safe with automated backups and failover. Go to pythonpodcast.com/linode and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
Your host as usual is Tobias Macey and today I’m interviewing Nicolas Höning about FlexMeasures, an open source project designed to manage energy resources dynamically to improve efficiency
Interview
Introductions
How did you get introduced to Python?
Can you describe what FlexMeasures is and the story behind it?
What are the primary goals/objectives of the project?
The energy sector is huge. Where can FlexMeasures be used?
Energy systems are typically governed by a marketplace system. What are the benefits that FlexMeasures can provide for each side of that market?
How do renewable sources of energy confuse/complicate the role that the different stakeholders represent?
What are the different points of interaction that producers/consumers might have with the FlexMeasures platform?
What are some examples of the types of decisions/recommendations that FlexMeasures might generate and how do they manifest in the energy systems?
What are the types of information that FlexMeasures relies on for driving those decisions?
Can you describe how FlexMeasures is implemented?
How have the design and goals of the system changed/evolved since you started working on it?
What are the interfaces that you provide for integrating with and extending the functionality of a FlexMeasures installation?
What are the operating scales that FlexMeasures is designed for?
What are the most interesting, innovative, or unexpected ways that you have seen FlexMeasures used?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on FlexMeasures?
When is FlexMeasures the wrong choice?
What do you have planned for the future of FlexMeasures?
Keep In Touch
Website
@nhoening on Twitter
LinkedIn
Picks
Tobias
She-Hulk
Nicolas
Kleo on Netflix
Altair
Closing Announcements
Thank you for listening! Don’t forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. The Machine Learning Podcast helps you go from idea to production with machine learning.
Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
If you’ve learned something or tried out a project from the show then tell us about it! Email hosts@podcastinit.com with your story.
To help other people find the show please leave a review on iTunes and tell your friends and co-workers
Links
FlexMeasures:
Github
Linux Foundation Energy
Mailing List
Twitter
EyeQuant
Energy Management System
OpenEMS
ICT == Information and Communications Technology
HomeAssistant
Podcast Episode
FlexMeasures HomeAssistant Plugin
Universal Smart Energy Framework
PostgreSQL
Data Engineering Podcast Episode
TimescaleDB
Data Engineering Podcast Episode
OpenWeatherMap
Timely-Beliefs library
Flask
Click
Pyomo
scikit-learn
sktime
LF Energy
Flake8
MyPy
Podcast Episode
Black
Arima Model
Random Forest
The intro and outro music is from Requiem for a Fish by The Freak Fandango Orchestra / CC BY-SA
Summary
Your ability to build and maintain a software project is tempered by the strength of the team that you are working with. If you are in a position of leadership, then you are responsible for the growth and maintenance of that team. In this episode Jigar Desai, currently the SVP of engineering at Sisu Data, shares his experience as an engineering leader over the past several years and the useful insights he has gained into how to build effective engineering teams.
Announcements
Hello and welcome to Podcast.__init__, the podcast about Python and the people who make it great!
When you’re ready to launch your next app or want to try a project you hear about on the show, you’ll need somewhere to deploy it, so take a look at our friends over at Linode. With their managed Kubernetes platform it’s easy to get started with the next generation of deployment and scaling, powered by the battle tested Linode platform, including simple pricing, node balancers, 40Gbit networking, dedicated CPU and GPU instances, and worldwide data centers. And now you can launch a managed MySQL, Postgres, or Mongo database cluster in minutes to keep your critical data safe with automated backups and failover. Go to pythonpodcast.com/linode and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
The biggest challenge with modern data systems is understanding what data you have, where it is located, and who is using it. Select Star’s data discovery platform solves that out of the box, with a fully automated catalog that includes lineage from where the data originated, all the way to which dashboards rely on it and who is viewing them every day. Just connect it to your dbt, Snowflake, Tableau, Looker, or whatever you’re using and Select Star will set everything up in just a few hours. Go to pythonpodcast.com/selectstar today to double the length of your free trial and get a swag package when you convert to a paid plan.
Your host as usual is Tobias Macey and today I’m interviewing Jigar Desai about building effective engineering teams
Interview
Introductions
How did you get introduced to Python?
What have you found to be the central challenges involved in building an effective engineering team?
What are the measures that you use to determine what "effective" means for a given team?
How to establish mutual trust in an engineering team
Challenges introduced at different levels of team size/organizational complexity
Establishing and managing career ladders
You have mostly worked in heavily tech-focused companies. How do industry verticals impact the ways that you think about formation and structure of engineering teams?
What are some of the different roles that you might focus on hiring/team compositions in industries that aren’t purely software? (e.g. fintech, logistics, etc.)
Notable evolutions in engineering practices/paradigm shifts in the industry
What are some of the predictions that you have about how the future of engineering will look?
What impact do you think low-code/no-code solutions will have on the types of projects that code-first developers will be tasked with?
What are the most interesting, innovative, or unexpected ways that you have seen organizational leaders address the work of building and scaling engineering capacity?
What are the most interesting, unexpected, or challenging lessons that you have learned while working in engineering leadership?
What are the most informative mistakes that you would like to share?
What are some resources and reference material that you recommend for anyone responsible for the success of their engineering teams?
Keep In Touch
LinkedIn
Picks
Tobias
Bullet Train movie
Jigar
Top Gun Maverick movie
Closing Announcements
Thank you for listening! Don’t forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. The Machine Learning Podcast helps you go from idea to production with machine learning.
Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
If you’ve learned something or tried out a project from the show then tell us about it! Email hosts@podcastinit.com with your story.
To help other people find the show please leave a review on iTunes and tell your friends and co-workers
Links
Sisu Data
OpenStack
Java
The intro and outro music is from Requiem for a Fish by The Freak Fandango Orchestra / CC BY-SA
Summary
Working on hardware projects often has significant friction involved when compared to pure software. Brian Pugh enjoys tinkering with microcontrollers, but his "weekend projects" often took longer than a weekend to complete, so he created Belay. In this episode he explains how Belay simplifies the interactions involved in developing for MicroPython boards and how you can use it to speed up your own experimentation.
Announcements
Hello and welcome to Podcast.__init__, the podcast about Python and the people who make it great!
When you’re ready to launch your next app or want to try a project you hear about on the show, you’ll need somewhere to deploy it, so take a look at our friends over at Linode. With their managed Kubernetes platform it’s easy to get started with the next generation of deployment and scaling, powered by the battle tested Linode platform, including simple pricing, node balancers, 40Gbit networking, dedicated CPU and GPU instances, and worldwide data centers. And now you can launch a managed MySQL, Postgres, or Mongo database cluster in minutes to keep your critical data safe with automated backups and failover. Go to pythonpodcast.com/linode and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
The biggest challenge with modern data systems is understanding what data you have, where it is located, and who is using it. Select Star’s data discovery platform solves that out of the box, with a fully automated catalog that includes lineage from where the data originated, all the way to which dashboards rely on it and who is viewing them every day. Just connect it to your dbt, Snowflake, Tableau, Looker, or whatever you’re using and Select Star will set everything up in just a few hours. Go to pythonpodcast.com/selectstar today to double the length of your free trial and get a swag package when you convert to a paid plan.
Your host as usual is Tobias Macey and today I’m interviewing Brian Pugh about Belay, a Python library that enables the rapid development of projects that interact with hardware via a MicroPython-compatible board
Interview
Introductions
How did you get introduced to Python?
Can you describe what Belay is and the story behind it?
Who are the target users for Belay?
What are some of the points of friction involved in developing for hardware projects?
What are some of the features of Belay that make that a smoother process?
What are some of the ways that simplifying the develop/debug cycles can improve the overall experience of developing for hardware platforms?
What are some of the inherent limitations of constrained hardware that Belay is unable to paper over?
Can you describe how Belay is implemented?
What does the workflow look like when using Belay as compared to using MicroPython directly?
What are some of the ways that you are using Belay in your own projects?
What are the most interesting, innovative, or unexpected ways that you have seen Belay used?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on Belay?
When is Belay the wrong choice?
What do you have planned for the future of Belay?
Keep In Touch
BrianPugh on GitHub
LinkedIn
Picks
Tobias
Gunnar Computer Glasses
Closing Announcements
Thank you for listening! Don’t forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. The Machine Learning Podcast helps you go from idea to production with machine learning.
Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
If you’ve learned something or tried out a project from the show then tell us about it! Email hosts@podcastinit.com with your story.
To help other people find the show please leave a review on iTunes and tell your friends and co-workers
Links
Belay
Geomagical
PIC Microcontroller
AVR Microcontroller
Matlab
MicroPython
Podcast Episode
CircuitPython
Podcast Episode
Celery
Potentiometer
Raspberry Pi
Raspberry Pi Pico
ADC == Analog-to-Digital Converter
Thonny
Podcast Episode
Adafruit
Pyboard
Python Inspect Module
Python Tokenize
Magnetometer Project
Lidar
The intro and outro music is from Requiem for a Fish by The Freak Fandango Orchestra / CC BY-SA
Summary
Static typing versus dynamic typing is one of the oldest debates in software development. In recent years a number of dynamic languages have worked toward a middle ground by adding support for type hints. Python’s type annotations have given rise to an ecosystem of tools that use that type information to validate the correctness of programs and help identify potential bugs. At Instagram they created the Pyre project with a focus on speed to allow for scaling to huge Python projects. In this episode Shannon Zhu discusses how it is implemented, how to use it in your development process, and how it compares to other type checkers in the Python ecosystem.
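For readers unfamiliar with gradual typing, here is a minimal sketch (plain Python type hints per PEP 484, not anything Pyre-specific) of the kind of annotation that a checker such as Pyre or mypy validates before the code ever runs:

```python
def mean(values: list[float]) -> float:
    """Average a list of numbers; annotations declare the contract."""
    return sum(values) / len(values)


# A well-typed call: the argument matches list[float].
result = mean([1.0, 2.0, 3.0])
print(result)  # 2.0

# A type checker would flag the following call at analysis time,
# because a str is not a list[float] (uncomment to see the error
# reported by `pyre check` or `mypy`):
# mean("not a list")
```

Because the annotations are optional, a codebase can adopt them gradually, which is the "gradual typing" model that Pyre was built around.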
Announcements
Hello and welcome to Podcast.__init__, the podcast about Python’s role in data and science.
When you’re ready to launch your next app or want to try a project you hear about on the show, you’ll need somewhere to deploy it, so take a look at our friends over at Linode. With their managed Kubernetes platform it’s easy to get started with the next generation of deployment and scaling, powered by the battle tested Linode platform, including simple pricing, node balancers, 40Gbit networking, dedicated CPU and GPU instances, and worldwide data centers. And now you can launch a managed MySQL, Postgres, or Mongo database cluster in minutes to keep your critical data safe with automated backups and failover. Go to pythonpodcast.com/linode and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
Your host as usual is Tobias Macey and today I’m interviewing Shannon Zhu about Pyre, a type checker for Python 3 built from the ground up to support gradual typing and deliver responsive incremental checks
Interview
Introductions
How did you get introduced to Python?
Can you describe what Pyre is and the story behind it?
There have been a number of tools created to support various aspects of typing for Python. How would you describe the various goals that they support and how Pyre fits in that ecosystem?
What are the core goals and notable features of Pyre?
Can you describe how Pyre is implemented?
How have the design and goals of the project changed/evolved since you started working on it?
What are the different ways that Pyre is used in the development workflow for a team or individual?
What are some of the challenges/roadblocks that people run into when adopting type definitions in their Python projects?
How has the evolution of type annotations and overall support for them affected your work on Pyre?
As someone who is working closely with type systems, what are the strongest aspects of Python’s implementation and opportunities for improvement?
What are the most interesting, innovative, or unexpected ways that you have seen Pyre used?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on Pyre?
When is Pyre the wrong choice?
What do you have planned for the future of Pyre?
Keep In Touch
shannonzhu on GitHub
Picks
Tobias
Lord Of The Rings: The Rings of Power on Amazon Video
Shannon
King’s Dilemma board game
Closing Announcements
Thank you for listening! Don’t forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. The Machine Learning Podcast helps you go from idea to production with machine learning.
Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
If you’ve learned something or tried out a project from the show then tell us about it! Email hosts@podcastinit.com with your story.
To help other people find the show please leave a review on iTunes and tell your friends and co-workers
Links
Pyre
MyPy
Podcast Episode
Pyright
PyType
MonkeyType
Podcast Episode
Java
C
PEP 484
Flow
Hack
Continuous Integration
OCaml
PEP 675 – Arbitrary Literal String Type
Gradual Typing
AST == Abstract Syntax Tree
Language Server Protocol
Tensor
Type Arithmetic
PyCon: Securing Code With The Python Type System
PyCon: Type Checked Python In The Real World
PyCon: Łukasz Langa 2022 Keynote
The intro and outro music is from Requiem for a Fish by The Freak Fandango Orchestra / CC BY-SA
Summary
Every software project is subject to a series of decisions and tradeoffs. One of the first decisions to make is which programming language to use. For companies where their product is software, this is a decision that can have significant impact on their overall success. In this episode Sean Knapp discusses the languages that his team at Ascend use for building a service that powers complex and business critical data workflows. He also explains his motivation to standardize on Python for all layers of their system to improve developer productivity.
Announcements
Hello and welcome to Podcast.__init__, the podcast about Python’s role in data and science.
When you’re ready to launch your next app or want to try a project you hear about on the show, you’ll need somewhere to deploy it, so take a look at our friends over at Linode. With their managed Kubernetes platform it’s easy to get started with the next generation of deployment and scaling, powered by the battle tested Linode platform, including simple pricing, node balancers, 40Gbit networking, dedicated CPU and GPU instances, and worldwide data centers. And now you can launch a managed MySQL, Postgres, or Mongo database cluster in minutes to keep your critical data safe with automated backups and failover. Go to pythonpodcast.com/linode and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
Your host as usual is Tobias Macey and today I’m interviewing Sean Knapp about his motivations and experiences standardizing on Python for development at Ascend
Interview
Introductions
How did you get introduced to Python?
Can you describe what Ascend is and the story behind it?
How many engineers work at Ascend?
What are their different areas of focus?
What are your policies for selecting which technologies (e.g. languages, frameworks, dev tooling, deployment, etc.) are supported at Ascend?
What does it mean for a technology to be supported?
You recently started standardizing on Python as the default language for development. How has Python been used up to now?
What other languages are in common use at Ascend?
What are some of the challenges/difficulties that motivated you to establish this policy?
What are some of the tradeoffs that you have seen in the adoption of Python in place of your other adopted languages?
How are you managing ongoing maintenance of projects/products that are not written in Python?
What are some of the potential pitfalls/risks that you are guarding against in your investment in Python?
What are the most interesting, innovative, or unexpected ways that you have seen Python used where it was previously a different technology?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on aligning all of your development on a single language?
When is Python the wrong choice?
What do you have planned for the future of engineering practices at Ascend?
Keep In Touch
LinkedIn
@seanknapp on Twitter
Picks
Tobias
Delver Lens app for scanning Magic: The Gathering cards
Sean
Typer
DuckDB
Amp It Up book (affiliate link)
Closing Announcements
Thank you for listening! Don’t forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. The Machine Learning Podcast helps you go from idea to production with machine learning.
Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
If you’ve learned something or tried out a project from the show then tell us about it! Email hosts@podcastinit.com with your story.
To help other people find the show please leave a review on iTunes and tell your friends and co-workers
Links
Ascend
Data Engineering Podcast Episode
Perl
Google Sawzall
Technical Debt
Ruby
gRPC
Go Language
Java
PySpark
Apache Arrow
Thrift
SQL
Scala
Snowpark (Snowflake runtime for Python)
Typer CLI framework
Pydantic
Podcast Episode
Pulumi
Podcast Episode
PyInfra
Podcast Episode
Packer
Plotly Dash
DuckDB
The intro and outro music is from Requiem for a Fish by The Freak Fandango Orchestra / CC BY-SA
Summary
Writing code is only one piece of creating good software. Code reviews are an important step in the process of building applications that are maintainable and sustainable. In this episode On Freund shares his thoughts on the myriad purposes that code reviews serve, as well as exploring some of the patterns and anti-patterns that grow up around a seemingly simple process.
Announcements
Hello and welcome to Podcast.__init__, the podcast about Python’s role in data and science.
When you’re ready to launch your next app or want to try a project you hear about on the show, you’ll need somewhere to deploy it, so take a look at our friends over at Linode. With their managed Kubernetes platform it’s easy to get started with the next generation of deployment and scaling, powered by the battle tested Linode platform, including simple pricing, node balancers, 40Gbit networking, dedicated CPU and GPU instances, and worldwide data centers. And now you can launch a managed MySQL, Postgres, or Mongo database cluster in minutes to keep your critical data safe with automated backups and failover. Go to pythonpodcast.com/linode and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
Your host as usual is Tobias Macey and today I’m interviewing On Freund about the intricacies and importance of code reviews
Interview
Introductions
How did you get introduced to Python?
Can you start by giving us your description of what a code review is?
What is the purpose of the code review?
At face value a code review appears to be a simple task. What are some of the subtleties that become evident with time and experience?
What are some of the ways that code reviews can go wrong?
What are some common anti-patterns that get applied to code reviews?
What are the elements of code review that are useful to automate?
What are some of the risks/bad habits that can result from overdoing automated checks/fixes or over-reliance on those tools in code reviews?
Identifying who can/should do a review for a piece of code
How to use code reviews as a teaching tool for new/junior engineers
How to use code reviews for avoiding siloed experience/promoting cross-training
PR templates for capturing relevant context
What are the most interesting, innovative, or unexpected ways that you have seen code reviews used?
What are the most interesting, unexpected, or challenging lessons that you have learned while leading and supporting engineering teams?
What are some resources that you recommend for anyone who wants to learn more about code review strategies and how to use them to scale their teams?
Keep In Touch
LinkedIn
@onfreund on Twitter
Picks
Tobias
The Girl Who Drank The Moon
On
Better Call Saul
Closing Announcements
Thank you for listening! Don’t forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. The Machine Learning Podcast helps you go from idea to production with machine learning.
Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
If you’ve learned something or tried out a project from the show then tell us about it! Email hosts@podcastinit.com with your story.
To help other people find the show please leave a review on iTunes and tell your friends and co-workers
Links
Wilco
Code Review
Home Assistant
Podcast Episode
Trunk-based Development
Git Flow
Pair Programming
Feature Flags
Podcast Episode
KPI == Key Performance Indicator
MIT Open Learning Engineering Handbook
PEP Repository
The intro and outro music is from Requiem for a Fish by The Freak Fandango Orchestra / CC BY-SA
Summary
Quality assurance in the software industry has become a shared responsibility in most organizations. Given the rapid pace of development and delivery it can be challenging to ensure that your application is still working the way it’s supposed to with each release. In this episode Jonathon Wright discusses the role of quality assurance in modern software teams and how automation can help.
Announcements
Hello and welcome to Podcast.__init__, the podcast about Python’s role in data and science.
When you’re ready to launch your next app or want to try a project you hear about on the show, you’ll need somewhere to deploy it, so take a look at our friends over at Linode. With their managed Kubernetes platform it’s easy to get started with the next generation of deployment and scaling, powered by the battle tested Linode platform, including simple pricing, node balancers, 40Gbit networking, dedicated CPU and GPU instances, and worldwide data centers. And now you can launch a managed MySQL, Postgres, or Mongo database cluster in minutes to keep your critical data safe with automated backups and failover. Go to pythonpodcast.com/linode and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
Your host as usual is Tobias Macey and today I’m interviewing Jonathon Wright about the role of automation in your testing and QA strategies
Interview
Introductions
How did you get introduced to Python?
Can you share your relationship with software testing/QA and automation?
What are the main categories of how companies and software teams address testing and validation of their applications?
What are some of the notable tradeoffs/challenges among those approaches?
With the increased adoption of agile practices and the "shift left" mentality of DevOps, who is responsible for software quality?
What are some of the cases where a discrete QA role or team becomes necessary? (or is it always necessary?)
With testing and validation being a shared responsibility, competing with other priorities, what role does automation play?
What are some of the ways that automation manifests in software quality and testing?
How is automation distinct from software tests and CI/CD?
For teams who are investing in automation for their applications, what are the questions they should be asking to identify what solutions to adopt? (what are the decision points in the build vs. buy equation?)
At what stage(s) of the software lifecycle does automation live?
What is the process for identifying which capabilities and interactions to target during the initial application of automation for QA and validation?
One of the perennial challenges with any software testing, particularly for anything in the UI, is that it is a constantly moving target. What are some of the patterns and techniques, both from a developer and tooling perspective, that increase the robustness of automated validation?
What are the most interesting, innovative, or unexpected ways that you have seen automation used for QA?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on QA and automation?
When is automation the wrong choice?
What are some of the resources that you recommend for anyone who wants to learn more about this topic?
Keep In Touch
LinkedIn
@Jonathon_Wright on Twitter
Website
Picks
Tobias
The Sandman Netflix series and Graphic Novels by Neil Gaiman
Jonathon
House of the Dragon HBO series
Mystic Quest TV series
It’s Always Sunny in Philadelphia
Links
Haskell
Idris
Esperanto
Klingon
Planguage
Lisp Language
TDD == Test Driven Development
BDD == Behavior Driven Development
Gherkin Format
Integration Testing
Chaos Engineering
Gremlin
Chaos Toolkit
Podcast Episode
Requirements Engineering
Keysight
QA Lead Podcast
Cognitive Learning TED Talk
OpenTelemetry
Podcast Episode
Quality Engineering
Selenium
Swagger
XPath
Regular Expression
Test Guild
The intro and outro music is from Requiem for a Fish by The Freak Fandango Orchestra / CC BY-SA
Summary
The goal of every software team is to get their code into production without breaking anything. This requires establishing a repeatable process that doesn’t introduce unnecessary roadblocks and friction. In this episode Ronak Rahman discusses the challenges that development teams encounter when trying to build and maintain velocity in their work, the role that access to infrastructure plays in that process, and how to build automation and guardrails for everyone to take part in the delivery process.
Announcements
Hello and welcome to Podcast.__init__, the podcast about Python’s role in data and science.
When you’re ready to launch your next app or want to try a project you hear about on the show, you’ll need somewhere to deploy it, so take a look at our friends over at Linode. With their managed Kubernetes platform it’s easy to get started with the next generation of deployment and scaling, powered by the battle tested Linode platform, including simple pricing, node balancers, 40Gbit networking, dedicated CPU and GPU instances, and worldwide data centers. And now you can launch a managed MySQL, Postgres, or Mongo database cluster in minutes to keep your critical data safe with automated backups and failover. Go to pythonpodcast.com/linode and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
Your host as usual is Tobias Macey and today I’m interviewing Ronak Rahman about how automating the path to production helps to build and maintain development velocity
Interview
Introductions
How did you get introduced to Python?
Can you describe what Quali is and the story behind it?
What are the problems that you are trying to solve for software teams?
How does Quali help to address those challenges?
What are the bad habits that engineers fall into when they experience friction with getting their code into test and production environments?
How do those habits contribute to negative feedback loops?
What are signs that developers and managers need to watch for that signal the need for investment in developer experience improvements on the path to production?
Can you describe what you have built at Quali and how it is implemented?
How have the design and goals shifted/evolved from when you first started working on it?
What are the positive and negative impacts that you have seen from the evolving set of options for application deployments? (e.g. K8s, containers, VMs, PaaS, FaaS, etc.)
Can you describe how Quali fits into the workflow of software teams?
Once a team has established patterns for deploying their software, what are some of the disruptions to their flow that they should guard against?
What are the most interesting, innovative, or unexpected ways that you have seen Quali used?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on Quali?
When is Quali the wrong choice?
What do you have planned for the future of Quali?
Keep In Touch
@OfRonak on Twitter
Picks
Tobias
The Terminal List on Amazon
Ronak
Midnight Gospel on Amazon
Closing Announcements
Thank you for listening! Don’t forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. The Machine Learning Podcast helps you go from idea to production with machine learning.
Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
If you’ve learned something or tried out a project from the show then tell us about it! Email hosts@podcastinit.com with your story.
To help other people find the show please leave a review on iTunes and tell your friends and co-workers
Links
Quali
Torque
Visual Studio Plugin
Subversion
IaC == Infrastructure as Code
DevOps
Terraform
Pulumi
Podcast Episode
Cloudformation
Flask
Summary
Every startup begins with an idea, but that won’t get you very far without testing the feasibility of that idea. A common practice is to build a Minimum Viable Product (MVP) that addresses the problem that you are trying to solve and working with early customers as they engage with that MVP. In this episode Tony Pavlovych shares his thoughts on Python’s strengths when building and launching that MVP and some of the potential pitfalls that businesses can run into on that path.
Announcements
Your host as usual is Tobias Macey and today I’m interviewing Tony Pavlovych about Python’s strengths for startups and the steps to building an MVP (minimum viable product)
Interview
Introductions
How did you get introduced to Python?
Can you describe what PLANEKS is and the story behind it?
One of the services that you offer is building an MVP. What are the goals and outcomes associated with an MVP?
What is the process for identifying the product focus and feature scope?
What are some of the common misconceptions about building and launching MVPs that you have dealt with in your work with customers?
What are the common pitfalls that companies encounter when building and validating an MVP?
Can you describe the set of tools and frameworks (e.g. Django, Poetry, cookiecutter, etc.) that you have invested in to reduce the overhead of starting and maintaining velocity on multiple projects?
What are the configurations that are most critical to keep constant across projects to maintain familiarity and sanity for your developers? (e.g. linting rules, build toolchains, etc.)
What are the architectural patterns that you have found most useful to make MVPs flexible for adaptation and extension?
Once the MVP is built and launched, what are the next steps to validate the product and determine priorities?
What benefits do you get from choosing Python as your language for building an MVP/launching a startup?
What are the challenges/risks involved in that choice?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on MVPs for your clients at PLANEKS?
When is an MVP the wrong choice?
What are the developments in the Python and broader software ecosystem that you are most interested in for the work you are doing for your team and clients?
Keep In Touch
LinkedIn
Picks
Tobias
datamodel-code-generator
Tony
Screw It, Let’s Do It by Richard Branson (affiliate link)
Links
PLANEKS
Minimum Viable Product
Django
Cookiecutter
Django Boilerplate
OCR == Optical Character Recognition
Tesseract OCR framework
Summary
Application architectures have been in a constant state of evolution as new infrastructure capabilities are introduced. Virtualization, cloud, containers, mobile, and now web assembly have each introduced new options for how to build and deploy software. Recognizing the transformative potential of web assembly, Matt Butcher and his team at Fermyon are investing in tooling and services to improve the developer experience. In this episode he explains the opportunity that web assembly offers to all language communities, what they are building to power lightweight server-side microservices, and how Python developers can get started building and contributing to this nascent ecosystem.
Announcements
Need to automate your Python code in the cloud? Want to avoid the hassle of setting up and maintaining infrastructure? Shipyard is the premier orchestration platform built to help you quickly launch, monitor, and share python workflows in a matter of minutes with 0 changes to your code. Shipyard provides powerful features like webhooks, error-handling, monitoring, automatic containerization, syncing with Github, and more. Plus, it comes with over 70 open-source, low-code templates to help you quickly build solutions with the tools you already use. Go to dataengineeringpodcast.com/shipyard to get started automating with a free developer plan today!
Your host as usual is Tobias Macey and today I’m interviewing Matt Butcher about Fermyon and the impact of WebAssembly on software architecture and deployment across language boundaries
Interview
Introductions
How did you get introduced to Python?
For anyone who isn’t familiar with WebAssembly can you give your elevator pitch for why it matters?
What is the current state of language support for Python in the WASM ecosystem?
Can you describe what Fermyon is and the story behind it?
What are your goals with Fermyon and what are the products that you are building to support those goals?
There has been a steady progression of technologies aimed at better ways to build, deploy, and manage software (e.g. virtualization, cloud, containers, etc.). What are the problems with the previous options and how does WASM address them?
What are some examples of the types of applications/services that work well in a WASM environment?
Can you describe how you have architected the Fermyon platform?
How did you approach the design of the interfaces and tooling to support developer ergonomics?
How have the design and goals of the platform changed or evolved since you started working on it?
Can you describe what a typical workflow is for an application team that is using Spin/Fermyon to build and deploy a service?
What are some of the architectural patterns that WASM/Fermyon encourage?
What are some of the limitations that WASM imposes on services using it as a runtime? (e.g. system access, threading/multiprocessing, library support, C extensions, etc.)
What are the new and emerging topics and capabilities in the WASM ecosystem that you are keeping track of?
With Spin as the core building block of your platform, how are you approaching governance and sustainability of the open source project?
What are your guiding principles for when a capability belongs in the OSS vs. commercial offerings?
What are the most interesting, innovative, or unexpected ways that you have seen Fermyon used?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on Fermyon?
When is Fermyon the wrong choice?
What do you have planned for the future of Fermyon?
Keep In Touch
LinkedIn
@technosophos on Twitter
technosophos on GitHub
Picks
Tobias
Thor: Love & Thunder movie
Matt
Remembrance of Earth’s Past trilogy ("Three Body Problem" is the first) by Cixin Liu
Links
Fermyon
Our Python entry for the Wasm Language Matrix
SingleStore’s WASI-Python
Great notes about Wasm support in CPython
Pyodide for Python in the Browser
SlashDot
Web Assembly (WASM)
Rust
AssemblyScript
Grain WASM language
SingleStore
Data Engineering Podcast Episode
WASI
PyO3
PyOxidizer
RustPython
Drupal
OpenStack
Deis
Helm
RedPanda
Data Engineering Podcast Episode
Envoy Proxy
Fastly
Functions as a Service
CloudEvents
Finicky Whiskers
Fermyon Spin
Nomad
Tree Shaking
Zappa
Chalice
OpenFaaS
CNCF
Bytecode Alliance
Finicky Whiskers Minecraft
Kotlin
Summary
As your code scales beyond a trivial level of complexity and sophistication it becomes difficult or impossible to know everything that it is doing. The flow of logic and data through your software and which parts are taking the most time are impossible to understand without help from your tools. VizTracer is the tool that you will turn to when you need to know all of the execution paths that are being exercised and which of those paths are the most expensive. In this episode Tian Gao explains why he created VizTracer and how you can use it to gain a deeper familiarity with the code that you are responsible for maintaining.
Announcements
Your host as usual is Tobias Macey and today I’m interviewing Tian Gao about VizTracer, a low-overhead logging/debugging/profiling tool that can trace and visualize your python code execution
Interview
Introductions
How did you get introduced to Python?
Can you describe what VizTracer is and the story behind it?
What are the main goals that you are focused on with VizTracer?
What are some examples of the types of bugs that profiling can help diagnose?
How does profiling work together with other debugging approaches? (e.g. logging, breakpoint debugging, etc.)
There are a number of profiling utilities for Python. What feature or combination of features were missing that motivated you to create VizTracer?
Can you describe how VizTracer is implemented?
How have the design and goals changed since you started working on it?
There are a number of styles of profiling, what was your process for deciding which approach to use?
What are the most complex engineering tasks involved in building a profiling utility?
Can you describe the process of using VizTracer to identify and debug errors and performance issues in a project?
What are the options for using VizTracer in a production environment?
What are the interfaces and extension points that you have built in to allow developers to customize VizTracer?
What are some of the ways that you have used VizTracer while working on VizTracer?
What are the most interesting, innovative, or unexpected ways that you have seen VizTracer used?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on VizTracer?
When is VizTracer the wrong choice?
What do you have planned for the future of VizTracer?
Keep In Touch
gaogaotiantian on GitHub
LinkedIn
Picks
Tobias
Travelers show on Netflix
Tian
objprint
Lincoln Lawyer
bilibili – Tian’s coding sessions in Chinese
Links
Viztracer
Python cProfile
Sampling Profiler
Perfetto
Coverage.py
Podcast Episode
Python setprofile hook
Circular Buffer
Catapult Trace Viewer
py-spy
psutil
gdb
Flame graph
The intro and outro music is from Requiem for a Fish The Freak Fandango Orchestra / CC BY-SA
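VizTracer itself isn't sketched here, but the standard-library cProfile linked above illustrates the deterministic-profiling idea the episode contrasts with sampling profilers. A minimal stdlib sketch (the function and call counts are purely illustrative):

```python
import cProfile
import io
import pstats

def slow_sum(n):
    # Deliberately naive loop so the profiler has something to measure.
    total = 0
    for i in range(n):
        total += i * i
    return total

profiler = cProfile.Profile()
profiler.enable()
result = slow_sum(100_000)
profiler.disable()

# Render the collected stats to a string instead of stdout.
buffer = io.StringIO()
pstats.Stats(profiler, stream=buffer).sort_stats("cumulative").print_stats(5)
report = buffer.getvalue()
print("slow_sum" in report)
```

Tools like VizTracer build on the same tracing hooks but record every entry and exit with timestamps, which is what makes timeline visualization possible.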
Summary
Analysis of streaming data in real time has long been the domain of big data frameworks, predominantly written in Java. In order to take advantage of those capabilities from Python requires using client libraries that suffer from impedance mis-matches that make the work harder than necessary. Bytewax is a new open source platform for writing stream processing applications in pure Python that don’t have to be translated into foreign idioms. In this episode Bytewax founder Zander Matheson explains how the system works and how to get started with it today.
Announcements
The biggest challenge with modern data systems is understanding what data you have, where it is located, and who is using it. Select Star’s data discovery platform solves that out of the box, with a fully automated catalog that includes lineage from where the data originated, all the way to which dashboards rely on it and who is viewing them every day. Just connect it to your dbt, Snowflake, Tableau, Looker, or whatever you’re using and Select Star will set everything up in just a few hours. Go to pythonpodcast.com/selectstar today to double the length of your free trial and get a swag package when you convert to a paid plan.
Your host as usual is Tobias Macey and today I’m interviewing Zander Matheson about Bytewax, an open source Python framework for building highly scalable dataflows to process ANY data stream.
Interview
Introductions
How did you get introduced to Python?
Can you describe what Bytewax is and the story behind it?
Who are the target users for Bytewax?
What is the problem that you are trying to solve with Bytewax?
What are the alternative systems/architectures that you might replace with Bytewax?
Can you describe how Bytewax is implemented?
What are the benefits of Timely Dataflow as a core building block for a system like Bytewax?
How have the design and goals of the project changed/evolved since you first started working on it?
What are the axes available for scaling Bytewax execution?
How have you approached the design of the Bytewax API to make it accessible to a broader audience?
Can you describe what is involved in building a project with Bytewax?
What are some of the stream processing concepts that engineers are likely to run up against as they are experimenting and designing their code?
What is your motivation for providing the core technology of your business as an open source engine?
How are you approaching the balance of project governance and sustainability with opportunities for commercialization?
What are the most interesting, innovative, or unexpected ways that you have seen Bytewax used?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on Bytewax?
When is Bytewax the wrong choice?
What do you have planned for the future of Bytewax?
Keep In Touch
Slack
Twitter
LinkedIn
Picks
Tobias
Alta Racks
Zander
Atherton Bikes
Links
Bytewax
GitHub
Flink
Data Engineering Podcast Episode
Spark Streaming
Kafka Connect
Faust
Podcast Episode
Ray
Podcast Episode
Dask
Data Engineering Podcast Episode
Timely Dataflow
PyO3
Materialize
Data Engineering Podcast Episode
HyperLogLog
Python River Library
Shannon Entropy Calculation
The blog post using incremental shannon entropy
NATS
waxctl
Prometheus
Grafana
Streamz
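The incremental Shannon entropy mentioned in the links can be sketched with the standard library alone. This is a simplified illustration (the linked blog post and the River library have their own, more efficient implementations; the class name here is made up):

```python
import math
from collections import Counter

class StreamEntropy:
    """Track the Shannon entropy of a symbol stream one item at a time."""

    def __init__(self):
        self.counts = Counter()
        self.total = 0

    def update(self, symbol):
        self.counts[symbol] += 1
        self.total += 1
        return self.entropy()

    def entropy(self):
        # H = -sum(p * log2(p)); recomputed per update for clarity, though a
        # truly incremental version would adjust only the changed term.
        n = self.total
        return -sum((c / n) * math.log2(c / n) for c in self.counts.values())

stream = StreamEntropy()
for symbol in "aabb":
    h = stream.update(symbol)
print(h)  # a 50/50 split over two symbols carries exactly 1 bit
```

Per-item updates like this are the essence of the streaming model: state is maintained and revised as each event arrives, rather than recomputed over a stored batch.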
Summary
Building a fully functional web application has been growing in complexity along with the growing popularity of javascript UI frameworks such as React, Vue, Angular, etc. Users have grown to expect interactive experiences with dynamic page updates, which leads to duplicated business logic and complex API contracts between the server-side application and the Javascript front-end. To reduce the friction involved in writing and maintaining a full application Sam Willis created Tetra, a framework built on top of Django that embeds the Javascript logic into the Python context where it is used. In this episode he explains his design goals for the project, how it has helped him build applications more rapidly, and how you can start using it to build your own projects today.
Announcements
So now your modern data stack is set up. How is everyone going to find the data they need, and understand it? Select Star is a data discovery platform that automatically analyzes & documents your data. For every table in Select Star, you can find out where the data originated, which dashboards are built on top of it, who’s using it in the company, and how they’re using it, all the way down to the SQL queries. Best of all, it’s simple to set up, and easy for both engineering and operations teams to use. With Select Star’s data catalog, a single source of truth for your data is built in minutes, even across thousands of datasets. Try it out for free and double the length of your free trial today at pythonpodcast.com/selectstar. You’ll also get a swag package when you continue on a paid plan.
Your host as usual is Tobias Macey and today I’m interviewing Sam Willis about Tetra, a full stack component framework for your Django applications
Interview
Introductions
How did you get introduced to Python?
Can you describe what Tetra is and the story behind it?
What are the problems that you are aiming to solve with this project?
What are some of the other ways that you have addressed those problems?
What are the shortcomings that you encountered with those solutions?
What was missing in the existing landscape of full-stack application development patterns that prompted you to build a new meta-framework?
What are some of the sources of inspiration (positive and negative) that you looked to while deciding on the component selection and implementation strategy?
Can you describe how Tetra is implemented?
What are the core principles that you are relying on to drive your design of APIs and developer experience?
What is the process for building a full component in Tetra?
What are some of the application design challenges that are introduced by combining the javascript and Django logic and attributes? (e.g. reusing JS logic/CSS styles across components)
A perennial challenge with combining the syntax across multiple languages in a single file is editor support. How are you thinking about that with Tetra’s implementation?
What is your grand vision for Tetra and how are you working to make it sustainable?
What are the most interesting, innovative, or unexpected ways that you have seen Tetra used?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on Tetra?
When is Tetra the wrong choice?
What do you have planned for the future of Tetra?
Keep In Touch
@samwillis on Twitter
Website
LinkedIn
samwillis on GitHub
Picks
Tobias
The Machine Learning Podcast
Sam
Slow Horses TV Show
Links
Tetra Framework
Django
PHP
ASP
Alpine.js
HTMX
Ruby
Ruby on Rails
Flutterbox
Vue.js
Laravel Livewire
Python Import Hooks
python-inline-source
Tailwind CSS
PostCSS
Pickle
Fernet
esbuild
Webpack
Rich
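The Pickle and Fernet links above hint at how Tetra round-trips component state through the client. The integrity-checking idea can be sketched with the standard library's hmac in place of Fernet (a hedged illustration, not Tetra's actual code; the key and state dict are placeholders):

```python
import hashlib
import hmac
import pickle

SECRET_KEY = b"server-side-secret"  # placeholder; never hard-code real keys

def dump_signed(obj):
    """Serialize obj and prepend an HMAC so tampering is detectable."""
    payload = pickle.dumps(obj)
    signature = hmac.new(SECRET_KEY, payload, hashlib.sha256).digest()
    return signature + payload

def load_signed(blob):
    """Verify the HMAC before unpickling untrusted bytes."""
    signature, payload = blob[:32], blob[32:]
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(signature, expected):
        raise ValueError("state was tampered with")
    return pickle.loads(payload)

state = {"counter": 3, "title": "hello"}
assert load_signed(dump_signed(state)) == state
```

Signing (or, as with Fernet, also encrypting) the serialized state is what makes it safe to hand pickled bytes to the browser and accept them back, since unpickling attacker-controlled data is otherwise a code-execution risk.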
I ought to thank you for the endeavors you have made recorded as a printed assembling this article. https://www.modelescortsindelhi.com/karol-bagh-call-girls-savita.html https://www.modelescortsindelhi.com/pondicherry-escorts.html https://www.modelescortsindelhi.com/pune-escorts-services.html https://www.modelescortsindelhi.com/sexy-call-girls-raipur.html https://www.modelescortsindelhi.com/call-girl-lucknow.html I'm trusting in a near best work from you in the future other than.
I could truly propose of ensured motivation for you a monster happen with the respected data you've here with this post. https://www.modelescortsindelhi.com/call-girls-south-ex-soniya.html https://www.modelescortsindelhi.com/green-park-call-girls.html https://www.modelescortsindelhi.com/call-girls-hauz-khas-napur.html https://www.modelescortsindelhi.com/call-girls-munirka-roma.html https://www.modelescortsindelhi.com/call-girls-in-okhla.html I'm returning to your site to get totally more soon
Appreciation for picking a standard a part to look at this, https://www.modelescortsindelhi.com/aerocity-escorts.html https://www.modelescortsindelhi.com/call-girls-paharganj-sana.html https://www.modelescortsindelhi.com/call-girls-in-rohini.html https://www.modelescortsindelhi.com/call-girls-south-delhi-meenu.html https://www.modelescortsindelhi.com/mahipalpur-escorts.html I have a stunning viewpoint toward it and love focusing in on more on this point. It is colossally key for me. Again appreciation for such a central help
Fair blog here! Correspondingly your site loads up astoundingly useful! https://www.bangaloreescortindia.co.in What web have could you at whatever point anytime finally say you are the use of? Will I'm getting your extra detail on your host? I wish my site piled up as focal as yours haha