#161: Test-Driven Development in the Age of AI with Clare Sudbery
Description
AI might write your code, but can you trust it to do it well? Clare Sudbery says: not without a safety net. In this episode, she explains how test-driven development is evolving in the age of AI, and why developers need to slow down, not speed up.
Overview
In this episode, Brian sits down with Clare Sudbery, an experienced developer, TDD advocate, and all-around brilliant explainer, to unpack the evolving relationship between test-driven development and AI-generated code. From skeptical beginnings to cautiously optimistic experimentation, Clare shares how testing isn't just still relevant; it might be more essential than ever.
They explore how TDD offers a safety net when using GenAI tools, the risks of blindly trusting AI output, and why treating AI like a helpful human is where many developers go wrong. Whether you’re an AI early adopter or still on the fence, this conversation will sharpen your thinking about quality, ethics, and the role of human judgment in modern software development.
References and resources mentioned in the show:
Clare Sudbery
Clare’s upcoming Software Architecture Gathering 2025 workshop
Clare at GOTO
AI Practice Prompts For Scrum Masters
#99: AI & Agile Learning with Hunter Hillegas
#117: How AI and Automation Are Redefining Success for Developers with Lance Dacy
Subscribe to the Agile Mentors Podcast
Want to get involved?
This show is designed for you, and we’d love your input.
- Enjoyed what you heard today? Please leave a rating and a review. It really helps, and we read every single one.
- Got an Agile subject you’d like us to discuss or a question that needs an answer? Share your thoughts with us at podcast@mountaingoatsoftware.com
This episode’s presenters are:
Brian Milner is a Certified Scrum Trainer®, Certified Scrum Professional®, Certified ScrumMaster®, and Certified Scrum Product Owner®, and host of the Agile Mentors Podcast at Mountain Goat Software. He's passionate about making a difference in people's day-to-day work, influenced by his own experience of transitioning to Scrum and seeing improvements in work/life balance, honesty, respect, and the quality of work.
Clare Sudbery is an independent technical coach, conference speaker, and published novelist who helps teams rediscover their "geek joy" through better software practices. She writes and speaks widely on test-driven development, ethical AI, and women in tech, bringing clarity, humor, and decades of hands-on experience to every talk and workshop.
Auto-generated Transcript:
Brian Milner (00:00 )
Welcome in, Agile Mentors. We're back for another episode of the Agile Mentors Podcast. I'm here, as always, Brian Milner. But today, I have Ms. Clare Sudbery with me. Welcome in, Clare.
Clare Sudbery (00:13 )
Hello.
Brian Milner (00:14 )
I'm so happy to have you here. Clare is here with us because we wanted to talk about a topic that I think is going to be interesting to a lot of people, and that is test-driven development, but not just test-driven development: test-driven development in light of AI and kind of the changes that AI has made to test-driven development. So why don't we start with just the base level test-driven development. For people who have only heard kind of buzzwords around it and aren't as familiar with it, how would you explain test-driven development in sort of plain English?
Clare Sudbery (00:47 )
Okay, so the idea of test-driven development is that you want to be certain that your code works. And I'm sure most people will be familiar with the idea of writing tests around your code to prove that it works. But that principle is considered so important in test-driven development that we write the test before we write the code. And that's why we say that the development is driven by the tests. So the very starting point for any coding exercise is a test. Another really important part of this is that that test is tiny. So what we're not doing here is behavior-driven development, which people might have heard of, where you start with quite a big test where you say, I'm going to write a test that says that my thing should do this, and the user should see a particular thing happen in a particular circumstance. In test-driven development, the test is testing not what the user sees, but just what the code does in the tiniest, most granular way possible. So if you have a piece of your software that does some mathematical operations and you expect certain numbers to pop out the end, then you might say, just in this tiny bit of this calculation, this number should be multiplied by four. So you're not even necessarily saying that given these inputs, you should get these outputs. I mean, you may have tests that say that, but you're just testing that something gets multiplied by four. And that's just an example. But what you're doing is you're thinking, what is the tiniest possible thing that I can test? And you write a test that tests that tiny thing. And you do that before you've written the code. So obviously, the test fails initially, because you haven't even written the code yet. And that's another important part of the process. You want to see it fail because you want to know that when you then make it pass, the reason it's passing is because of something you did. And that means that every tiny little bit of code you write is proven because it makes a test pass.
And when you get into the rhythm of it, it means you're constantly looking for green tests. And there are lots of other things I could talk about. Like for instance, you never want those tests to fail. So if at any point any of them start to fail, you know that that's because something you just did made them fail, which also means that you want to run them consistently every time you make any changes. So you're getting that fast feedback. You're finding out not only whether what you've just written works because it makes its test pass, but also that it's not making any other tests fail. So not only does it work within its own terms, but it hasn't broken anything else. And that's actually really common when you're coding: some new thing that you add breaks some existing thing. So you're constantly paying attention to those tests and you're making sure that they pass. And it drives the development in a very interesting way because you're always talking about what should work. You always think about what should work. You always think about how it should work. You're moving in tiny, tiny steps. So you're gradually, gradually, gradually increasing the functionality, and whether it works or not and how it works is being determined by the fact that you're making tests pass. And the really interesting thing is that it actually helps you to design software as well as to make sure that software works. So hopefully that explained it.
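Clare's red-green loop can be sketched in a few lines of Python. The function and test names here are invented for illustration, borrowing her hypothetical multiply-by-four step; this is a minimal sketch of the rhythm, not a prescribed workflow:

```python
# Minimal red-green TDD sketch.
# Step 1 (red): write the tiny test first and watch it fail,
# because quadruple() doesn't exist yet.
# Step 2 (green): write just enough code to make it pass.

def quadruple(n):
    # The smallest implementation that satisfies the test below.
    return n * 4

def test_quadruple():
    # Written before the implementation; seeing it fail first proves
    # that a later pass is caused by code we actually wrote.
    assert quadruple(3) == 12
    assert quadruple(0) == 0

if __name__ == "__main__":
    # Re-run the whole suite on every change, so a new addition that
    # breaks an existing test is caught immediately.
    test_quadruple()
    print("all tests green")
```

With a runner like pytest, a file of `test_*` functions like this is discovered automatically, so the whole suite re-runs on every change, giving the fast feedback Clare describes.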
Brian Milner (04:10 )
That's an awesome explanation. I really appreciate that. That was a great kind of practical, plain English explanation of it. I love it. So for the people who weren't familiar, now you have kind of a good idea of what we mean by test-driven development. I know with the advent of AI, there's been lots of changes that have taken place, lots of changes in the way that developers create their code. We now have these sorts of co-pilots, assistants that help in doing our coding. But on the other hand, one of the things you hear quite often is that there's lots and lots of quality issues, that it takes a lot of effort to try to maintain that quality and make sure that it's still at a high level. So how does AI enter the picture of test-driven development? How is that helping? How is it changing the way that we do test-driven development?
Clare Sudbery (04:59 )
It's a very good question and there are lots of different strands to how I can answer it. And I think it's probably important that I start by saying, I came to this from a position of deep skepticism. So I have been sitting on the sidelines for a long time, watching the AI explosion happen and not actually getting very involved. But what I did find was that it was becoming like a tennis match. I was just going, okay, they say that, and they say that. And it actually became very interesting to me just how polarizing it could be. You know, that there were people within my networks, people who I had a lot of respect for, who were very anti, and others who were very pro. People who've been experimenting with it and having a lot of fun with it. But one of the big issues that I didn't even have to be told, I could guess would occur, and it has occurred, is exactly what you said: the code that is generated by GenAI coding tools is often not reliable. And it's not reliable for the same reason that when you ask ChatGPT a question, the answer you get is often not reliable. And that's because these things are not deterministic. It's the way that they're constructed. I mean, people might remember a long time ago, people used to talk about fuzzy logic. It's all a bit