The Evil Tester Show - Software Testing and Development with Attitude

Author: Alan Richardson - Software Testing and Development Consultant

Description

Software Testing expertise for everyone. This software testing podcast helps developers, testers, managers, Product and QA professionals understand and improve their software testing and development approach.



Software Testing is a skill that can be treated as a specialism or developed as part of a broader Software Development role. This podcast helps everyone improve their skills in Test Management, Risk Management, Unit Testing, Test Techniques, Architecture and Development.



The show covers topics like: Software Testing, Exploratory Testing, Test Automation, Test Management, Software Development and Programming.



Hosted by Alan Richardson, an experienced Software Developer and Consultant, the show covers Software Testing and Development from a practical, experience-based viewpoint.



Occasional special guests bring their expertise and experience to help listeners improve their Software Testing and Development processes.

29 Episodes
Based on my experience with AI, am I an optimist or a pessimist? I gain huge value from AI during development, but have I managed the same in Testing? How will the Tester role change, and what do we need to do to adapt? I look forward to learning more, and I describe my next steps.
00:00 - Introduction: Am I an AI Optimist or Pessimist?
02:16 - AI, Jobs, and Management Excuses
06:33 - How AI Changes Our Roles
10:49 - Human Connection vs. AI: Where I Refuse to Use It
16:27 - Building and Using AI Tools for Programming
22:46 - Automation, Testing, and the Human Factor
28:21 - The Future: Agentic AI, Fundamentals, and Looking Forward
The answers given during a BrowserStack Community AMA session held on Discord on the 11th of December 2025, following a live LinkedIn video stream. The session focused on "Mastering Automatability for Test Automation". The main theme is the concept of Automatability, which I view as the ability to automate; this personal skill is more critical than reliance on specific tools. The discussion covers various topics, including how to separate automation problems from application design issues, dealing with slow UIs and non-automation-friendly third-party widgets, evaluating automation readiness, and addressing common architectural failings related to large-scale UI automation.
00:00:00 Introduction
00:01:27 Key early lesson about automatability?
00:01:56 Separating automation issues vs. design issues?
00:03:49 Is slow UI a testability or automatability problem?
00:06:50 Handling non-automatable third-party widgets?
00:09:20 Assessing automation readiness - any framework?
00:11:23 Common architectural patterns that break at scale?
00:13:37 Prioritizing testability vs. automation in sprints?
00:16:51 Do modern tools reduce the need for good design?
00:19:32 Explaining automatability as an investment?
00:21:44 How do AI agents handle dynamic/third-party elements?
00:23:17 Early signs a feature will be flaky when automated?
00:26:10 Which microservice layers to automate first?
00:29:16 High-ROI automatability fixes for small budgets?
00:30:55 Early dev-test collaboration to prevent rework?
00:34:08 Thinking about automatability in continuous delivery?
Join the BrowserStack Discord community and discover more AMA sessions: https://www.browserstack.com/community
Should you use AI to help you migrate test automation code? And what should you actually migrate, given that the test coverage hasn't changed? In this episode we discuss how abstractions and AI can be used to migrate... and discuss when you shouldn't.

Welcome to The Evil Tester Show! In this episode, host Alan Richardson dives into the complex world of test automation migrations. Have you ever wondered what it really takes to move your automated test execution code from one tool or language to another, like switching from WebDriver to Playwright, or migrating from Java to TypeScript? Alan breaks down the pitfalls, challenges, and best practices you need to consider before taking the leap. He explains why migrating isn't just about copying test cases, how abstraction layers can save you time and headaches, and why using AI and solid design principles can streamline your transition. Whether you're facing unsupported tools, evolving frameworks, or strategic changes in your testing approach, this episode offers practical advice to plan and execute a seamless migration, without burying new problems beneath old ones.

00:00 Migration Challenges
02:43 Tool Evaluation
04:05 Migrating to Playwright: Considerations
06:00 Migration Process
06:25 Migrate: Easy First, Hardest Next
09:37 Effective Migration Strategies for Tests
10:23 Focusing Abstractions
14:39 Optimize Test Code Migration
15:44 Focus on Abstraction, Not Auto-Healing

**1. Why Migrate, And When You Really Shouldn't**
Before any big move, Alan urges teams to get their "why" straight. Is your current tool unsupported? Is your framework truly incompatible, or are you missing some hidden potential? Migrate for the right reasons and make sure your decision isn't just papering over problems that could follow you to the next tool.

**2. Don't Confuse Migration with a Rewrite**
Too many teams treat migration like a rewrite, often with disastrous results. Alan emphasizes the importance of planning ahead, solving existing flakiness and coverage issues _before_ you move, and carefully evaluating all options (not just the shiny new tool you think you want).

**3. The Secret Weapon: Abstraction Layers**
The podcast's biggest takeaway: don't migrate "test cases", migrate _abstractions_. If your tests are full of direct calls like `webdriver.openPage()`, you've got work to do. Build out robust abstraction layers (think page objects or logical user flows) and keep your tests clean. When it comes time to migrate, you'll only need to move those underlying layers, not thousands of individual test case scripts. A sketch of this idea follows this entry.

**4. Taming Flakiness and the Risks of Retries**
Migration is not the time to rely on self-healing tests or retries. Any test flakiness _must_ be rooted out and fixed before porting code. Bringing instability into a new stack only multiplies headaches later.

**5. Harnessing AI, But Stay in Control**
AI-assisted migration really shines at mapping old code to new, but Alan warns against "agentic" (hands-off) approaches. Use AI as a powerful tool, not as the driver; you need understanding and control to ensure things work reliably in CI/CD pipelines.

**6. Learn Fast: Tackle the Hardest Stuff Early**
Pro tip: once you're ready, start your migration with the simplest test, just to get going, then dive into the hardest, flakiest, most complex workflows. You'll uncover potential blockers early and kick-start team learning.

"We're not migrating test cases when we change a tool. We're migrating the physical interaction layer with our application..."
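To make the abstraction-layer idea concrete, here is a minimal sketch, assuming a Java/Selenium WebDriver stack. The `LoginPage` class, the element locators, and the `loginAs` method are illustrative names, not code from the episode; the point is that tests depend only on the page object, so migrating tools means porting this one layer, not every test.

```java
// Illustrative page object: the only layer that touches the automation tool.
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

public class LoginPage {

    private final WebDriver driver;

    public LoginPage(final WebDriver driver) {
        this.driver = driver;
    }

    // Logical user flow: tests say "log in", not "find element, send keys".
    // The locators here are hypothetical; real ones depend on your application.
    public void loginAs(final String username, final String password) {
        driver.findElement(By.id("username")).sendKeys(username);
        driver.findElement(By.id("password")).sendKeys(password);
        driver.findElement(By.id("login-button")).click();
    }
}

// A test written against the abstraction survives a tool migration:
// only LoginPage needs porting to the new stack (e.g. Playwright),
// not every test that logs in.
class LoginFlowExample {
    static void userCanLogIn(final WebDriver driver) {
        new LoginPage(driver).loginAs("bob", "secret");
        // assertions against the resulting page would go here
    }
}
```

Written this way, a WebDriver-to-Playwright move touches `LoginPage` and its siblings; the tests, which express intent rather than mechanics, stay as they are.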
Should you have an online portfolio showcasing your Software Development and Testing skills to help get a job? It really depends on the recruitment process. But... if I'm recruiting and you have a profile, then I will have looked at it. So it had better be good. Most Software Developers and Testers don't have public portfolios, so that means you can really stand out. We'll cover a breakdown of project types: Learning Projects, Personal Projects, and Portfolio Projects. Lots of tips on how to adjust your GitHub profile and promote your projects.
00:00 Value of Portfolio
02:59 Stand Out Skills
09:19 Project Types
12:27 Showcase Projects
19:39 Promoting Yourself
21:44 Final Advice
Software Testing deserves respect. Doesn't it? But so does every role in Software Development: managers, testers, QA, programmers, Product. Everyone. This is for you. Ever feel like you're not getting the respect that you deserve in your job? This episode dives deep into the topic of Respect in tech, especially focusing on software testing versus programming. We look at why some roles seem to earn more respect, what that means for workplace culture, and how you can change things for yourself and your team. Respect isn't just about manners or titles; it's about how the system works and how we show up in our roles. If you've worked in agile projects, you might have heard, "Everyone is a developer." But some roles seem to get more recognition than others. Is this because of how we define our jobs, or is it just baked into the way our workplaces run? This episode is a call to action, urging everyone to look at respect at a personal, process, and craft level. We're breaking down the difference between self-respect, respect for others, and respect built into your team's process. You'll see why just doing your job isn't enough. You have to own your craft, communicate what you do, and make your contributions visible to earn genuine respect. By the end of this episode, you'll have practical steps to make respect part of your daily work, whether you're writing code, testing, building products, or managing.
00:00 Respect Dilemma
02:41 Human Level Respect
06:31 Self-Respect First
10:17 Respect Cycle
15:37 Knowledge Sharing
18:53 Respectful Organizations
21:26 Final Thoughts
Software Testers typically confuse a Test Strategy Document with the process of strategising. Alan Richardson simplifies the overcomplicated world of test strategy. Drawing on years of experience creating test strategies and plans, Alan explains the real difference between strategy, approach, and plan. He explains that what really matters isn't following templates or writing elaborate documents, but actually thinking through problems, understanding risks, and communicating those ideas clearly.
Are you trying to figure out how to break into the software testing job market or make your next big move? This episode of The Evil Tester Show dives deep into the realities of tech recruitment, job search strategies, and career planning for testers, with recruitment veteran Jack Cole from WEDOTech.uk. Whether you're an experienced Test Manager, expert Tester, junior QA, or even a programmer, Jack's decades of Software Testing and Development industry experience will give you strategies and tips about what works in today's competitive job-seeking world. In this packed hour-long conversation, we cover everything from market trends, LinkedIn networking, and the recruitment pipeline, to building a career roadmap and even the AI hype machine. Grab your notebook, settle in, and get ready for real insights you can use, plus a few stories from the trenches and actionable tips for every step of your job hunt.
Software Testing is a skill and, like all skills, it requires practice; that's what makes you a practitioner of Software Testing. In this episode we're diving into the world of practice with James Lyndsay. In this conversation, your host Alan Richardson chats with James about the essence of practice in software testing, exploring how exercises and real-world scenarios can enrich our skills. James shares insights on his weekly online practice sessions and the interactive Test Lab concept, offering a dynamic playground for testers. Discover how practice blends with rehearsal and learning, and delve into the intriguing intersection of testing and development. With firsthand experiences in software experiments, fencing, and scientific investigation, James and Alan discuss the art of modeling and exploring software systems. Whether you're refining your testing techniques or embracing new perspectives with AI, this episode offers a wealth of wisdom for testers at all levels. Join us as we learn, laugh, and explore the world of testing practice. We hope you find inspiration for your own practice sessions. Don't forget to check out James's resources at https://workroom-productions.com for more testing challenges and exercises.
Effective Software Testing is highly contextual: we adapt what we do to the project and the process. In this episode of The Evil Tester Show, host Alan Richardson describes context-driven testing. Is there really such a thing as context-driven testing, or is it just a phrase we use to describe our testing approach? Alan explores the intricacies of context in testing, discussing its evolving nature, the impact of context on testing practices, and the challenges in defining it. From the origins of the term by James Bach, Brian Marick, Bret Pettichord, and Cem Kaner, to Alan's personal insights on systems within systems and how context impacts our testing methodologies, this episode provides a comprehensive look at how context affects software testing. Alan also critiques the principles of context-driven testing and emphasizes the importance of adapting to projects without being swayed by ideologies. We explore how to navigate context in testing environments, adapt our approaches, and effectively challenge and evolve systems. Discover the importance of context-driven testing in software development, exploring models, adaptability, and useful practices.
Software Testing and Development professionals often mention the Test Automation Pyramid when describing Test Automation. Let's do a deep dive and explore what that means. This episode covers the Test Automation Pyramid, created by Mike Cohn in 2008-2009 in the book "Succeeding with Agile". We will go beyond the diagram and look at the model that supports it, then deep dive into the model to explore its meaning in relation to Automated Execution Coverage, not Testing.
- The model was created by Mike Cohn in 2008-2009 in the book "Succeeding with Agile".
- The original model focused on UI, service level, and unit level automation.
- Over the years, different interpretations and variations of the model have emerged.
- The term "service level" in the model has led to ambiguity and different interpretations.
- The diagram in the model is a simplified representation of a deeper underlying model.
- The focus should be on achieving coverage at the most appropriate level in the system (see the sketch after this list).
- The model addresses the importance of avoiding duplication and redundancy in automated coverage.
- The process and team structure can impact the effectiveness of the model.
- The model can be reframed as an automated execution coverage pyramid.
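As a hedged illustration of "coverage at the most appropriate level" (my example, not one from the episode): a rule such as "usernames must be 3 to 20 characters" can be covered by a fast unit-level check, leaving UI-level automation to cover only the wiring, so the rule isn't duplicated in slower runs. The validator and test names below are hypothetical, using JUnit 5.

```java
// Unit-level coverage of a hypothetical validation rule, using JUnit 5.
// Covering the rule here avoids duplicating it at the UI level.
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

class UsernameValidatorTest {

    // Stand-in for the production rule under test.
    static boolean isValidUsername(final String name) {
        return name != null && name.length() >= 3 && name.length() <= 20;
    }

    @Test
    void acceptsNamesInsideTheBounds() {
        assertTrue(isValidUsername("bob"));
        assertTrue(isValidUsername("a".repeat(20)));
    }

    @Test
    void rejectsNamesOutsideTheBounds() {
        assertFalse(isValidUsername("ab"));
        assertFalse(isValidUsername("x".repeat(21)));
        assertFalse(isValidUsername(null));
    }
}
```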
Software Testing is often taught with a focus on Test Cases and Test Scripts. But this is 2025; we don't test like it's 1984 any more. Let's revisit how we model testing and create a more effective test process. The most common Testing Entities that I've encountered over the years are: Test Condition, Test Scenario, Test Case, Test Script. In my Testing I pretty much now have: Test Idea, Test Log (Execution Log). Is there any value in the original Testing Entities? There might be if we view them as logical concepts, but not really as physical artifacts.
Software Testing often looks at the exactitude of language. We look for ambiguity in Software Requirements, we find gaps in Specifications, so it's no wonder we explore the use of terms and definitions in our own Software Testing craft. Top 3 phrases that should carry trigger warnings for the Test Community:
- Manual Testing
- Test Automation
- Quality Assurance
In this episode I'm going to talk about Manual Testing. So that's your Trigger Warning. And I'm going to talk about what we might want to say instead of "Manual Testing".
Manual QA is Dead

2022-08-25 (22:38)

Software Testing is going through a time of change. This episode explores how we can adapt. Manual QA is dead. Companies are getting rid of their QA teams. Quality Control performed manually is going out of style. What can we do instead? Become coaches, assistants, or advisors? We could become developers? Or we could be better testers. QA and Quality Control are not Testing. This might be how people learn.
Test Automation Biases

2022-02-11 (24:12)

Software Testing is highly contextual and requires flexibility in our approach and thinking. But we are only human, and sometimes we have beliefs and biases that limit our approach. In episode 16 of The Evil Tester Show we are looking at Test Automation Biases: what they are, how to avoid them, and how to evaluate your own biases.
00:00 What is an Automation Bias?
00:29 Treat all opinions as biased
01:50 Automating a GUI is Slow and Flaky
03:36 Automate through the API
05:23 Code Free Automating is Bad
06:17 Tool X is Better than Y
07:04 Postman is Better than Insomnia
08:54 Python is Better than Java
10:24 Seeing Through Biases
12:52 Try it, then decide
13:17 Page Objects vs Screenplay
14:52 Take Responsibility
16:11 External Experience
17:02 Start Small, Make Progress
17:50 Do not ignore issues
20:34 Be Real rather than Believe
21:18 Keep Options Open
22:48 Be Aware of your biases
23:19 It takes time
Software Testing is often a proprietary process where we never share how we are doing it. Governments are an exception, because they make public a lot of documents explaining what they do and how they do it. In this episode we have a discussion of a UK government guide to exploratory testing: https://www.gov.uk/service-manual/technology/exploratory-testing
The aim is not to criticise the document; the aim is to use it as a base from which to see how far our thoughts diverge from or overlap with the document. I want to know what I can learn. I read through the document and try to understand the intent and meaning behind statements and offer my own thoughts.
Show notes are here: https://www.eviltester.com/show/015-exploratory-testing-gov/
This podcast was originally released to Patreon supporters in January of 2019: https://patreon.com/posts/what-is-testing-23907385
Recruiting Testers

2020-08-17 (11:43)

Software Testing is a hard activity to recruit for. What we want to do is minimise the friction and barriers. In this podcast I'll share some tips on what has worked for me in the past.
- Remove as much ambiguity from your job spec as possible; someone like me will read 'the worst' into your ambiguity.
- Send clear messages, not mixed messages, in the job spec.
- Help the applicant filter out roles that are not suitable for them.
- When writing a job spec, review existing job specs in the world, critique them, and then build on what is best from those.
- Start with a phone interview, never start face to face.
- Some people bluff, some people offer general answers; your job as an interviewer is to help them answer specifically, and to ask for more information when you want it.
- Treat interviews as auditions, held by someone who knows how to do the activity being auditioned.
- Audition hands-on with the actual code, or the actual system.
- Use your best people for the interviews, not just anyone who is free.
Show Notes: https://eviltester.com/show
We have more posts in our recruitment category: https://www.eviltester.com/categories/recruitment/
And our Career category: https://www.eviltester.com/categories/career-advice/
This video was originally released as a Patreon exclusive back in January 2019: https://www.patreon.com/posts/recruitment-tips-23896867
What can someone in Software Testing do to improve their chances of getting a job? That's what we consider in Episode 013 of The Evil Tester Show, "How To Get a Job in Software Testing".
- Getting a Job is different from Doing the Job.
- Persistence is key to getting a job.
- Build an online portfolio on LinkedIn.
- Apply for jobs slightly out of reach.
- Amend your CV for each job that you really want.
- You do not need to code to Test Software.
- You might need coding skills on your CV to pass the interview filter.
- Demonstrate that you can test software, even if you are looking for your first job in software testing.
Show notes are available from https://eviltester.com/show
This is an audio/video episode and the video version is available on YouTube: https://youtu.be/QB8n5RXTV10
Direct show notes link: https://www.eviltester.com/show/013-how-to-get-a-job-in-testing/
Ever wanted to talk at a Software Testing conference? Or even just be effective when you present over Zoom? This episode is for you. More and more conferences are going online, and speakers will have to adapt. In this episode I outline some recommendations based on years of online video, webinar, and course creation. If you only want the summary, then...
- Presenting online over video is different.
- Practice will help.
- Practice with the kit you are going to use, and perform complete practice run-throughs.
- Use an external microphone.
- Record your practice sessions and listen back to them to make sure your audio is good enough.
For show notes, visit https://eviltester.com/show
Software Testing often doesn't get the respect it deserves, so people wonder if a career in Software Testing is worth it. I receive a lot of emails asking which way people should focus their career: Testing? Programming? Automating? And that's what I cover in this podcast. Points:
- We should not have to think like this.
- But we need to get a job.
- Getting a job is different from "What should I be...".
A full transcript of the show has been uploaded for Patreon supporters: https://www.patreon.com/posts/35209341
You can support this podcast, and receive almost daily blog posts, by signing up at https://Patreon.com/EvilTester for as little as $1 a month. Show notes are available at https://eviltester.com/show
Software Testing is not the same as Test Automation, but so many people seem to think so. I receive a lot of concerned emails and messages from testers: people concerned that their testing skills will not be enough to keep them in work, and that the future revolves around automation, which they don't know how to do. No programming experience. And should they learn API or UI? What programming language should they learn? People do feel that their career is at risk. That's what I cover in this podcast, to hopefully give you some tips if you do want to learn to program or code and start working on test automation, and some tips on what to do if you don't, and think that software testers should not have to learn to code. There are many paths to success in testing. Show notes available at https://eviltester.com/show