Software Testing Unleashed - QA, DevEx & Quality Engineering


Author: Richard Seidl | Software Development & Testing Expert


Description

Software testing is no longer just a phase—it’s the foundation of modern engineering and your ultimate competitive advantage.

Welcome to Software Testing Unleashed, the weekly podcast for anyone dedicated to building better software, faster. Hosted by Richard Seidl, renowned expert in software development and testing, this show is your backstage pass to the tools, tactics, and trends defining the next era of Quality Engineering.

Whether you are a QA Engineer, SDET, Developer, or Tech Leader, each week we bring you field-tested insights from the brightest minds in the software universe to answer the industry’s toughest questions:

- Smart Automation: When should you automate, and when is it a trap?
- AI & ML in Testing: How do you maintain quality in a world of non-deterministic code?
- The "How Much" Dilemma: How much testing is actually enough for your specific scale?
- Architecture & DevEx: What makes a great integration test and how do you improve developer experience?

From scaling QA strategies in enterprise projects to building your very first test suite, we bridge the gap between complex theory and practical execution. We dive deep into CI/CD, Cloud-native complexity, and the future of manual vs. automated testing.

🚀 Ready to unleash the next level of quality? Hit play, subscribe, and join a global movement of software professionals shipping with confidence.
37 Episodes
In this episode, I talk with Laveena Ramchandani about thought leadership in testing and the changing role of testers. Laveena sees testers as engineers who lead by example, ask smart questions, and break silos. She coaches teams to share knowledge, speak up, and aim for team goals, not vanity KPIs. We touch on hard calls too, like stepping in or reshaping a team when delivery slips. On AI, we agree to use the tools, then add human sense and the feel of quality, like accessibility and emotion. Testing stays very human.
In this episode, I talk with Pekka Klärck about Robot Framework. We start with 2004, his thesis roots, and Nokia Networks turning a prototype into an open source project in 2008. He explains the core idea: a generic engine with reusable libraries, human readable tests, and one set of reports. Best fit in mixed tech stacks. We revisit milestones like the move to plain text, a new parser, and a thriving ecosystem. Pekka previews secret variables in 7.4, a modern user guide, markdown docs, and a cleaner namespace with backward compatibility. He even tests Robot Framework with Robot Framework.
In this episode, I talk with Martijn Goossens about DevEx, DORA, and how we put the Q into developer experience. We walk through the four DORA metrics and where testers make real impact with CI, smart coverage, and fast feedback. Martijn shares a simple fix that unlocked speed: give each team a test environment. We explore coaching with small experiments, clear metrics, and regular check ins. Start with the State of DevOps report. Map your QA work to these metrics. Speak value, stay visible, and grow with your team and community.
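The four DORA metrics Martijn maps QA work onto can be computed from plain deployment records. A minimal TypeScript sketch, where the record shape and field names are illustrative assumptions, not something specified in the episode:

```typescript
// Illustrative deployment record; field names are assumptions.
interface Deploy {
  committedAt: number;  // ms epoch of the change's first commit
  deployedAt: number;   // ms epoch of the production deploy
  failed: boolean;      // did this deploy cause a production incident?
  restoredAt?: number;  // ms epoch when service was restored, if it failed
}

const HOUR = 60 * 60 * 1000;

// Deployment frequency: deploys per day over the observed window.
function deployFrequency(deploys: Deploy[], windowDays: number): number {
  return deploys.length / windowDays;
}

// Lead time for changes: mean commit-to-deploy time in hours.
function leadTimeHours(deploys: Deploy[]): number {
  const total = deploys.reduce((s, d) => s + (d.deployedAt - d.committedAt), 0);
  return total / deploys.length / HOUR;
}

// Change failure rate: share of deploys that caused a failure.
function changeFailureRate(deploys: Deploy[]): number {
  return deploys.filter(d => d.failed).length / deploys.length;
}

// Mean time to restore, in hours, over the failed deploys.
function mttrHours(deploys: Deploy[]): number {
  const failed = deploys.filter(d => d.failed && d.restoredAt !== undefined);
  const total = failed.reduce((s, d) => s + (d.restoredAt! - d.deployedAt), 0);
  return total / failed.length / HOUR;
}
```

Faster CI and smarter test coverage show up directly in lead time and change failure rate, which is one way testers can "speak value" in these numbers.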
In this episode, I talk with Florian Fieber about what 2025 taught testers and how to get ready for 2026. AI boosts productivity; it does not replace us. The sweet spot is generating artifacts like test ideas, cases, scripts, and data. Accessibility work grew due to the European Accessibility Act, yet many underestimate the effort. A plugin is not enough: you need manual checks and early design. For 2026 we expect agentic AI and a pilot role for testers. AI literacy becomes company wide.
In this episode, I talk with Yuliia Pieskova about informal networks in software teams. We explore how spontaneous ties lift trust, speed, and quality in remote and hybrid setups. Formal charts set limits; people move work through friends. Yuliia shares stories from startups, hackathons, and product discovery where cross team groups watch users, swap ideas, then return with shared context. Remote work exposes old cracks yet levels locations and opens doors for new links.
In this episode, I talk with Tibor Csöndes about how testing grew up and where it goes next. We recorded live at HUSTEF in Budapest, a conference he helped shape. Tibor shares telco roots where automation was normal. Tools change, thinking stays. He sees AI as a third wave after CATG and model based testing. Helpful, not a job thief. Use it, or the testers who do will take your seat. ISTQB gave us a common language across industries. Learn the basics, automation, AI, and the human stuff like clear messages and critical thinking.
In this episode, I talk with Michał Buczko about leading remote teams, trust, and AI. We spoke about clear calendars for open help sessions, regular updates to management by email, and the art of celebrating wins without bragging. We also spoke about sharing failures. That builds trust and can unlock help. Treat AI like a tool on your belt. Use it to amplify testers and developers, not to replace them. Stay critical and ask tech people first.
In this episode, I talk with Péter Földházi about test automation that solves real problems, not shiny tools. Péter brings two decades in quality and helped write the ISTQB automation syllabi. We ask why to automate, where it fits, and how the test pyramid guides choices across unit, API, and UI. I like how the simple pyramid makes choices visible. He shares a gaming case with 5,000 defects and a velocity drop. Strategy first, then tools, six month steps, and clear value.
In this episode, I talk with Leandro Melendez about how performance testing changed in the last 20 years. Live at HUSTEF, we swap stories from bare metal and heavy browser scripts to APIs, cloud, and Kubernetes. Leandro draws a clear line between performance and load testing. Do not run Black Friday tests every sprint. Watch production, use canaries, and learn from real users. He pushes observability first. Build dashboards, instrument early, and think about cost.
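Observability-first reporting usually means percentiles rather than averages, because the tail is what real users feel. A toy TypeScript sketch of the p95 computation a load-test dashboard might use; the nearest-rank method and the sample data are illustrative choices, not something prescribed in the episode:

```typescript
// Compute the p-th percentile (0..100) of latency samples in ms,
// using the nearest-rank convention on the sorted data.
function percentile(samples: number[], p: number): number {
  if (samples.length === 0) throw new Error("no samples");
  const sorted = [...samples].sort((a, b) => a - b);
  // Nearest rank: smallest value with at least p% of samples at or below it.
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

// Averages hide tail pain: one slow request barely moves the mean
// but dominates p95/p99.
const latencies = [12, 14, 15, 15, 16, 18, 20, 22, 25, 480]; // ms
console.log(`p50=${percentile(latencies, 50)}ms p95=${percentile(latencies, 95)}ms`);
```

On this sample the mean is about 64 ms, which looks fine, while p95 lands on the 480 ms outlier that a real user actually hit.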
In this episode, I talk with Attila Fekete about HUSTEF 2025 in Budapest. He runs the program and the backstage work. We look at how a small local meetup from 2011 turned into 700 people from many countries. Care for people, high quality talks, and a fun vibe. We discuss new formats like longer talks, a master class track, and a career clinic with coaching and CV tips; first-time speakers get mentoring too.
In this episode, I talk with Cassandra H. Leung about why testers still feel unseen and what we can do about it. We unpack impostor syndrome, the shy voice that says keep quiet, and how it holds many of us back. Cassandra shares a simple frame: show, share, shine. Put testing work on the board, share notes and dashboards, and keep a brag board for wins. We explore the wider role of testers across product talks, pipelines, and coaching the team.
In this episode, I talk with Maryse Meinen about stoic thinking for product development and life. We ask what happens if you stop judging success by outcomes and start judging by decision quality. Maryse shares tools you can use today: scenario planning, the 10 10 10 rule, and a simple decision journal. Prepare for failure, accept what you cannot control, and act with courage, justice, and temperance. This fits agile work and the mess we face in tech and society.
In this episode, I talk with Barış Sarıalioğlu about testing as art and science, through the lens of Leonardo da Vinci. We ask what a tester can learn from curiosity, observation, and experiments. Mona Lisa's smile shows how uncertainty beats 100 pages of metrics. We should aim for understanding, not bug counts. We talk about storytelling, simple reports that people can read, and mixing engineering with empathy. Testers work across disciplines, explore, and make sense of messy projects. Perfection is a trap. Good enough can be great. Balance logic and imagination, and you get impact that reaches beyond tools.
In this episode, I talk with Clara Ramos González about how self-care can raise quality and agility. We look at why communication failure still breaks projects and how breath can fix more than tools. Clara blends QA leadership with yoga and brings simple rituals to teams. Three deep breaths to open meetings. One word to set intention. Weekly coffee talks without work. A feedback rule to sleep on it. The message is clear. Bring your whole self. Lead by example. Small steps cut stress and help us build better software and healthier teams.
In this episode, I talk with Mesut Durukal about picking the right end to end test automation framework. Mesut shares why tool choice must serve real needs, not trends. It is a mindset shift from hype to needs. In his case users were on Safari, but the team's tool did not run there. He mapped needs, compared Cypress, Playwright, Selenium, TestCafe, and Nightwatch, and chose Playwright for speed and broad browser support. We talk about reporting, debugging, and docs. We touch on architecture, like keeping login and helpers outside specs, so migration stays clean. For me, this is tech with agility. Know your goals, grow your system, and review choices often.
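Mesut's needs-first mapping can be made concrete as a tiny decision table. The support matrix below is an illustrative assumption (framework capabilities change between versions, so always check current docs), not data from the episode:

```typescript
// Hypothetical capability matrix: which browser engines each framework
// can drive. Values are illustrative; verify against current framework docs.
const browserSupport: Record<string, string[]> = {
  Playwright: ["chromium", "firefox", "webkit"],
  Cypress:    ["chromium", "firefox"],
  Selenium:   ["chromium", "firefox", "webkit"],
  TestCafe:   ["chromium", "firefox", "webkit"],
  Nightwatch: ["chromium", "firefox"],
};

// Keep only the frameworks that cover every engine your users are on.
function candidates(required: string[]): string[] {
  return Object.entries(browserSupport)
    .filter(([, engines]) => required.every(b => engines.includes(b)))
    .map(([name]) => name);
}

// Safari users make the WebKit engine a hard requirement,
// which eliminates some options before speed is even discussed.
console.log(candidates(["chromium", "webkit"]));
```

Once the hard requirements prune the list, softer criteria like execution speed, reporting, and docs decide between the survivors, which matches how Playwright won here.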
In this episode, I talk with Chris Armstrong about context in testing. We talked about why "it depends" is an honest answer in complex work. Chris shows how decisive humility helps. Say what you do not know. Find the people and data to learn fast. We talk about fear, optimism, and why winners collect more failures. I ask how testers grow influence. We land on trust, social skills, and asking better questions. Challenge tools and processes with respect. Start small with clear hypotheses and visible outcomes. Remove unnecessary friction. AI comes up as a fresh field for testing. Join early, shape it. Stay curious. Context moves, and so should we.
In this episode, I talk with Gáspár Nagy about behavior driven development. We look at why a simple example can beat a specification. You do not learn soccer from a rulebook. You learn by playing and watching plays. BDD uses the same trick to build understanding early. We discuss example mapping, writing readable scenarios, and turning them into executable specs with Cucumber, SpecFlow, and Reqnroll. Done well, this guides vertical slices, shows progress, and stops the mini waterfall at the end of a sprint.
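The jump from a readable scenario to an executable spec can be shown without Cucumber, SpecFlow, or Reqnroll. A hand-rolled Given/When/Then in TypeScript; the banking scenario and step names are invented for illustration, not taken from the episode:

```typescript
// A minimal, hand-rolled executable specification: each step is a named
// function over shared state, so the test reads like the example it
// came from in example mapping.
type World = { balance: number; error?: string };

const steps = {
  givenBalance: (w: World, amount: number) => { w.balance = amount; },
  whenWithdraw: (w: World, amount: number) => {
    if (amount > w.balance) { w.error = "insufficient funds"; return; }
    w.balance -= amount;
  },
  thenBalanceIs: (w: World, expected: number) => {
    if (w.balance !== expected) {
      throw new Error(`expected balance ${expected}, got ${w.balance}`);
    }
  },
};

// Scenario: withdrawing within the balance succeeds.
const w: World = { balance: 0 };
steps.givenBalance(w, 100);
steps.whenWithdraw(w, 30);
steps.thenBalanceIs(w, 70);
console.log("scenario passed");
```

Real BDD tools add the Gherkin parsing and step binding on top, but the core idea is the same: the shared example drives both the conversation and the automated check.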
In this episode, I talk with Daniel Knott about the real pains in testing and what comes next. Why do managers cut quality when money gets tight. We look at AI and low code that spit out apps fast, often without clear architecture. We warn about skipping performance and security. We also reflect on how testers can sell value in business terms. Speak revenue, KPIs, and user happiness, not code coverage. Daniel says domain knowledge may beat deep coding as AI writes more code. We explore prompt reviews as a new shift left habit.
In this episode, I talk with Kat Obring about the tester as an influencer. We explore how to stop saying everything is broken and start speaking the language of stakeholders. Bring evidence, not opinions. Say "the Safari sign up button fails and 20 percent of users are blocked". We share a 15 second check before stand up, and pairing early so testing is part of development, not a mini waterfall at the end. Pick small battles and run one or two week experiments. If it works, keep it. If not, drop it. Influence without authority grows from trust and habits.
In this episode, I talk with Maciej Wyrodek about moving from Cypress to Playwright. We talked about why Cypress started to work against the team: opinionated style, plugin churn, iframes, flaky screenshots, and a pricing wall around parallel runs. Maciej's answer was a hands-on hackathon with devs and testers. Playwright won. The migration starts with their top 10 flows and production smoke checks.