DiscoverM365 Show with Mirko Peters - Microsoft 365 Digital Workplace Daily
Unit vs. Integration vs. Front-End: The Testing Face-Off

Update: 2025-09-14

Description

Ever fix a single line of code, deploy it, and suddenly two other features break that had nothing to do with your change? It happens more often than teams admit. Quick question before we get started—drop a comment below and tell me which layer of testing you actually trust the most. I’m curious to see where you stand. By the end of this podcast, you’ll see a live example of a small Azure code change that breaks production, and how three test layers—Unit, Integration, and Front-End—each could have stopped it. Let’s start with how that so-called safe change quietly unravels.

The Domino Effect of a 'Safe' Code Change

Picture making a tiny adjustment in your Azure Function—a single null check—and pushing it live. Hours later, three separate customer-facing features fail. On its face, everything seemed safe. Your pipeline tests all passed, the build went green, and the deployment sailed through without a hitch. Then the complaints start rolling in: broken orders, delayed notifications, missing pages. That’s the domino effect of a “safe” code change. Later in the video we’ll show the actual code diff that triggered this, along with how CI happily let it through while production users paid the price.

Even a small conditional update can send ripples throughout your system. Back-end functions don’t exist in isolation. They hand off work to APIs, queue messages, and rely on services you don’t fully control. A small logic shift in one method may unknowingly break the assumptions another component depends on. In Azure especially, where applications are built from smaller services designed to scale on their own, flexibility comes with interdependence. One minor change in your code can cascade more widely than you expect.

The confusion deepens when you’ve done your due diligence with unit tests. Locally, every test passes. Reports come back clean. From a developer’s chair, the update looks airtight. But production tells a different story. Users engage with the entire system, not just the isolated logic each test covered. That’s where mismatched expectations creep in. Unit tests can verify that one method returns the right value, but they don’t account for message handling, timing issues, or external integrations in a distributed environment.

Let’s go back to that e-commerce example. You refactor an order processing function to streamline duplicate logic and add that null check. In local unit tests, everything checks out: totals calculate correctly, and return values line up. It all looks good. But in production, when the function tries to serialize the processed order for the queue, a subtle error forces it to exit early. No clear exception, no immediate log entry, nothing obvious in real time. The message never reaches the payment or notification service. From the customer’s perspective, the cart clears, but no confirmation arrives. Support lines light up, and suddenly your neat refactor has shut down one of the most critical workflows.

That’s not a one-off scenario. Any chained dependency—authentication, payments, reporting—faces the same risk. In highly modular Azure solutions, each service depends on others behaving exactly as expected. On their own, each module looks fine. Together, they form a structure where weakness in one part destabilizes the rest. A single faulty brick, even if solid by itself, can put pressure on the entire tower.

This is exactly the point where a short code demo or screenshot helps: walk through the diff that looked harmless, then reveal how the system reacts when it hits live traffic. That shift from theory to tangible proof helps connect the dots.

Now, monitoring might eventually highlight a problem like this—but those signals don’t always come fast or clear. Subtle logic regressions often reveal themselves only under real user load. Teams I’ve worked with have seen this firsthand: the system appears stable until customer behavior triggers edge cases you didn’t consider. When that happens, users become your detectors, and by then you’re already firefighting. Relying on that reactive loop erodes confidence and slows delivery.
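To make that failure shape concrete, here is a minimal C# sketch of one plausible version of such a refactor. The names (Order, OrderMetadata, OrderProcessor, IQueueSender) are hypothetical stand-ins, not the actual diff from the demo. The point is the silent early exit: the new guard clause returns before the order is ever serialized and queued, so downstream services never hear about it, while unit tests on the calculation logic still pass.

// Hypothetical sketch of a "safe" refactor that silently drops orders; names are illustrative.
using System;
using System.Text.Json;
using System.Threading.Tasks;

public record Order(string Id, decimal Total, OrderMetadata? Metadata);
public record OrderMetadata(string Channel, DateTime PlacedAt);

public interface IQueueSender { Task SendAsync(string message); }

public class OrderProcessor
{
    private readonly IQueueSender _queue;
    public OrderProcessor(IQueueSender queue) => _queue = queue;

    public async Task ProcessAsync(Order? order)
    {
        // The "harmless" change: guard against nulls instead of letting them throw.
        if (order is null || order.Metadata is null)
            return; // silent early exit: no exception, no log entry, no queue message

        // Orders that used to flow through with empty metadata never reach this line,
        // so the payment and notification services are never told about them.
        var payload = JsonSerializer.Serialize(order);
        await _queue.SendAsync(payload);
    }
}

A unit test that only feeds well-formed orders into ProcessAsync, or that only checks the total calculation, would stay green through this change, which is exactly the gap the rest of this episode is about.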
This is where different layers of testing show their value. They exist to expose risks before users stumble across them. The same defect could be surfaced three different ways—by verifying logic in isolation, checking how components talk to each other, or simulating a customer’s path through the app. Knowing which layer can stop a given bug early is critical to breaking the cycle of late-night patching and frustrated users. Which brings us to our starting point in that chain. If there’s one safeguard designed to catch problems right where they’re introduced, it’s unit tests. They confirm that your smallest logic decisions behave as written, long before the code ever leaves your editor. But here’s the catch: that level of focus is both their strength and their limit.

Unit Tests: The First Line of Defense

Unit tests are that first safety net developers rely on. They catch many small mistakes right at the code level—before anything ever leaves your editor or local build. In the Azure world, where applications are often stitched together with Functions, microservices, and APIs, these tests are the earliest chance to validate logic quickly and cheaply. They target very specific pieces of code and run in seconds, giving you almost immediate feedback on whether a line of logic behaves as intended.

The job of a unit test is straightforward: isolate a block of logic and confirm it behaves correctly under different conditions. With an Azure Function, that might mean checking that a calculation returns the right value given different inputs, or ensuring an error path responds properly when a bad payload comes in. They don’t reach into Cosmos DB, Service Bus, or the front end. They stay inside the bounded context of a single method or function call. Keeping that scope narrow makes them fast to write, fast to run, and practical to execute dozens or hundreds of times a day—this is why they’re considered the first line of defense.

For developers, the value of unit tests usually falls into three clear habits. First, keep them fast—tests that run in seconds give you immediate confidence. Second, isolate your logic—don’t mix in external calls or dependencies, or you’ll blur the purpose. And third, assert edge cases—null inputs, empty collections, or odd numerical values are where bugs often hide. Practicing those three habits keeps mistakes from slipping through unnoticed during everyday coding.

Here’s a concrete example you’ll actually see later in our demo. Imagine writing a small xUnit test that feeds order totals into a tax calculation function. You set up a few sample values, assert that the percentages are applied correctly, and make sure rounding behaves the way you expect. It’s simple, but incredibly powerful. That one test proves your function does what it’s written to do. Run a dozen variations, and you’ve practically bulletproofed that tiny piece of logic against the most common mistakes a developer might introduce.

But the catch is always scope. Unit tests prove correctness in isolation, not in interaction with other services. So a function that calculates tax values may pass beautifully in local tests. Then, when the function starts pulling live tax rules from Cosmos DB, a slight schema mismatch instantly produces runtime errors. Your unit tests weren’t designed to know about serialization quirks or external API assumptions. They did their job—and nothing more.

That’s why treating unit tests as the whole solution is misleading. Passing tests aren’t evidence that your app will work across distributed services; they only confirm that internal logic works when fed controlled inputs. A quick analogy helps make this clear. Imagine checking a single Lego brick for cracks. The brick is fine. But a working bridge needs hundreds of those bricks to interlock correctly under weight. A single-brick test can’t promise the bridge won’t buckle once it’s assembled. Developers fall into this false sense of completeness all the time, which leaves gaps between what tests prove and what users actually experience.

Still, dismissing unit tests because of their limits misses the point. They shine exactly because of their speed, cost, and efficiency.
An Azure developer can run a suite of unit tests locally and immediately detect null reference issues, broken arithmetic, or mishandled error branches before shipping upstream. That instant awareness spares both time and expensive CI resources. Imagine catching a bad null check in seconds instead of debugging a failed pipeline hours later. That is the payoff of a healthy unit test suite.

What unit tests are not designed to provide is end-to-end safety. They won’t surface problems with tokens expiring, configuration mismatches, message routing rules, or cross-service timing. Expecting that level of assurance is like expecting a smoke detector to protect you from a burst pipe. Both are valuable warnings, but they solve very different problems. A reliable testing strategy recognizes the difference and uses the right tool for each risk.

So yes, unit tests are essential. They form the base layer by ensuring the most basic building blocks of your application behave correctly. But once those blocks start engaging with queues, databases, and APIs, the risk multiplies in ways unit tests can’t address. That’s when you need a different kind of test—one designed not to check a single brick, but to verify the system holds up once pieces connect.
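For reference, here is a minimal sketch of the kind of xUnit test described above. TaxCalculator, its flat rate, and its rounding rules are hypothetical stand-ins, not code from the demo; what matters is the pattern of feeding several sample totals through a Theory and asserting on the percentages and the rounding.

// Minimal xUnit sketch of the tax-calculation test described above.
// TaxCalculator and its rules are hypothetical stand-ins for the demo code.
using Xunit;

public static class TaxCalculator
{
    // Assumed behavior: apply a flat rate and round to two decimal places.
    public static decimal Apply(decimal orderTotal, decimal rate) =>
        decimal.Round(orderTotal * (1 + rate), 2, MidpointRounding.AwayFromZero);
}

public class TaxCalculatorTests
{
    [Theory]
    [InlineData(100.00, 0.19, 119.00)] // straightforward percentage
    [InlineData(19.99, 0.07, 21.39)]   // rounding lands where we expect
    [InlineData(0.00, 0.19, 0.00)]     // edge case: empty cart
    public void Apply_AddsTaxAndRounds(double total, double rate, double expected)
    {
        // Attribute arguments can't be decimal literals, so convert inside the test.
        var actual = TaxCalculator.Apply((decimal)total, (decimal)rate);
        Assert.Equal((decimal)expected, actual);
    }
}

Tests like these run in milliseconds and need no Azure resources at all, which is exactly the speed-and-isolation payoff the section describes—and exactly why they say nothing about what happens once live tax rules arrive from Cosmos DB.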

Integration Tests: Where Things Get Messy

Why does code that clears every unit test sometimes fail the moment it talks to real services? That’s the territory of integration testing. These tests aren’t about verifying a single function’s math—they’re about making sure your components actually work once they exchange data with the infrastructure they depend on.
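As a hedged illustration of that layer, here is a sketch of an integration test that talks to a real Azure Service Bus queue instead of a mock. The environment variable, queue name, and payload are assumptions for the example; the point is that the test exercises serialization, configuration, and message routing—precisely the places where the earlier unit tests were blind.

// Sketch of an integration test against a real (test) Service Bus queue.
// Connection string, queue name, and payload are illustrative assumptions.
using System;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;
using Xunit;

public class OrderQueueIntegrationTests
{
    // Assumed to point at a dedicated test namespace and queue.
    private static readonly string ConnectionString =
        Environment.GetEnvironmentVariable("SERVICEBUS_TEST_CONNECTION")!;
    private const string QueueName = "orders-test";

    [Fact]
    public async Task ProcessedOrder_ReachesTheQueue()
    {
        await using var client = new ServiceBusClient(ConnectionString);

        // Send the serialized order the same way production code would.
        var sender = client.CreateSender(QueueName);
        await sender.SendMessageAsync(new ServiceBusMessage("{\"id\":\"o-123\",\"total\":119.00}"));

        // Receive it back: this proves the message actually made it through the queue,
        // something no isolated unit test can demonstrate.
        var receiver = client.CreateReceiver(QueueName);
        var received = await receiver.ReceiveMessageAsync(TimeSpan.FromSeconds(10));

        Assert.NotNull(received);
        Assert.Contains("o-123", received.Body.ToString());
    }
}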
