An effective test strategy to grow applications

This is a post summarizing my understanding (as of today) of an effective, low-ceremony way to build applications.
Let's take a look at a (slightly modified) version of Mike Cohn's layered test pyramid.



The choice of a pyramid indicates that the robustness/stability of a lower layer directly affects the effectiveness of the layers above it. The number of tests also decreases as you move up. As the Agile Testing book says, ROI (speed of feedback over time invested) is highest at the bottom and wanes towards the top.
e.g. without robust unit tests, DSL tests or GUI tests would catch a bunch of errors but without the precise feedback needed to fix them quickly. More and more errors would make it up to the middle and top layers, where they are more time-consuming/expensive to find and fix. Nothing replaces well-written, quick, professional unit tests.
To prevent ambiguity and misinterpretation, let's go over each layer.


Layer 0: Unit-Integration-Stress tests
  • Unit tests should accumulate over time as a side-effect of test-driven development. Writing good unit tests is a tough art to master, but they are invaluable in the long term and essential for 'internal quality'. This layer also contains some integration tests and stress tests.
  • 'Integration tests' in this post means (as in the GOOS book): 'Let me see if the real collaborator behaves as I expect it to'.
e.g. Let's say Car needs to use the Driver database. Since we live in these times, we'd define a Role (interface) DriverRepository to abstract the database away from Car. In my unit tests, I'd inject a MockDriverRepository into a Car and test it out, so I know Car is good. Now there are some assumptions implicit in the Role/interface, and I need to verify that the real implementation behaves the same way - so I'd write some 'integration tests' against the real DriverRepository. These tests round-trip a third-party/external subsystem in order to produce a result. TestDriverRepository.GetsSpecifiedDriverInformation() would test whether we can get the specified driver's information out of a real database (sketched below, after this list). Integration tests are *slower* than unit tests, so we partition them into a different test suite that is not run as frequently as the unit tests - feedback needs to be as short as possible during development.

  • 'Stress tests' are for asynchronous code - code that has multiple threads running through it. In this case, you have to define some invariants: things that must remain true irrespective of the number and scheduling of threads running through the code (refer to the last couple of chapters of the GOOS book for details). This is also sketched below.
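To make the Car/DriverRepository example above concrete, here is a minimal sketch, assuming JUnit 4 and Mockito. Car, Driver, DriverRepository, DatabaseDriverRepository, the greeting behaviour and the test-database URL are all hypothetical stand-ins, not code from any real project.

```java
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;

// CarTest.java -- fast unit-test suite.
// Car is exercised against a mocked DriverRepository role, so the test stays quick
// and never touches a database.
public class CarTest {
    @Test
    public void greetsItsAssignedDriverByName() {
        DriverRepository repository = mock(DriverRepository.class);           // the Role, mocked
        when(repository.findById("d-17")).thenReturn(new Driver("d-17", "Alice"));

        Car car = new Car(repository);

        assertEquals("Hello, Alice", car.greetDriver("d-17"));
    }
}
```

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

// DatabaseDriverRepositoryTest.java -- slower 'integration' suite, run less frequently.
// The same Role, but exercised against the real implementation, round-tripping a real
// (test) database to check that it honours the assumptions baked into the interface.
public class DatabaseDriverRepositoryTest {
    @Test
    public void getsSpecifiedDriverInformation() {
        DriverRepository repository =
                new DatabaseDriverRepository("jdbc:h2:mem:drivers");          // hypothetical impl + in-memory test DB

        Driver driver = repository.findById("d-17");

        assertEquals("Alice", driver.name());
    }
}
```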
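And a minimal stress-test sketch for the asynchronous case. The invariant here is "a car can only ever be booked once, no matter how many threads race to book it"; BookingLedger and its tryBook method are hypothetical.

```java
import static org.junit.Assert.assertEquals;

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

import org.junit.Test;

public class BookingLedgerStressTest {
    @Test
    public void onlyOneOfManyConcurrentBookingsSucceeds() throws Exception {
        BookingLedger ledger = new BookingLedger();                 // hypothetical class under test
        int threads = 50;
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        CountDownLatch start = new CountDownLatch(1);
        AtomicInteger successes = new AtomicInteger();

        for (int i = 0; i < threads; i++) {
            pool.submit(() -> {
                start.await();                                      // line every thread up to maximise contention
                if (ledger.tryBook("sedan-42")) {                   // hypothetical: true only for the winning booking
                    successes.incrementAndGet();
                }
                return null;
            });
        }

        start.countDown();                                          // release all threads at once
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);

        assertEquals(1, successes.get());                           // the invariant must hold under any schedule
    }
}
```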


Layer 1: Under the GUI / DSL-API tests
This is the layer that's rarely given its due and usually missed. Its primary reason for existence is scenario testing just under the skin/GUI. These tests are written from the user's perspective. When you get this layer right, it's sweet: you will see a DSL emerging. Your tests read as short scripts written in an application-specific DSL, where the steps are things that an actual user would do or perceive. These tests can exercise the whole app much like a real user would, without having to bring up a GUI... as long as you have designed for testability (use an MVP/MVVM pattern).
These tests involve all real collaborators (no mocks/stubs/fakes). As a result, they are slower than unit tests. However, built on a bedrock of solid unit tests, they can take you to the finish line... almost.
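Here is a minimal sketch of what such a DSL-level test might look like, assuming JUnit 4. BookingApp is a hypothetical test facade that drives the presenters/view-models directly (real services and a real test database underneath, no GUI and no mocks); every step name below is invented for illustration.

```java
import org.junit.Test;

public class BookCarScenarioTest {
    // Hypothetical facade: boots the application 'under the skin' with a clean test database.
    private final BookingApp app = BookingApp.startWithCleanDatabase();

    @Test
    public void registeredDriverCanBookAnAvailableCar() {
        // Each step is something a real user would do or perceive -- the emerging DSL.
        app.registerDriver("alice");
        app.addCarToFleet("sedan-42");

        app.loginAs("alice");
        app.bookCar("sedan-42");

        app.showsBookingConfirmationFor("alice", "sedan-42");
    }
}
```

The payoff is that the same steps get reused across scenarios, so new tests stay short and readable.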

Layer 2: GUI Tests
Even with anemic views, there is still the matter of the 'wiring': is the GUI control wired correctly to the right property/action on the backing object? Irrespective of the strength of the layers beneath, you need a set of GUI tests that go truly end-to-end, exactly as a real user would. They're the slowest tests we've seen so far - so do not test everything through the GUI, or the test run would grow prohibitively long over time.
An interesting idea here is to test only the code that lives in the GUI in the GUI tests. I haven't tried this out myself, but it sounds like something that is just crazy enough to work.
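Here is a minimal sketch of that idea, assuming a Swing view, JUnit 4 and Mockito; BookingView, BookingPresenter and the bookButton accessor are hypothetical. The test asserts only the wiring - that clicking the button invokes the right action on the backing object - and leaves all real behaviour to the layers below.

```java
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

import org.junit.Test;

public class BookingViewWiringTest {
    @Test
    public void bookButtonInvokesBookCarOnThePresenter() {
        BookingPresenter presenter = mock(BookingPresenter.class);  // backing object, mocked
        BookingView view = new BookingView(presenter);              // hypothetical Swing view

        view.bookButton().doClick();                                // programmatic click, no window shown

        verify(presenter).bookCar();                                // only the wiring is asserted here
    }
}
```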

We're almost at the end. There are some aspects that we haven't yet tested.
  • Exploratory tests: let some good testers go ad hoc at your app, trying to break things. These tests are manual since they rely on the tester's creativity and insight (not on anything mechanical).
  • Usability tests: regular demos and getting some real users onto the latest builds go a long way toward ensuring usability. Testers can act as a proxy as long as they are aligned with the customer's profile and needs. These are manual too, since machines can't judge usability.
  • "ility" tests: These are tests that deal with performance, load, security, reliability, scalability, etc. When a feature is picked up for implementation, if it has any of the above 'ility' needs, they should be noted and a corresponding test be written up to validate them. (The point being.. don't wait till the end to start with this.) The tests can be written in a scripting language and use specialized tools/libraries if required.
Who does what?



An effective way to implement features is a top-down approach, where you begin by writing a GUI test / DSL-API test first and then work your way down. This way you only write code that a client needs, which leads to simple, usable APIs. Once you have a failing red acceptance test,
  • You can either figure out the top-level design needed to get it to work and test-drive the classes required, then combine them to make the acceptance test pass.
OR
  • you can begin by writing the code to make the acceptance test pass. Once green, in the refactoring step, you figure out the right "house" for the code: TDD the objects that you now know you need and fold them back into the running application (see the GOOS book if you're interested in this approach). A small sketch of this second option follows below.
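Here is that sketch, staying with the hypothetical booking example; the handler, table name and BookingRepository role are all invented for illustration.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// Step 1: just enough inline code to turn the red acceptance test green.
class BookCarHandler {
    private final Connection connection;                            // injected by the application

    BookCarHandler(Connection connection) { this.connection = connection; }

    void handle(String driverId, String carId) throws SQLException {
        try (PreparedStatement insert = connection.prepareStatement(
                "INSERT INTO bookings (driver_id, car_id) VALUES (?, ?)")) {
            insert.setString(1, driverId);
            insert.setString(2, carId);
            insert.executeUpdate();
        }
    }
}

// Step 2 (refactoring, once green): the inline SQL suggests a BookingRepository role.
// That object is test-driven on its own and folded back in, leaving the handler as:
//
//     void handle(String driverId, String carId) {
//         bookings.record(driverId, carId);    // BookingRepository, now a first-class collaborator
//     }
```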

That's it. This ties up nicely with Brian Marick's test quadrants idea.

