The A Team

Continued from the previous post..

So how do we succeed?
Lock up Enemy #1 - Accidental Complexity

Empower teams to choose

Not all projects/teams are the same: different needs, different tools. This may be difficult in BigCo, especially if the tools have already been bought. But make some noise - stand up for your tools; the "users" need to channel some feedback upstream to the "purchasers".
  • Explore options instead of resigning yourself to the golden hammer. Prefer tools that don't get in your way - ones that don't require you to learn yet another proprietary language. The ability to write extensions in your team's native language is a plus. This also opens avenues for developers to assist with automation work, if required.
  • Use existing tools instead of writing your own - they're likely to be functional, tried, and tested.
  • Avoid putting all your eggs in one basket. Keep tools/test-runners swappable by defining layers: to migrate to a different test runner, you should only need to migrate your thin tests layer, which calls into an automation layer below it that does most of the heavy lifting (see the sketch below). More on this soon..
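
A minimal sketch of that layering (all names are hypothetical, NUnit assumed): the thin test knows nothing about the tool driving the system; it only talks to the automation layer, so migrating to a different runner means rewriting only the thin class at the top.

    using NUnit.Framework;

    // Thin tests layer: easy to rewrite if the test runner ever changes.
    [TestFixture]
    public class CheckoutTests
    {
        [Test]
        public void Discount_is_applied_for_bulk_orders()
        {
            var shop = new ShopDriver();            // entry point into the automation layer
            shop.AddToCart("widget", quantity: 12);

            Assert.AreEqual(108.00m, shop.Checkout());
        }
    }

    // Automation layer: does the heavy lifting (driving the GUI/API, setup, teardown).
    // It has no dependency on the test runner, so it survives a runner migration intact.
    public class ShopDriver
    {
        public void AddToCart(string item, int quantity) { /* drive the system under test */ }
        public decimal Checkout() { /* drive the system, return the observed total */ return 108.00m; }
    }
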
Collaboration
forms a reinforcing loop against Silos/Specialists. Increasing collaboration decreases opportunities for specialization, which in turn facilitates more collaboration. Of course, the reverse is also true - silos can finish off collaboration. Only one shall survive; you just have to choose.

Outside-in / Test-first + Wishful Thinking
If you've tried the XP practice of TDD, you'd know the liberating feeling of letting the unit tests drive the design of the production code. You fix the end goal, make it work, make it clean, and repeat.
Starting with the test prevents any bias (arising from implementation details, existing tools at your disposal, etc.).
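
To make the cycle concrete, a tiny hypothetical sketch (NUnit assumed): the test is written first, against the API we wish existed; the production code is then shaped to satisfy it, cleaned up, and the cycle repeats.

    using System;
    using NUnit.Framework;

    // Red: written before any production code exists - pure wishful thinking.
    [TestFixture]
    public class PriceTests
    {
        [Test]
        public void Rounds_price_to_the_nearest_cent()
        {
            var price = new Price(10.005m);
            Assert.AreEqual(10.01m, price.Rounded());
        }
    }

    // Green: the simplest production code that makes the test pass.
    public class Price
    {
        private readonly decimal amount;
        public Price(decimal amount) { this.amount = amount; }
        public decimal Rounded() => Math.Round(amount, 2, MidpointRounding.AwayFromZero);
    }
    // Refactor: clean it up while keeping the test green, then repeat.
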

ATDD is the corresponding practice at the system level. However, it is not an easy practice to latch on to quickly, so work towards it in baby steps.
For starters, concentrate on a definition of DONE + writing tests first (before implementation) from the users' perspective, on a piece of paper. Make sure everyone has the same idea of DONE before you start the iteration.
As the team matures, you can even move up to ATDP (from the BDD world), where you write tests before or during iteration planning and use them for estimation.


WHAT over HOW
Ensures that the test is at the right level of abstraction (the ol' forest-over-trees adage). It makes the tests shorter and more readable. It also works beautifully to bring out the intent (as opposed to the implementation) of the test.
Specify the bare minimum: the things that are relevant to the test at hand. All other details need to be out of sight.
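
For example (all names hypothetical, NUnit assumed), compare a test entangled in the HOW with one that states only the WHAT:

    using NUnit.Framework;

    [TestFixture]
    public class TransferTests
    {
        private readonly BrowserDriver browser = new BrowserDriver();
        private readonly BankDsl bank = new BankDsl();

        // HOW: buried in GUI mechanics and details irrelevant to the behaviour under test.
        [Test]
        public void Transfer_via_the_gui()
        {
            browser.Navigate("/login");
            browser.TypeInto("username", "jdoe");
            browser.TypeInto("password", "s3cret");
            browser.Click("btnLogin");
            browser.Navigate("/accounts/12345/transfer");
            browser.TypeInto("amount", "100");
            browser.Click("btnSubmit");
            Assert.AreEqual("900", browser.TextOf("lblBalance"));
        }

        // WHAT: only the details relevant to the test at hand; everything else is out of sight.
        [Test]
        public void Transfer_reduces_the_source_account_balance()
        {
            var source = bank.AccountWithBalance(1000m);
            bank.Transfer(100m, from: source, to: bank.AnyOtherAccount());
            Assert.AreEqual(900m, bank.BalanceOf(source));
        }
    }

    // Stubs, only to keep the sketch self-contained.
    public class BrowserDriver
    {
        public void Navigate(string url) { }
        public void TypeInto(string field, string text) { }
        public void Click(string id) { }
        public string TextOf(string id) { return "900"; }
    }

    public class BankDsl
    {
        public string AccountWithBalance(decimal balance) { return "acc-1"; }
        public string AnyOtherAccount() { return "acc-2"; }
        public void Transfer(decimal amount, string from, string to) { }
        public decimal BalanceOf(string account) { return 900m; }
    }
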

Stable DSL for testing
You employ wishful thinking to imagine the ideal interface you'd like the system to expose for testing. Since the tests are another client of the system, they can also drive beneficial design changes. The tests stand in for real users, so if the system is difficult for the tests to consume, it follows that it might be for the users too. You could start with a plain C# interface and work your way up to a custom DSL. It
  • abstracts away incidental details like the GUI, underlying technology and other implementation details.
  • abstracts away the tools used for automation from the tests.
  • decouples the people interested in writing tests from the automation personnel. This allows both to play to their strengths and offers the best of both worlds. e.g. the testers could define the automation interface changes for the sprint and the developers could implement them with production-code-like quality.
  • makes it easy to write new tests with relatively little boot-up time. Writing a test then is just combining the reusable building blocks offered by the test DSL. The tests layer is a good training ground for new hires.
Imagine (wishful thinking) a robot that will operate the system for you, and think of the commands you'd issue to it. That set of commands is your starting point.
e.g. robot.DoX(params) or robot.GetY()
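
Concretely, a minimal C# sketch of such a starting point (every name here is a hypothetical illustration, not a prescribed API):

    // The commands we'd issue to a robot operating the system - the seed of the test DSL.
    public interface ISystemRobot
    {
        void RegisterUser(string name);                          // robot.DoX(params)
        void PlaceOrder(string user, string item, int quantity);
        decimal GetOutstandingBalance(string user);              // robot.GetY()
    }

    // One implementation could drive the GUI, another a REST API - the tests never know or care,
    // which is exactly what keeps the GUI, the tools, and other incidental details abstracted away.
    public class GuiRobot : ISystemRobot
    {
        public void RegisterUser(string name) { /* drive the registration screen */ }
        public void PlaceOrder(string user, string item, int quantity) { /* drive the order screens */ }
        public decimal GetOutstandingBalance(string user) { /* read the balance field */ return 0m; }
    }
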

Programming Skills
Automation is programming. Without good programming techniques and discipline, a sustainable pace will be difficult.
This means you need to raise the bar for automation personnel and/or leverage devs. If the team lacks the required skill-set, take countermeasures... training, getting some experts onboard, etc. The average skill level of the team can also be raised by frequent pairing.


Refactoring
Your #1 weapon against complexity. Beck's 4 rules of simple design, the techniques from the Refactoring book (Martin Fowler), and the SOLID principles are must-reads. Top that off with an introductory text on programming (e.g. Clean Code - Robert Martin) and you should be good to go.

Good Naming & Discoverable Design
Taking the time to pick good names goes a long, long way. Good names make it easy to find things, facilitate understanding, help you zero in on a specific area to change, and reduce duplication.
This also helps in being able to discover the design/API using just the IDE (learn by IntelliSense) and programmer intuition. Choose names that are likely to be searched for. Operate by the principle of least surprise (code that works as expected the first time around); avoid hidden side-effects. Document and use team conventions to preserve consistency.

Communicate Intent / Distill the essence
This takes WhatOverHow to the next level. Explain the Why, e.g. by extracting another "coarse" method to move up one level, or by differentiating sets of inputs with explanatory names. This reduces the test further to its essence - the tests turn into readable system documentation... the kind that never gets out of date.
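
A small sketch of the "coarse method" idea (hypothetical names, building on the kind of test DSL described above):

    using NUnit.Framework;

    [TestFixture]
    public class ShippingTests
    {
        private readonly ShopDsl shop = new ShopDsl();

        // Before: the Why (what makes Jane a gold member?) is buried in a string of DSL calls.
        [Test]
        public void Gold_members_get_free_shipping_verbose()
        {
            shop.RegisterUser("jane");
            shop.PlaceOrders("jane", count: 10);
            shop.PlaceOrder("jane", "widget");
            Assert.AreEqual(0m, shop.ShippingChargedFor("jane"));
        }

        // After: one coarse, intention-revealing step moves the test up a level and explains Why.
        [Test]
        public void Gold_members_get_free_shipping()
        {
            var goldMember = RegisterGoldMember("jane");
            shop.PlaceOrder(goldMember, "widget");
            Assert.AreEqual(0m, shop.ShippingChargedFor(goldMember));
        }

        private string RegisterGoldMember(string name)
        {
            shop.RegisterUser(name);
            shop.PlaceOrders(name, count: 10);   // gold status is earned after ten orders
            return name;
        }
    }

    // Stub, only to keep the sketch self-contained.
    public class ShopDsl
    {
        public void RegisterUser(string name) { }
        public void PlaceOrders(string user, int count) { }
        public void PlaceOrder(string user, string item) { }
        public decimal ShippingChargedFor(string user) { return 0m; }
    }
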

Learning Curve
Refactoring well and often keeps accidental complexity down to manageable levels. The supporting cast of pairing, a discoverable design, intention-revealing code, and a good testing DSL makes it easy for new team members to learn the ropes.
This inhibits cargo-cult behavior, and the changes made are deliberate/intentional rather than hopeful. Another source of complexity wanes.

Test Maintenance - the last frontier
Test maintenance, like complexity, can be minimized, not eliminated. As complexity decreases, maintenance effort reduces too.
The test DSL makes it possible to write and test the building blocks once and use them anywhere. Simple designs (no duplication, intention-revealing code, minimal classes) make maintenance much easier.

Transitively, the cost of automation goes down as well.

Let's refactor our diagram to remove the accidental nodes and edges - things get clearer now. Refactoring code is even more rewarding.

Towards better acceptance test automation...

This started out as a sketch of a Causal Loop Diagram (CLD) for bad acceptance test suites... and then it got away from me :) The black arrows indicate "+ve links", e.g. Duplication and Complexity increase/decrease together. The blue arrows ending in dots indicate "-ve links", e.g. Refactoring and Duplication work against each other: an increase in Refactoring causes a decrease in Duplication.
Click on it to be able to zoom in.



Automated tests != Free Lunch

Disclaimer: I'm a proponent of XP and truly believe it has made me a much better programmer. This post just aims to let readers see through the fog-of-agile caused by data-transfer loss as it passes from person to person. Please do not misinterpret this as an anti-agile rant. I'm just saying it doesn't always work out unless you're willing to put in the effort to make the change.

Legend
What you hear… (good) the Promised Land
  • What was left unsaid (bad.. or downright ugly)

You have an automated regression-safety net for making future changes. Make a change, push a button, and you will know if you broke anything. CI can provide near-instant feedback. Better confidence.
  • You have 2-3X the code to maintain. If you have people who don't care / are too busy / aren't passionate about code quality and writing good tests, the tests are the first to put a chokehold on your productivity. Bad tests are as good as (or possibly worse than) having no tests. You could see yourself in a situation where a section of your team is permanently siphoned off to keeping the build/tests green. This turns into a daily bottleneck. Tests need to be self-checking, thorough, readable, professionally written, independent, repeatable, concise, and FAST. All this takes effort!

Documentation - the tests can be "live specs" of the application - they never get out of date the way documentation does.
  • It takes a significant level of discipline and skill to write readable spec-tests. An essential skill: seeing the What and Why without getting entangled in the How. Most teams get this wrong... without noticing it.
  • Sidenote: The practice of ignoring failing tests is criminal (but usually not punished accordingly) and can lead to misleading specs.


Quality & Productivity: Leads to high-quality production code. Fewer bugs. More features added per unit of time (because you spend less time debugging and manually testing).

  • IF you let the tests drive/shape your design (ATDD and TDD). Client-first design is an unstated requirement.
  • The quality of the code is usually a direct reflection of the people writing it. This means you need craftsmen (> 30-50% of a team) and NOT armies of cargo-cult programmers.
  • If you're using automated tests exclusively for regression (or for getting your 'agile badge'), you'll slowly grind to a halt. Writing tests for "untestable blobs implemented from a non-negotiable handed-down paper design" is frustrating. People can be stuck on "how do I test this?" - this usually leads to silent trade-offs and non-thorough tests, which will let in bugs and put you in the net negative w.r.t. productivity.


Less rework/thrashing: The dialogue/conversation you have with the customer to come up with the acceptance tests makes it highly likely that you're building the right thing..
  • Assumes that the customers want to collaborate with the developers and testers. This is often not true.. Real users are sometimes really hard to find. Even if you manage to snag one of them, you can only procure a small slice of their time. Real users rarely want to write tests.

  • If the customers give a "vision" and delegate the responsibility of mapping it to executable specs to the technical team (or worse, the QA/testers), you still run the risk of "This is not what I asked for" late in the game. Regular demos may help shorten the feedback time.. but you may still waste an iteration. The magic potion here is collaboration and conversation.. the tests are just a beneficial byproduct.


Simple: Red-Green-Refactor. How hard can that be?
  • Sounds simple.. but it is deceptive. True OO is a minority sport. Refactoring is a skill that you need to work on; off-the-job practice is mandatory.
    You may need to "hire" a good coach for an extended period (I'd say 6 months to 1 release) to get the team rolling. Spot trainings/just-in-time learning won't work for most teams.