The A Team

Continued from the previous post..

So how do we succeed?
Lock up Enemy #1 - Accidental Complexity

Empower teams to choose

Not all projects / teams are the same. Different needs, different tools. This may be difficult in BigCo, especially if the tools have already been bought. But make some noise - stand up for your tools; the "users" need to channel some feedback upstream to the "purchasers".
  • Explore options instead of resigning yourself to the golden hammer. Prefer tools that don't get in your way - ones that don't require you to learn yet another proprietary language. The ability to write extensions in your team's native language is a plus. This also opens avenues for developers to assist with automation work, if required.
  • Use existing tools instead of writing your own - they're likely to be functional, tried and tested.
  • Avoid putting all your eggs in one basket. Keep tools/test-runners swappable by defining layers: to migrate to a different test runner, you should only need to migrate your thin tests layer, which calls into an automation layer below it that does most of the heavy lifting (see the sketch after this list). More on this soon..
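To make the layering concrete, here's a minimal sketch. NUnit and all the names below (OrderAutomation, PlaceOrder, etc.) are my assumptions for illustration, not prescriptions:

    using NUnit.Framework;

    // Automation layer: runner-agnostic, does the heavy lifting.
    public class OrderAutomation
    {
        public void PlaceOrder(string item, int quantity) { /* drive the system */ }
        public string GetOrderStatus() { /* query the system */ return "Confirmed"; }
    }

    // Thin tests layer: the only part tied to the test runner (NUnit here).
    [TestFixture]
    public class OrderTests
    {
        [Test]
        public void PlacedOrderIsConfirmed()
        {
            var automation = new OrderAutomation();
            automation.PlaceOrder("Widget", 2);
            Assert.AreEqual("Confirmed", automation.GetOrderStatus());
        }
    }

Migrating to a different runner means rewriting only the thin fixture; OrderAutomation stays untouched.
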
Collaboration
forms a reinforcing loop against Silos/Specialists. Increasing collaboration decreases opportunities for specialization, which in turn facilitates more collaboration. Of course, the reverse is also true - silos can finish off collaboration. Only one shall survive; you just have to choose.

Outside-in / Test-first + Wishful Thinking
If you've tried the XP practice of TDD, you'll know the liberating feeling of letting the unit tests drive the design of the production code. You fix the end-goal, make it work, make it clean and repeat.
Starting with the test prevents bias (arising from implementation details, the existing tools at your disposal, etc.).
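A toy illustration of that loop, with a hypothetical PriceCalculator (NUnit assumed):

    using NUnit.Framework;

    [TestFixture]
    public class PriceCalculatorTests
    {
        [Test]  // Red: written first, fails until PriceCalculator exists.
        public void BulkOrdersGetTenPercentDiscount()
        {
            var calculator = new PriceCalculator();
            Assert.AreEqual(90m, calculator.TotalFor(unitPrice: 10m, quantity: 10));
        }
    }

    // Green: the simplest thing that passes. Make-it-clean (refactor) follows.
    public class PriceCalculator
    {
        public decimal TotalFor(decimal unitPrice, int quantity)
        {
            var total = unitPrice * quantity;
            return quantity >= 10 ? total * 0.9m : total;
        }
    }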

ATDD is the corresponding practice at the system level. However, it is not an easy practice to quickly latch on to, so work towards it in baby steps.
For starters, concentrate on a DONE definition + writing tests first (before implementation) from the users' perspective, on a piece of paper. Make sure everyone has the same idea of DONE before you start the iteration.
As the team matures, you can even move up to ATDP (Acceptance Test Driven Planning, from the BDD world), where you write tests before or during iteration planning & use them for estimation.
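As an example, such a paper-first test can be captured as a pending NUnit test; the story and names here are invented:

    [Test]
    public void TransferMovesMoneyBetweenAccounts()
    {
        // Written before the iteration starts; DONE = this passes end-to-end.
        // Given an account with a balance of 100
        // When the user transfers 40 to another account
        // Then the source shows 60 and the target shows 40
        Assert.Ignore("Pending - automation to be filled in during the iteration.");
    }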


WHAT over HOW
Ensures that the test is at the right level of abstraction (the ol' forest-over-trees adage). It makes the tests shorter and more readable. It also works beautifully to bring out the intent (as opposed to the implementation) of the test.
Specify the bare minimum; only the things that are relevant to the test at hand. All other details need to be out of sight.
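A hedged before/after shows the difference (assuming Selenium WebDriver + NUnit; all the names are invented):

    // HOW: GUI mechanics that are irrelevant to the behavior under test.
    [Test]
    public void Search_TheHowVersion()
    {
        driver.FindElement(By.Id("searchBox")).SendKeys("widgets");
        driver.FindElement(By.Id("searchButton")).Click();
        var rows = driver.FindElements(By.CssSelector("#results tr"));
        Assert.IsTrue(rows.Count > 0);
    }

    // WHAT: only the details relevant to the test; the rest is out of sight.
    [Test]
    public void Search_TheWhatVersion()
    {
        var results = app.SearchFor("widgets");
        Assert.IsTrue(results.Count > 0);
    }

Here driver is a Selenium IWebDriver and app is a hypothetical automation-layer facade that hides those mechanics.
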

Stable DSL for testing
You employ wishful thinking to imagine the ideal interface you'd like the system to expose for testing. Since the tests are another client of the system, they can also drive beneficial design changes. The tests stand in for real users, so if the system is difficult to consume for the tests, it follows that it might be for the users too. You could start with a plain C# interface and then work your way up to a custom DSL. It
  • abstracts away incidental details like the GUI, underlying technology and other implementation details.
  • abstracts away the tools used for automation from the tests.
  • decouples the people interested in writing tests from the automation personnel. This allows both to play to their strengths and offers the best of both worlds. e.g. the testers could define the automation interface changes for the sprint and the developers could implement them with production-code-like quality.
  • makes it easy to write new tests with relatively little boot-up time. Writing a test then is just combining the reusable building blocks offered by the test DSL. The tests layer is a good training ground for new hires.
Imagine (wishful) a robot that will operate the system for you and think of the commands that you'd issue to the robot. That set of commands is your starting point.
e.g. robot.DoX(params) or robot.GetY()
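A hedged sketch of such a command set, as a plain C# interface (the domain and method names are invented for illustration):

    // The robot: an idealized operator for the system under test.
    // GUI, protocols and tools all hide behind this interface.
    public interface IRobot
    {
        // Commands - robot.DoX(params)
        void LogInAs(string user, string password);
        void PlaceOrder(string item, int quantity);

        // Queries - robot.GetY()
        string GetOrderStatus();
        decimal GetAccountBalance();
    }

Swapping the GUI for a web service, or Tool A for Tool B, means writing a new IRobot implementation - the tests themselves don't change.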

Programming Skills
Automation is programming. Without good programming techniques and discipline, a sustainable pace would be difficult.
This means you need to raise the bar for automation personnel and/or leverage devs. If the team lacks the required skill-set, take countermeasures: training, getting some experts on board, etc. The average skill level of the team can also be raised by frequent pairing.


Refactoring
Your #1 weapon against complexity. Beck's 4 rules of simple design, the techniques from the Refactoring book (Martin Fowler) + the SOLID principles are must-reads. Top that off with a text on writing clean code (e.g. Clean Code - Robert Martin) and you should be good to go.

Good Naming & Discoverable Design
Taking the time to pick good names goes a long, long way. Good names make it easy to find things, facilitate understanding, help zero in on a specific area to change & reduce duplication.
This also helps in being able to discover the design / API using just the IDE (learn by intellisense) and programmer intuition. Choose names that are likely to be searched for. Operate by the principle of least surprise (code that works as expected the first time around); avoid hidden side-effects. Document and use team conventions to preserve consistency.
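For instance (hypothetical methods), compare what intellisense and a text search give you in each case:

    // Hard to discover; forces the reader into the implementation.
    public void Proc(int t, bool f) { /* ... */ }

    // Searchable, intention-revealing, no surprises on first use.
    public void ArchiveOrdersOlderThan(int days, bool notifyCustomers) { /* ... */ }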

Communicate Intent / Distill the essence
This takes WHAT over HOW to the next level: explaining the WHY, e.g. by extracting another "coarse" method to move up one more level of abstraction, or by differentiating sets of inputs using explanatory names. This reduces the test further to its essence - the tests turn into readable system documentation... the kind that never gets out of date.
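A small sketch of both techniques, building on the IRobot idea above (the extra commands and names are invented):

    // Explanatory name differentiates this input from a valid coupon.
    private const string ExpiredCoupon = "XMAS-2012";

    [Test]
    public void ExpiredCouponIsRejectedAtCheckout()
    {
        CheckOutWithCoupon(ExpiredCoupon);
        Assert.AreEqual("Expired coupon", robot.GetCheckoutError());
    }

    // The extracted "coarse" method states the WHY in one line,
    // hiding the three steps that don't matter to the reader.
    private void CheckOutWithCoupon(string coupon)
    {
        robot.LogInAs("alice", "secret");
        robot.AddToCart("Widget");
        robot.EnterCoupon(coupon);
    }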

Learning Curve
Refactoring well and often keeps accidental complexity down to manageable levels. The supporting cast of pairing, a discoverable design, intention-revealing code and a good testing DSL makes it easy for new team members to learn the ropes.
This inhibits cargo-cult behavior, and the changes made are deliberate/intentional rather than hopeful. Another source of complexity wanes.

Test Maintenance - the last frontier
Test maintenance, like complexity, can be minimized, not eliminated. As complexity decreases, maintenance effort reduces too.
The test DSL makes it possible to write-and-test the building blocks once & use them anywhere. Simple designs (no duplication, intention-revealing code, minimal classes) make maintenance much easier.

Transitively, the cost of automation goes down as well.

Let's refactor our diagram to remove the accidental nodes and edges - things get much clearer now. Refactoring code is even more rewarding.
