Extend MSTest: TestCaseSource, aka runtime inputs for test cases


Extending MSTest


Read this great post by William Kempf first. And it's all true!

 http://www.digitaltapestry.net/blog/extending-mstest

 Next, if you're willing to bear with all that, these are the only two posts from circa 2009 on the intertubes that give you any hope:

MSDN Blogs - Bruce Taimana: Part 1, Part 2
 

Writing an extension in 2013

The details have changed since 2009, and combined with the paucity of information, it was three days before I had something to show.

So my target extension was going to bring NUnit's TestCaseSource functionality into MSTest, whereby you supply the parameters for a parameterized test via a method, at runtime. (Compile-time inputs are already supported via XML files and databases... though cumbersome.)
[TestMethod]
[TestCaseSource("DivideCases")]  // parameters come from the method below, at runtime
public void MultipleParams(int n, int d, int q)
{
    Assert.AreEqual(q, n / d);
}

// Each inner object[] is one set of arguments for MultipleParams
public static object[] DivideCases()
{
    return new[]
    {
        new object[] {12, 3, 4},
        new object[] {12, 2, 6},
        new object[] {12, 4, 3}
    };
}

Sweeping the IDisposable minefield

IDisposable has been around since the beginning of .NET.
The basic premise is simple.

Developers don't need to manage memory any more; the garbage collector (GC) takes care of reclaiming it for you. However, the GC is non-deterministic: you can't predict when it will embark on a 'collection' of unused objects. So far so good.
However, there are cases where you want deterministic cleanup, e.g. if DB connections are at a premium, you want to reclaim them as soon as possible. Enter IDisposable. Simple concept: introduce a public Dispose method that you can call when you deem fit.

Creating types


Summary:

  1. Most managed types do NOT need to implement IDisposable. Just let the GC do its thing. You do NOT need to implement IDisposable if your type
    1. does NOT directly own any unmanaged resources (e.g. native handles, memory, pipes, etc.)
    2. does NOT directly own any managed members that implement IDisposable
    3. does NOT have special cleanup needs that must run on-demand/ASAP, e.g. unsubscribing from notifiers, closing DB connections, etc.
  2. If you need to implement IDisposable BUT do not (directly) own any unmanaged resources, you should NOT throw in a free finalizer.
    1. Consider whether you can make your type sealed. This simplifies the implementation a great deal.
    2. If you must have subtypes, implement the version with the virtual Dispose(bool) overload as detailed below. Again, rethink whether you can seal the type.
  3. If you directly own unmanaged resources that need cleanup, check for a managed wrapper type that you can use, e.g. a SafeFileHandle. If there is one, use it and fall back to 2. Still no finalizer.
  4. If you reach here, you need a finalizer. A finalizer pulls in IDisposable. Ensure that you have a deterministic Dispose implementation that makes the finalizer redundant and avoids the associated performance penalties. Log an error in the finalizer to call your attention to cases where clients have forgotten to call Dispose. Fix them.
    1. Consider creating a managed wrapper type, because finalizers are hard to get right, e.g. SafeMyNativeTypeWrapper. Deriving from SafeHandle is not recommended - better left to experts.
    2. Use GC.AddMemoryPressure and its counterpart to 'help' the GC if you are allocating significant amounts of native memory. Similarly, manage handles via the HandleCollector class (e.g. GDI handles). See this post for details, except I'd move the Remove.. calls into Dispose instead of the finalizer (see the sketch below).
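
To make point 4.2 concrete, here's a minimal sketch using a hypothetical NativeBuffer wrapper. Marshal.AllocHGlobal stands in for 'significant native memory'; per the note above, the RemoveMemoryPressure call lives in Dispose, while the finalizer only covers (and should flag) the forgot-to-Dispose case.

using System;
using System.Runtime.InteropServices;

// Hypothetical wrapper around a native allocation
sealed class NativeBuffer : IDisposable
{
    private IntPtr _buffer;
    private readonly long _size;
    private bool _isDisposed;

    public NativeBuffer(long size)
    {
        _size = size;
        _buffer = Marshal.AllocHGlobal(new IntPtr(size));
        GC.AddMemoryPressure(size); // 'help' the GC account for memory it can't see
    }

    public void Dispose()
    {
        if (_isDisposed)
            return;

        Marshal.FreeHGlobal(_buffer);
        _buffer = IntPtr.Zero;
        GC.RemoveMemoryPressure(_size); // the Remove.. call belongs here, not in the finalizer
        _isDisposed = true;
        GC.SuppressFinalize(this);
    }

    ~NativeBuffer()
    {
        // red flag: a client forgot to call Dispose - log it, then clean up anyway
        Marshal.FreeHGlobal(_buffer);
        GC.RemoveMemoryPressure(_size);
    }
}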

Programming against types that implement IDisposable

  1. Limit the scope of disposable types to within a method. Wrap them in a using block to ensure that Dispose is called (reliably) when control leaves the block - see the sketch after this list.
  2. If you need to hold on to a disposable type, i.e. as a member field, you need to implement IDisposable on the containing type. e.g. if A owns B owns C owns D, where D implements IDisposable, then A, B and C need to implement IDisposable as well.
  3. Do not dispose objects that you don't own. e.g. if you obtain a reference to an object from a container (e.g. a MEF container or a Form's Controls collection) OR from a static/global accessor, you don't own the object. Hence you shouldn't call Dispose on it and break other clients with an ObjectDisposedException. Leave it to the container to dispose it.
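
Here's a minimal sketch of points 1 and 2, using SqlConnection as the disposable; ReportGenerator is a hypothetical container type.

using System;
using System.Data.SqlClient;

class Demo
{
    static void Main()
    {
        // Point 1: scope the disposable to the method with 'using'
        using (var connection = new SqlConnection("Server=.;Integrated Security=true"))
        {
            connection.Open();
            // ... use the connection
        } // Dispose runs here, even if an exception was thrown

        using (var generator = new ReportGenerator())
        {
            // Point 2 in action: disposing the owner disposes what it owns
        }
    }
}

// Point 2: holding a disposable as a field makes the owner disposable too
sealed class ReportGenerator : IDisposable
{
    private readonly SqlConnection _connection =
        new SqlConnection("Server=.;Integrated Security=true");

    public void Dispose()
    {
        _connection.Dispose();
    }
}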

The long-winded version (with code snippets)

Para1: Avoid implementing IDisposable unless necessary; most objects don't need it.


If your type doesn't need IDisposable, you can stop reading here.


Para2: If you need deterministic cleanup, implement IDisposable (the minimal version).

  • All public members need to check whether _isDisposed is true and throw an ObjectDisposedException if so
  • Dispose can be called multiple times: once again, use _isDisposed and ignore all calls except the first one
  • Dispose should not throw exceptions
  • Call Dispose on disposable managed objects that this type owns. Corollary: Dispose will creep all the way up the object graph. e.g. if TypeA contains B contains C contains D and D implements IDisposable, then A, B and C need to implement IDisposable too.
  • (Managed memory leaks from events) Unsubscribe from all events that this object has active subscriptions to. Long-lived publishers can keep short-lived subscribers alive and prevent them from being collected. Try: clearing subscribers from your own events - set the event/delegate to null. Try: using the WeakEventManager/WeakReference types.
  • Seal your type - inheritance needs the full-blown Dispose pattern (later on in this post).


sealed class MyType : IDisposable
{
    // other code

    private bool _isDisposed;

    public void SomeMethod()
    {
        if (_isDisposed)
            throw new ObjectDisposedException(GetType().Name);
        // proceed..
    }

    public void Dispose()
    {
        if (_isDisposed)
            return;

        // cleanup

        _isDisposed = true;
    }
}



Para3: Avoid finalizers unless necessary.

When are finalizers necessary?
  • When you directly own unmanaged resources that need to be cleaned up AND there isn't a managed wrapper type that has the finalization routine nailed down, e.g. a SafeHandle derivation. If there is one, you can go back to the previous section.
Finalizers
  • slow down the collection process
  • prolong object lifetime - the object moves into the next generation (whose collection is even less frequent; C# in a Nutshell cites roughly ten Gen 0 collections for every Gen 1 collection)
  • are difficult to get right

Finalizers should
  • not block or throw exceptions
  • execute quickly
  • not reference other finalizable members (their finalizers may have already run)
  • log / raise a red flag to indicate that Dispose was not called.
sealed class HasUnmanaged : IDisposable
{
    public void Dispose()
    {
        Dispose(true);
        // prevent the object from being promoted to the next Gen / the finalizer call
        GC.SuppressFinalize(this);
    }

    ~HasUnmanaged()
    {
        LogSomeoneForgotToCallDispose();
        Dispose(false);
    }

    private bool _isDisposed;
    private void Dispose(bool isDisposing)
    {
        if (_isDisposed)
            return;

        if (isDisposing)
        {
            // dispose of managed resources (can access managed members)
        }
        // release unmanaged resources
        _isDisposed = true;
    }
}


Para4: Subclassing a Disposable type

If your type cannot be sealed, then it's time to bring out the big guns. Implement the base type as follows

class BaseType : IDisposable
{
    IntPtr _unmanagedRes = ...     // unmanaged resource
    SafeHandle _managedRes = ...   // managed resource

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    ~BaseType()
    {
        Dispose(false);
    }

    private bool _isDisposed;
    protected virtual void Dispose(bool isDisposing)
    {
        if (_isDisposed)
            return;

        if (isDisposing)
        {
            // managed resources dispose
            _managedRes.Dispose();
        }
        // unmanaged resource cleanup
        Cleanup(_unmanagedRes);
        // null out big fields if any
        _unmanagedRes = IntPtr.Zero;

        _isDisposed = true;
    }
}
  • If the derived type has its own resources that need cleanup, it overrides the virtual member as in MessyDerived below.
  • If the derived type does not have resources that need cleanup, just inherit from the base (SimpleDerived below). You're done.
class MessyDerived : BaseType
{
    // ... derived members

    private bool _isDisposed;
    protected override void Dispose(bool isDisposing)
    {
        try
        {
            if (!_isDisposed)
            {
                if (isDisposing)
                {
                    // derived managed resources
                }
                // derived unmanaged resources
                _isDisposed = true;
            }
        }
        finally
        {
            base.Dispose(isDisposing);
        }
    }
}


class SimpleDerived : BaseType
{
    // ... derived members
}

Of course, there will be edge cases. But for the most part, this should save you a lot of grief.
See also - Joe Duffy's post

Book Review: Writing Solid Code by Steve Maguire


4 out of 5 stars.  

This is an MS(!) Press book written by Steve Maguire in the early '90s. Why am I reading it now? Jim Weirich (of Rake fame) recommended it in an online video that I happened to watch.
The book was targeted at teams working in C (distilled from the author's stint at Microsoft), but the advice is pretty relevant even today if you can breeze through the code snippets. It seems to be out of print, but you might luck out and find an old copy like I did.

  • Lean on the compiler: turn on all the warnings by default. Disabling them should be the exception, not the rule.
  • Use static code analysis tools from Day 1. Fix issues regularly instead of letting them pile up.
  • If you have unit tests, use them.
  • Maintain a fortified debug version of your product with dev/debug-mode asserts, conditionally compiled so that the release version stays lean.
  • Use asserts to identify 'things that should not happen' as early as possible. (Of course, tests are much better :)
  • Reviewing the code (personally at least) before committing changes is the easiest and cheapest way to reduce bugs.
Design
  • Don't create 'candy machine interfaces' - make it hard for clients to make mistakes.
  • Eliminate 'undefined' behavior so that clients do not depend on it.
  • Don't bury error codes in return values - make them hard to ignore.
  • A function should do only ONE thing and do it well. (Actually I rephrased it; this line's from Clean Code.)
  • Don't use flag arguments.
  • Do not trade off client code readability for ease of implementation; make code legible at the point of call.
  • Avoid sharing/passing around global data.
  • Don't trade off global or algorithmic efficiency for local efficiency.
  • If you have to look it up, it isn't obvious. Make it boring and obvious.
  • Eliminate as many if branches as possible.
  • Write code for the "average" programmer. Simple over clever.
Attitude (IMO The best section of the book)
  • Bugs just don't go away; track them down.
  • Fix bugs now, not later.
  • Don't meddle with legacy code if you don't need to.
  • Don't add features if you don't need to. All flexibility has a cost (maintenance, testing, learning curve, etc.)
  • Don't keep trying solutions till one works. Take the time to find the correct one. Don't TRY.. READ.
  • Don't rely on testers to find your bugs, and don't shoot the messenger when they do find them.
  • Never allow the same bug to bite you twice. Fix your process to stonewall that type of bug.
The book has aged quite well over 20 years. Combined with on-topic anecdotes (the multicoloured screwdrivers one is a keeper), this is a good book to casually read while you wait for something to complete/load.


Note: The highlighted sections indicate Steve was agile before it became cool/a buzzword.

Lean Startup Book Review

Rated 3/5



The book is written in a lucid manner, with lots of anecdotes thrown in to illustrate the points being made. That said, I'd have liked it to be more concise (the essence could easily be extracted into a couple of blog posts).


I did a small experiment: I extracted the things that I felt were important take-aways and checked whether each was something I didn't already know (highlighted blue).


  • What over How. Deciding what to build is the crucial and more difficult problem (tip of the hat to Mr. Brooks).
  • Validate your value hypothesis and your growth hypothesis. (Fail fast, regular demos, real customers, et al. The growth angle was new for me.)
  • Cohort analysis, split testing.
  • Build a Minimum Viable Product ASAP; establish a baseline, tune, validate.
  • Metrics must be Actionable (demonstrate clear cause and effect), Accessible (simple, unambiguous, unrestricted), Auditable (verifiable - the ability to test data on demand against reality).
  • Reduce batch size, reduce WIP, pull model (Lean).
  • Growth engines (the best part of the book IMHO)
    • Sustainable growth = new customers result from the actions of past customers.
    • Sticky: engaging customers for the long term. Key metric: churn rate. The product grows when the rate of acquiring new customers exceeds the attrition rate.
    • Viral: not word of mouth; referral is a natural outcome of using the product. Revenue is usually indirect. Key metric: the viral coefficient - how many referrals does a single new customer beget? Greater than 1 indicates growth (e.g. if every 10 new customers bring in 11 more, the coefficient is 1.1 and the user base compounds).
    • Paid: revenue invested to acquire new customers. Key metric: revenue over the lifetime of a customer minus the cost per customer acquisition.
  • It's usually a human problem. Ask 5 whys - the answers usually go from technical to human issues. (Gerald Weinberg)
  • Retrospectives:
    • Be tolerant of all mistakes
    • Never allow the same mistake to be made again
    • Shame on us for making it so easy to make a mistake.



VS2010 and Resharper 5.1 cheatsheet


You can grab the VS2010 settings file here. Import it via VS2010 Main Menu > Tools > Import and Export Settings... and follow the wizard.


Here's a cheat sheet of the shortcuts that I love.
C => Control, M => Alt

GUI Testing rehab : Can we start saying NO?

Testing GUIs has been hard, tedious, painful... just bad. But it has remained an occupational hazard due to the lack of feasible alternatives.

There's a hard-earned confidence you get when you see a dancing UI twisting, turning... testing itself. And vendors smelled that from miles away... and then they homed in with tools. Over-simplified demos were given, influential people were influenced, buying decisions were made, tools were thrust on unsuspecting people... "The horror... the horror..."

But I digress. GUI tests were problematic because
  • Flaky: running tests would just fail without reason one fine day. Reissuing the test run would pass that test (the flashing-lights-test anti-pattern) but could cause a different intermittent failure. Trust goes down, tests get commented out... a dangerous path to tread.
  • Fragile: vulnerable to UI/UX changes. Some test broke because someone turned a textbox into a combobox, or worse, someone just redesigned a commonly used dialog. Time to throw someone into "the hole" again to record-and-replay / fix all those broken tests.
  • Time to develop/write: writing UI tests = tedium. Getting them to "stabilize" takes a while. "But we use a record-and-replay tool!"... put a pin in that one.
  • Time to execute: don't hold your breath; these tests take a while. Waiting for windows to pop up, scraping info out of controls, etc.
  • Quirky controls: there are always some automation party poopers - third-party controls that don't exhibit standard behavior or that the tool simply refuses to "see". But the UI is already "done"... time to call in some specialists.
  • Vendor lock-in and Specialists: our resident expert has vanished without a trace. Who can write/fix the tests? (Shrugs all around.) Instant emergency: "We surely can't swap tools now. How quickly can we hire someone who speaks ToolX?"
  • Misc dept: handling possible error dialogs so that the test doesn't block or wreck subsequent tests; sensitivity to the OS version, theme, screen resolution, etc.

"Enough!" you say. Is there any hope in this post at all?

Let's tackle them one at a time.
Fragility / UX sensitivity
What if we could extract named actions (a set of building blocks) that we could then use to build up our tests? Think Lego blocks (named actions) combining to become models (tests), limited only by your imagination and time.

e.g. let's say I want to test whether my (unnamed) mail client can receive emails:

[Test]
public void CanReceiveEmails()
{
    testMailServer.Setup(DummyEmails).For(username, password);
    mailClient.Start();
    mailClient.AuthorizeOfflineMailStoreAccess(datafile_password);
    mailClient.LoginToMailServerAs(username, password);
    mailClient.SendAndReceiveAllFolders();
    var actualEmails = mailClient.GetUnreadEmails();
    Assert.That(actualEmails.Count, Is.EqualTo(DummyEmails.Count));
    // more comprehensive checks for message content...
    mailClient.Stop();
}


So there, we have identified the actors in our test (I'll call them Drivers henceforth) and the corresponding keywords/actions that we need them to offer. How did that help us, you ask?

We have removed all traces of the UI from the test. So let's say LoginToMailServerAs changes from a modal application window to an inline standard widget provided by the specific mail server implementation. All I need to fix now is the implementation of the LoginToMailServerAs action, and all my tests stay unchanged.
Also, now everyone can just invoke LoginToMailServerAs as a magic incantation without worrying about how it works... it just does!

Separate intent (WHAT you want to do) from implementation (HOW you're doing it): compared to a run-of-the-mill UI test, the above test is much more readable. Easier to read, understand and fix/maintain.

Time to write: it still takes time, but the cost decreases as the store of named actions grows. Every keyword/action needs to be implemented only once - write once, use wherever you need it.

We've also lowered the technical expertise needed to write a test. Given the "drivers" (cohesive clumps of named actions), the requisite tooling and a brief walkthrough of the existing drivers, someone can discover the APIs to choreograph a specific script - a test. The focus shifts to testing/thinking rather than automation/coding.
Vendor lock-in and Specialists
  • The decline of the specialists: "That looks almost like an xUnit-style test!" You're observant. Yes, you could leverage whatever your developers are using for unit tests - which means anyone can now write a test. No more dependency on specialists, no learning curves for mastering a proprietary tool, no magic-tool licenses to buy. More money to distribute among the team (that last part is still fiction... but I'd bet you'll have a really motivated team the day after :)
  • Encapsulate tools: the tool is bounded by the box exposing the keywords. No one outside this box (the Driver) knows that you're using White, for instance. This makes the tool replaceable and the choice of tool a reversible decision.

But how do we implement the HOW, i.e. the keywords? Enter the Drivers themselves.

You could use an open-source library like White (or an equivalent) that can launch/attach to a running instance of a GUI app, find windows/controls and poke them - anything that helps you implement the ControlLocator role shown later.
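
To make that concrete, here's a sketch of what one keyword might look like inside a Driver built on White. This is illustrative only: the window title and automation IDs are made up, and the namespaces are from White's later TestStack incarnation, so treat the details as approximate.

using TestStack.White;
using TestStack.White.UIItems;
using TestStack.White.UIItems.Finders;
using TestStack.White.UIItems.WindowItems;

public class MailClientDriver
{
    private readonly Application _app;

    public MailClientDriver(Application app)
    {
        _app = app;
    }

    // The keyword hides all the UI plumbing; tests only ever see LoginToMailServerAs
    public void LoginToMailServerAs(string username, string password)
    {
        Window login = _app.GetWindow("Login"); // window title is hypothetical
        login.Get<TextBox>(SearchCriteria.ByAutomationId("UsernameBox")).Text = username;
        login.Get<TextBox>(SearchCriteria.ByAutomationId("PasswordBox")).Text = password;
        login.Get<Button>(SearchCriteria.ByAutomationId("OkButton")).Click();
    }
}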


Flakiness
This depends on your choice of UI controls and your automation library. e.g. with C#/WPF applications running on Windows, I've found White to be pretty robust - less than a 5% chance of White playing truant.

Enter VM/PM Tests
"That's it ??!! These are still UI Tests! What about writing all that nasty UI Automation code?" White has wrapped the nastiness within a bunch of control-wrapper types. (You could add your own too). However for special controls, you'd still need to get your hands dirty.

"But these tests still crawl!"

Beyond this, the target application has to be (re-?)structured or, as the self-righteous phrase goes, 'designed for testability'. Here's one idea that should work...

All of the remaining issues are due to the GUI. There are so many types of UI controls to automate, and waiting for windows and finding controls in large hierarchies takes time. What if I slice the UI out?
e.g. let's consider the Login named action (which involves bringing up the login dialog, entering the username and password, and clicking OK).

What if we design it such that the UI is thin (devoid of any code/logic) and merely "binds"/maps to fields and methods in a backing class? This means updating a control triggers the backing field to update and vice versa, and performing an action like clicking a button triggers the underlying method.



This technique has been known for some time now (Presentation Model - Martin Fowler (2004), OR Microsoft's variation called MVVM, which leverages .NET's built-in data-binding feature to make WPF apps faster to develop).

The only things the UI contains are the layout, the controls and the wiring to the underlying class. (Even that wiring can be automatic if you move into the realm of advanced MVVM - look for Rob Eisenberg's MIX talk, which uses convention to auto-bind.) The more important thing is that most of the code (and, as a corollary, the bugs) has moved into a testable class - the ViewModel/PresentationModel. The whole app is basically a symphony orchestrated by multiple presenters.

So instead of fidgeting with the UI, I can now just assign desired values to the corresponding properties and invoke the OK method to simulate the whole login process. Much better - plain method calls. What if I could load the whole app from the ViewModel layer down in my test process? That'd be great.
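
Here's a minimal sketch of that idea. The LoginViewModel, the IAuthService role and the fake are all hypothetical stand-ins; a real WPF app would expose Ok as an ICommand bound to the button rather than a plain method.

using System.ComponentModel;

// Hypothetical backing class: the view binds its textboxes to Username/Password
// and its OK button to the Ok method
public class LoginViewModel : INotifyPropertyChanged
{
    private readonly IAuthService _auth;

    public LoginViewModel(IAuthService auth) { _auth = auth; }

    public string Username { get; set; }
    public string Password { get; set; }
    public bool IsLoggedIn { get; private set; }

    public void Ok()
    {
        IsLoggedIn = _auth.Authenticate(Username, Password);
        RaisePropertyChanged("IsLoggedIn");
    }

    public event PropertyChangedEventHandler PropertyChanged;
    private void RaisePropertyChanged(string name)
    {
        var handler = PropertyChanged;
        if (handler != null) handler(this, new PropertyChangedEventArgs(name));
    }
}

public interface IAuthService { bool Authenticate(string username, string password); }

public class FakeAuthService : IAuthService
{
    public bool AcceptEverything;
    public bool Authenticate(string username, string password) { return AcceptEverything; }
}

// The VM test simulates the entire login without touching a single control
[Test]
public void LoginViaViewModel_SetsIsLoggedIn()
{
    var vm = new LoginViewModel(new FakeAuthService { AcceptEverything = true });
    vm.Username = "alice";
    vm.Password = "secret";
    vm.Ok();   // 'clicks' OK - a plain method call
    Assert.That(vm.IsLoggedIn, Is.True);
}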

Benefits:
  • Time to develop: no need to write UI automation code. Just call existing methods and set properties that the developers have (already) created as part of the implementation. Quick, simple and easy.
  • Time to execute: no more flashing windows, looking for controls and manipulating them. If you're able to load the whole app sans the UI layer within your test process, you are effectively creating a bunch of objects, toying with them and then letting the garbage collector clean them up. It's way slower than a unit test (because you're using all the real services, data stores, devices, etc.) but faster than a traditional GUI test. (Presentation-intensive tests will show a bigger gap compared to something that spends most of its time talking to a slow hardware device. YMMV.)
  • Quirky anti-automation controls: buh-bye! Instead of grappling with a third-party tree/grid that doesn't want to be found, you can just reach into the VM/PM layer and grab the friendly in-memory collection (that the control binds to) from the corresponding ViewModel/Presenter.

But wait, it gets better...
  • Decouples the testers from the implementation: as long as you give them some key information, the testers can start writing the tests.

    
    public interface ControlLocator
    {
        Window GetWindow(string title);
        T GetControl<T>(Window parentWindow, string automationId) where T : Control;
    }

    public interface ViewModelLocator
    {
        T GetViewModel<T>() where T : ViewModel;
    }
What testers need to know to implement the Drivers:
  1. For UI tests: parent window + control type + unique control ID
  2. For VM tests: ViewModel type + property name/command name
  • Enables test-first: testers don't have to wait until the whole thing is implemented to write the tests. With record-and-replay style tools, by contrast, you'd have to wait until the development team gives you the running application to begin test automation. This is especially important for teams practicing one of the Agile methods - you could now enable teams to move up to ATDD. (See the sketch below for how the same keyword can target either flavor.)
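
To tie the two flavors together, here's a hedged sketch of the same Login keyword implemented against each locator role above. It assumes TextBox/Button control-wrapper types from your automation library and the LoginViewModel sketched earlier (assume it derives from the ViewModel base); all names are illustrative.

// UI-flavored Driver: finds controls through the ControlLocator role
public class UiLoginDriver
{
    private readonly ControlLocator _locator;
    public UiLoginDriver(ControlLocator locator) { _locator = locator; }

    public void Login(string username, string password)
    {
        Window window = _locator.GetWindow("Login");
        _locator.GetControl<TextBox>(window, "UsernameBox").Text = username;
        _locator.GetControl<TextBox>(window, "PasswordBox").Text = password;
        _locator.GetControl<Button>(window, "OkButton").Click();
    }
}

// VM-flavored Driver: the same keyword, no UI at all
public class VmLoginDriver
{
    private readonly ViewModelLocator _locator;
    public VmLoginDriver(ViewModelLocator locator) { _locator = locator; }

    public void Login(string username, string password)
    {
        var vm = _locator.GetViewModel<LoginViewModel>();
        vm.Username = username;
        vm.Password = password;
        vm.Ok();
    }
}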
So let's do a recap.

Current: we started at the top, where teams have heavy investments in GUI testing. These tests are work magnets: maintenance-heavy, sucking in team resources, high cost-to-benefit compared to the PM/VM tests.

Target: by identifying reusable actions with intention-revealing names, we can construct tests much faster than before and at less cost. (Programmers will recognize this as the Extract Method refactoring.)
Further, by peeling off the UI layer, we get a scriptable interface (an API, so to speak) for the target application. We can write most of the system-level tests without the UI; most teams still like to write some UI tests just as a backup.

Stretch: finally, IF a team resolves to write comprehensive unit tests (such that most bugs don't make it past the green section), uses VM tests to catch integration defects and makes every defect an opportunity to fix the process, you could STOP writing UI tests altogether (James Shore is a proponent and seems to have had success with this). The time saved on UI automation can be put to better use - exploratory testing. Not all teams will get here, but if you make it, you'll never want to go back. You'd be able to deliver more features per unit time.

So what do you need to do?

  • Get enough user context to create a library of named actions, called keywords by some. Tests are written in terms of these keywords. Remember: WHAT, not how. e.g. EnterUserNameField() or ClickLogin() is bad; ask "WHY?" to chunk up and you should reach Login(username, password).
  • Let testers step into the shoes of the user and shape this interface outside-in. Pair them with a good programmer to ensure you have a "discoverable API", i.e. one that's easy to figure out on your own given tooling support (e.g. IDE IntelliSense).

For UI-less tests,

  • Follow a design technique like MVP or MVVM. Minimize code in the UI so that it's easier to test.
  • Ensure that you do not need the UI to start an instance of your SUT/application. Have a composition root (e.g. a Main() function where the app comes together).
  • Abstract out the user interaction: you can't pull a MessageBox or ShowDialog() out of thin air in ViewModel code. Create a Role, e.g. User. The production implementation of User will probably pop up dialogs; when you need to test without the UI, you replace it with a fake object controlled by your test (see the sketch after this list).
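
As a sketch of that last bullet (the interface and type names are mine, not from any framework):

using System.Windows;   // WPF MessageBox, for the production implementation

// The Role: ViewModels talk to 'the user' only through this interface
public interface IUser
{
    bool Confirm(string question);
}

// Production implementation: actually pops up a dialog
public class DialogUser : IUser
{
    public bool Confirm(string question)
    {
        return MessageBox.Show(question, "Confirm", MessageBoxButton.YesNo)
               == MessageBoxResult.Yes;
    }
}

// Test fake: the test scripts the 'user'; no dialog ever appears
public class FakeUser : IUser
{
    public bool AnswerToGive;
    public string LastQuestion;

    public bool Confirm(string question)
    {
        LastQuestion = question;
        return AnswerToGive;
    }
}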

Now, the preceding bullets are easier said than done for zillion-line legacy apps. For greenfield projects, though, I find this a very enticing alternative - there is no reason not to build it in, test by test. We've crossed out most of the perils of UI tests.




Feel free to question / enhance / criticize with objective reasons / list pros and cons... In the words of the Human Torch: "Flame on!"

NUnit vs MSTest - 2011 Edition

I have tried to be as objective as possible. Disclaimer: I have been an NUnit user since 2005-06.

Legend:
  • I use MSTest as an alias for the unit-testing framework bundled with VS2010 (v10.0.30319) throughout this post, although technically MSTest is just the runner. It's much easier to say than "Visual Studio QualityTools UnitTestFramework". For NUnit, I'm using v2.5.9.
  • Class-Setup/Teardown - executed ONCE before/after ALL tests
  • Test-Setup/Teardown - executed before/after EVERY test (see the attribute mapping sketch below)
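
For reference, here's how those hooks map onto attributes in each framework (a quick sketch, not exhaustive; note that MSTest's class-level hooks must be static):

using Microsoft.VisualStudio.TestTools.UnitTesting;
using NUnit.Framework;

[TestClass]
public class MsTestExample
{
    [ClassInitialize]   // Class-Setup: once before ALL tests (static, takes a TestContext)
    public static void ClassSetup(TestContext context) { }

    [TestInitialize]    // Test-Setup: before EVERY test
    public void TestSetup() { }

    [TestMethod]
    public void SomeTest() { }

    [TestCleanup]       // Test-Teardown: after EVERY test
    public void TestTeardown() { }

    [ClassCleanup]      // Class-Teardown: once after ALL tests (static)
    public static void ClassTeardown() { }
}

[TestFixture]
public class NUnitExample  // NUnit 2.5.x equivalents
{
    [TestFixtureSetUp]  // Class-Setup (an instance method is fine)
    public void ClassSetup() { }

    [SetUp]
    public void TestSetup() { }

    [Test]
    public void SomeTest() { }

    [TearDown]
    public void TestTeardown() { }

    [TestFixtureTearDown]  // Class-Teardown
    public void ClassTeardown() { }
}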

Major Differences
Migrating a test suite between the two runners might be non-trivial, depending on your test code. MSTest and NUnit are very different in the way they go about running a set of tests. e.g. consider a set of two classes containing tests: Class A with two tests, Class B with just one. If I insert logging at some critical points... (I am looking at the default behavior - no customization)