I was pleased to find recently that Roy Osherove’s The Art of Unit Testing was available on Safari.  I have been following Roy’s blog for a while now, and was quite excited at the prospect of him writing a book on Unit Testing.  It was only my personal cheapness that kept me from shelling out the $25 to get the e-book version from Manning ahead of time.  I have to say, now that I have read it, that it would have been well worth the money.  Before I get too deep I want to provide some context for what I am about to say.

I consider myself an experienced TDD practitioner and Unit Test Writer
So that means I was reading this book hoping to gain some insight.  I wanted to find out how to write better, more readable, more maintainable tests.  I was also hoping for a little bit of “atta-boy” affirmation that the way I do things is the “right” way.  The astute reader may notice that for the first hope to come true, the second has to give up some ground.  That was in fact the case, and to be honest, I got more value from the things I learned than from whatever ego stroking came with the things I am currently doing right.

So let’s get started….
I was expecting the book to start out essentially as it did: some brief history about the author and an introduction to Unit Testing for those who may not be familiar with it.  I have to say I was expecting the book to be a little more TDD-centric than it was, but I think most of that was my own bias for TDD as “The Only Way To Write Software”.  Roy actually explained what TDD was, and also why he wasn’t going to harp too much on it throughout the book.  I have to say, I can see why he made the decision that he did.  I can also say that it seemed perfectly clear to me that TDD is a technique that he feels has a lot of value, which made me happy.  Since this is supposed to be a review from the perspective of an experienced practitioner of TDD and Unit Testing, I’m not going to go into anything that was touched on in the early chapters, apart from noting that they contained a general introduction to the tools, techniques, and philosophy of unit testing.  I can also say that, though I was already familiar with the material, I didn’t mind reading through it at all.  Overall, Roy’s writing style was light and quite pleasant, even for a technical book.

And now into the meat of the book…
For me, things started getting interesting in Part 3 of the book.  This is where issues of test design and organization are addressed.  This is one of those areas where I feel I need some guidance, mostly because I developed my testing idioms through habit, and trial and error.  I look back on tests I have written in the past (which could be as little as two days ago) and I wonder how I could have come up with such a brittle, unmaintainable nightmare.  I feel like I need guidance from the experts on what I can do better when writing my tests.  Roy delivered on these items in chapter 7, “The pillars of good tests”.  One of the lessons I took away from this was the value of testing one concept per test.  I had heard this as “one assert per test” in the past, and scoffed at the idea.  But Roy presents a very compelling argument for why this is a good idea: if you are testing multiple concepts, you don’t know the extent of the problem when your test fails.  And let’s face it, the failing test is the reason we’re doing this whole thing.  I’ve personally seen the failing test that just keeps failing: you tackle the issue from one failed assert only to rebuild, and find another right after it which fails as well.  One of the issues I’ve had with this is the redundant setup and configuration required to exercise a single concept, but this is addressed by the straightforward recommendation of creating clear and understandable configuration methods.  In the past I have generally not been very good about applying DRY to my test setup, which, I know, is another case of treating tests differently from regular code.  Having someone in a position of authority (like Roy) say, “put your setup in separate methods so you can re-use them and make your tests more readable” made it okay to do the thing that I knew I should be doing anyway.  The key concepts covered are making tests readable, maintainable, and an accurate portrayal of the author’s intent.
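
To make that concrete, here is a minimal NUnit sketch in the style the book encourages: one concept per test, with shared setup pulled into a factory method.  The LogAnalyzer class shown here, and its rules, are my own simplified invention for illustration.

using NUnit.Framework;

// LogAnalyzer is a stand-in for whatever class you are testing.
public class LogAnalyzer
{
    private readonly int _minNameLength;
    public LogAnalyzer(int minNameLength) { _minNameLength = minNameLength; }

    public bool IsValidFileName(string name)
    {
        return name.Length >= _minNameLength && name.EndsWith(".log");
    }
}

[TestFixture]
public class LogAnalyzerTests
{
    // Shared setup lives in one well-named factory method: DRY, and readable.
    private static LogAnalyzer MakeAnalyzer()
    {
        return new LogAnalyzer(8);
    }

    // Each test verifies exactly one concept, so a failure pinpoints the problem.
    [Test]
    public void IsValidFileName_NameTooShort_ReturnsFalse()
    {
        Assert.IsFalse(MakeAnalyzer().IsValidFileName("a.log"));
    }

    [Test]
    public void IsValidFileName_WrongExtension_ReturnsFalse()
    {
        Assert.IsFalse(MakeAnalyzer().IsValidFileName("somefile.txt"));
    }
}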

Even more in depth….
Section 4 goes even further and talks about how to integrate unit testing into an organization which is not already doing it.  This is an interesting subject to me, as I have recently moved to a company which has not been doing unit testing and TDD as part of its regular development process.  Roy draws on his experiences as a consultant to provide some really good advice for how to go about enacting this sort of change in an organization.  I was particularly pleased with his candor when he describes his failed attempts at integrating unit testing.  It would have been quite easy to simply say “Based on my considerable expertise, these are the things you need to do”, but he chooses instead to share some real-world experience in a straightforward way that only adds to my respect for him as a professional.  In addition to this, he touches on techniques for integrating testing into “legacy” code (i.e. code which is not tested).  He does a good job of introducing some techniques for testing what is essentially untestable code, with a very large nod to Michael Feathers’ “Working Effectively with Legacy Code”.

The book ends with three appendices: one discussing the importance of testability in the design process, one listing various testing tools (both Java and .Net), and the last listing guidelines for conducting test reviews.  This last one is nice, because it presents a concise view of all of the guidelines presented throughout the book, and provides page references where you can get the “why” behind each.

All in all…
This is a really good book, which should be part of any agile development library.  Whether you are writing your first unit tests or you’re a seasoned pro, there is going to be something here for you.  I think it is great that Roy has chosen to share his experience with the developer community in this way.  I came into this book with some rather high expectations, and I think they were met.

A note on TypeMock….
I remember seeing some criticism floating around on Twitter suggesting the book was rather pro-TypeMock.  There was also the comment that Roy’s affiliation with TypeMock was not made clear early on.  I can’t say I saw either of these things when I was reading it.  For starters, I already knew Roy worked for TypeMock, so perhaps that skewed my ability to objectively judge whether the disclosure was done in a timely manner or not.  I can say that the places in the book where there seemed to be a preference for TypeMock were places where he stated things like “I feel TypeMock has a better syntax in this case”, or “TypeMock is the only tool which provides these capabilities”.  The first is simply a statement of preference.  Sure, Roy helped design the API for TypeMock, so it seems only natural that he would prefer it to other frameworks, but having used it I would have to agree with the statement.  It is a great API, and an example of a fluent interface done well.  The second comment is also plain fact.  Of the mocking libraries available in the .Net space, TypeMock is the only one that allows you to swap instances of objects in place, without making changes to the classes using them.  You can argue over whether this is a good or a bad thing, but the fact remains that it is a feature specific to TypeMock.  Maybe I was expecting something more blatant and obvious, but I just didn’t see it.


Unit Testing ASP.NET? ASP.NET unit testing has never been this easy.

Typemock is launching a new product for ASP.NET developers – the ASP.NET Bundle – and for the launch will be giving out FREE licenses to bloggers and their readers.

The ASP.NET Bundle is the ultimate ASP.NET unit testing solution, and offers both Typemock Isolator, a unit test tool, and Ivonna, the Isolator add-on for ASP.NET unit testing, for a bargain price.

Typemock Isolator is a leading .NET unit testing tool (C# and VB.NET) for many ‘hard to test’ technologies such as SharePoint, ASP.NET, MVC, WCF, WPF, Silverlight and more. Note that for unit testing Silverlight there is an open source Isolator add-on called SilverUnit.

The first 60 bloggers who will blog this text in their blog and tell us about it, will get a Free Isolator ASP.NET Bundle license (Typemock Isolator + Ivonna). If you post this in an ASP.NET dedicated blog, you’ll get a license automatically (even if more than 60 submit) during the first week of this announcement.

Also 8 bloggers will get an additional 2 licenses (each) to give away to their readers

I’m currently finding myself in the midst of an evolutionary change.  And I’m not talking about my super-human mutant powers; I’m talking about the way I’m thinking about solving a specific set of problems.

Let’s start with a sample….Let’s take something like processing credit cards as a benign and IP-free place to start.  As a subject that I really have no practical experience with, it seems like an appropriate choice.  I’m going to assume that there are different rules for doing checksum validation on credit card numbers, depending on what the card is (Mastercard/Visa/Discover/etc.).  Now, here is evolutionary step 1: use a basic case statement to process the various cards.  Here is what something like that would look like:

// Dispatch to the appropriate checksum routine based on the card type.
switch (CardType)
{
    case CardType.MasterCard:
        CardValidators.MasterCardValidator(card.CardNumber);
        break;
    case CardType.Visa:
        CardValidators.VisaValidator(card.CardNumber);
        break;
    case CardType.Discover:
        CardValidators.DiscoverValidator(card.CardNumber);
        break;
    default:
        // An unrecognized card type is almost certainly a bug; fail loudly.
        throw new ArgumentOutOfRangeException("CardType");
}


This looks pretty straightforward, and as it stands it isn’t too bad from a maintainability standpoint.  But what happens when there are many different types of cards?  And then what happens when you find a large amount of duplication between the validation functions?

Well, any student of GoF should be able to tell you that a Chain of Responsibility pattern looks like a perfect fit for this sort of scenario.  So, evolutionary step 2: create separate classes to handle the different types of validation, and configure them in a Chain of Responsibility, where each instance decides for itself whether it can process the input.

Here is a quick and dirty look at what something like that would look like:
[Class diagram: an ICardValidator interface, an AbstractCardValidator base class exposing GetValidator(), and concrete MasterCardValidator, VisaValidator, and DiscoverValidator classes, each with its own CanValidate() implementation.]


The two most interesting things here are the GetValidator() method in the AbstractCardValidator, and the individual CanValidate() methods in the concrete implementations.  This allows each class to decide for itself how it is going to determine whether or not it can be used as a validator for a specific card (that’s the CanValidate() part), and also provides a single point which the consumer of the API can use to get the validator for the card instance they have.  You would probably want to build an Abstract Factory around this, which would instantiate all of the ICardValidator classes, and then run the GetValidator() method to get the correct one.
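
To illustrate, here is a stripped-down sketch of that arrangement.  The Card class, the prefix check inside CanValidate(), and the way the chain is supplied are all invented for illustration.

using System;
using System.Collections.Generic;

public class Card { public string CardNumber { get; set; } }

public interface ICardValidator
{
    bool CanValidate(Card card);
    bool Validate(Card card);
}

public abstract class AbstractCardValidator : ICardValidator
{
    public abstract bool CanValidate(Card card);
    public abstract bool Validate(Card card);

    // Walk the chain and return the first validator that claims the card.
    public static ICardValidator GetValidator(Card card, IEnumerable<ICardValidator> chain)
    {
        foreach (ICardValidator validator in chain)
        {
            if (validator.CanValidate(card))
                return validator;
        }
        throw new NotSupportedException("No validator registered for this card.");
    }
}

public class MasterCardValidator : AbstractCardValidator
{
    // Each concrete validator decides for itself whether it applies.
    public override bool CanValidate(Card card) { return card.CardNumber.StartsWith("5"); }
    public override bool Validate(Card card) { return true; /* real checksum logic here */ }
}

The Abstract Factory mentioned above would simply instantiate the concrete validators, hold them as the chain, and expose GetValidator() to callers.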

Now we are at a point where things are looking pretty good; we’ve got the ability to do some fairly complex logic to make the decision about which validator to use, and we have a way to simply ask for one, and the correct one appears.  Pretty cool.

This is actually the place where I have found myself in the not too distant past.  I have previously been perfectly content with this arrangement, and been fairly happy with the separation of concerns among the classes…I mean, after all, who better to decide whether or not a specific class should be used to validate a card than the class itself?  So what is the issue?  Well, recently I have become aware of two problems with this arrangement: tight coupling, and a violation of the Single Responsibility Principle.  Let’s start with the first:

Tight Coupling
The credit card example may be a bit contrived when it comes to this issue, but bear with me.  Overall, the issue is that specific instances of ICardValidator objects are being created and handed around.  The use of an interface and an Abstract Factory pattern actually helps the situation some, but effectively all it does is move the coupling from the consuming class to the Factory (okay, it also consolidates the coupling in a single class, which makes maintenance a lot easier).  As I said, contained, but still there.  It would be nice if the Factory didn’t need any knowledge of which concrete implementations of ICardValidator were out there.  Before we tackle that, though, let’s also look at the second issue:

Violation of the “Single Responsibility Principle”
The SRP states that a class should have one, and only one, thing it is responsible for.  Sounds pretty easy, doesn’t it?  The problem is that this can be difficult to achieve without a fair amount of discipline.  The violation of SRP which I’m seeing is that the ICardValidator is responsible for both validating a credit card and determining which validator is appropriate.  But wait!  Didn’t I just say that moving this check into the ICardValidator instance was a “Good Thing”?  Well, let’s go as far as saying it is better than the previous method, but still not perfect.  Applying the SRP would move the task of selecting a validator out of the ICardValidator instance, and put it somewhere on its own.  And so we come to our:

Inversion Of Control Container.
That’s right, we are now going to get crazy and move the responsibility of creating these instances to another component altogether.  The nice thing about this is that it allows us to move all of the knowledge about dependencies off somewhere else.  How does this apply to this example?  Well, let’s assume we have an object of type Card which requires as a dependency an instance of an ICardValidator.  We’ll also assume that Card is subclassed based on the type of credit card.  It now becomes trivial to configure our IoC container to supply a specific implementation (read: sub-type) of ICardValidator for each implementation (again, read: sub-type) of Card.  Now, when you want a Card instance, you ask the IoC container for one, and depending on what type of card it is, you will get the appropriate ICardValidator as well.
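
As a rough sketch of what that wiring could look like using Castle Windsor’s fluent registration API, building on the types from the sketch above.  The MasterCard subclass and the component names are hypothetical, and the exact registration calls vary by container and version.

using Castle.MicroKernel.Registration;
using Castle.Windsor;

// Hypothetical Card subtype that declares its validator dependency.
public class MasterCard : Card
{
    private readonly ICardValidator _validator;
    public MasterCard(ICardValidator validator) { _validator = validator; }
    public bool Validate() { return _validator.Validate(this); }
}

public static class ContainerSetup
{
    public static IWindsorContainer Build()
    {
        var container = new WindsorContainer();
        container.Register(
            // Register each validator under its own name...
            Component.For<ICardValidator>().ImplementedBy<MasterCardValidator>().Named("validator.mc"),
            // ...and tie each Card subtype to the matching validator.
            Component.For<MasterCard>()
                     .DependsOn(Dependency.OnComponent(typeof(ICardValidator), "validator.mc")));
        return container;
    }
}

// Usage: var card = ContainerSetup.Build().Resolve<MasterCard>();
// The card arrives with the correct ICardValidator already injected.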

What’s the catch?  Well, there is some additional complexity which will show up somewhere in the application due to the IoC, but typically IoC configuration can be delegated down to the configuration-file level, so even then the ugliness is pushed away to its own dark corner.

But wait!  Why should we have different instances of Card?  What if the Card class is just a container for the card data?  Well, our IoC container still gives us some advantages.  If we look back at our first example with the switch statement, we’ve got a nice CardType enum, which could be a property of our Card class.  Using an IoC container like the one provided by the Castle project, you have the ability to configure a key string for your instances.  This would make it trivial to map the enum choices to specific keys within the container, which the Card class would use to get an ICardValidator instance.  This would also make it possible to make the validators slightly more advanced by adding something like a Decorator pattern, in which specific aspects of the validation could be factored into separate classes and then “stacked” to produce the final validation logic.  (This is the same concept used by the Stream classes in .Net and Java: you can modify the behavior of a stream by passing it to the constructor of a stream with different behavior.)
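
Here is a sketch of that keyed lookup; the “validator.” naming convention is just something I made up, and it assumes the validators were registered under matching names as in the earlier sketch.

using Castle.Windsor;

public enum CardType { MasterCard, Visa, Discover }

public class ValidatorLookup
{
    private readonly IWindsorContainer _container;
    public ValidatorLookup(IWindsorContainer container) { _container = container; }

    public ICardValidator For(CardType type)
    {
        // Relies on a registration convention: "validator.MasterCard", "validator.Visa", ...
        return _container.Resolve<ICardValidator>("validator." + type);
    }
}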

It is definitely worth mentioning that there is a sudden appearance of tight coupling to the IoC container itself from our consuming classes.  You probably want to abstract away the fact that the IoC container exists from the majority of the application.  Factory classes go a fair way toward making this happen, but another good idea is to introduce a single service to do type resolution.  The Factory classes can then ask this service for the objects they want, and they never need to know the IoC container is there.  This approach also gives you the ability to create some objects using IoC and others in another (more traditional) way.
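
For example, a minimal resolution service might look like this; ITypeResolver is my own name for it, not a standard interface.

using Castle.Windsor;

// The rest of the application sees only this interface.
public interface ITypeResolver
{
    T Resolve<T>();
    T Resolve<T>(string key);
}

// The one class that knows Castle Windsor exists.
public class WindsorTypeResolver : ITypeResolver
{
    private readonly IWindsorContainer _container;
    public WindsorTypeResolver(IWindsorContainer container) { _container = container; }

    public T Resolve<T>() { return _container.Resolve<T>(); }
    public T Resolve<T>(string key) { return _container.Resolve<T>(key); }
}

Since the Factory classes depend only on ITypeResolver, swapping containers, or dropping the container for some types, touches exactly one class.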

So is this it?  Have I finally found the answer I’ve been looking for?  It’s hard to say right now.  For the time being this is a decent way to handle things, provided the complexity of the underlying system and the need for loose coupling are both high enough to justify the additional complexity of the IoC.  But who knows, in another couple of months I may find something new, or even something old, which seems better, cleaner, simpler.  That, after all, is my final goal….And I need to remind myself of that regularly, lest I become complacent.