VeST Redux – Test rigs and external APIs

In my previous post, we touched on the concept of test rigs: reusable tests that can be run against many implementations. When using VeST for your systems, you’d build all of those components alongside your mains.

Usually, developers implement this locally using a mocking approach or in-place stubs, one call at a time. When applying VeST, that is a solved problem for your own components, and you can do the same for the service you’re trying to interact with. Doing it the VeST way allows you to implement your understanding of the API, encoding the documentation of the service in code.
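For instance, suppose the service you depend on is a movie catalogue. A first cut of that encoded understanding could look something like this (a minimal sketch: IMovieGallery’s members and the duplicate-title rule are illustrative assumptions, not a real API):

using System;
using System.Collections.Generic;

public class Movie {
	public string Title;
}

// the contract we extracted from the service's documentation
public interface IMovieGallery {
	void Add(Movie movie);
	Movie FindByTitle(string title);
}

// our understanding of the API, encoded as a first-cut in-memory implementation;
// behaviours we read in the docs (e.g. titles are unique) live here as code
public class InMemoryMovieGallery : IMovieGallery {
	readonly Dictionary<string, Movie> movies = new Dictionary<string, Movie>();

	public void Add(Movie movie) {
		// the documentation says adding a duplicate title is an error
		if (movies.ContainsKey(movie.Title))
			throw new InvalidOperationException("Duplicate title: " + movie.Title);
		movies[movie.Title] = movie;
	}

	public Movie FindByTitle(string title) {
		Movie movie;
		return movies.TryGetValue(title, out movie) ? movie : null;
	}
}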

But, because you’re now in charge of implementing things that the real API implements, you are encoding a bunch of expectations, and reality strikes again: the service doesn’t quite work the way it’s documented. Or maybe it has a bug. If you rely on documentation alone, you have no way of knowing whether your in-memory implementation behaves the way the service does.

One way to validate your understanding is to implement a test suite against your in-memory implementation that also runs against the real one, which I call a test rig. That allows you to run the exact same specification against both systems and ensure they agree on the results.

To do so, you can use a driver pattern, which I’m sure you’ve seen before.

public abstract class Movie_specification<T> where T : IMovieGallery, new() {
	IMovieGallery gallery;
	public Movie_specification() {
		gallery = new T();
	}
	/* implementation left out for brevity */
}

// run the same specification against both the real and the in-memory gallery
[TestFixture(typeof(MongoMovieGallery))]
[TestFixture(typeof(InMemoryMovieGallery))]
public class Rent_a_movie<T> : Movie_specification<T> where T : IMovieGallery, new() {
	public Rent_a_movie() {
		given_a_movie_available_for_watching(Movies.StarwarsPart4);
		given_a_user_account(UserPersonas.Bill, login: true);
		when_renting_a_movie(Movies.StarwarsPart4);
	}
	[Test]
	public void movie_is_playing() {
		ui.player.IsPlaying.IsTrue();
	}
}

And voilà, your current codebase can now test both your understanding of an API and its actual implementation.

As your tests start using more components, you can replace the T in those tests with a driver class that contains all your dependencies instead.

// going to mongo by default
public class MovieRentalDependencies {
	public Func<IMovieGallery> MovieGallery = () => new MongoMovieGallery();
}
// going to the in-memory BSON implementation
public class InMemoryMovieRentalDependencies : MovieRentalDependencies {
	public InMemoryMovieRentalDependencies() {
		MovieGallery = () => new InMemoryMovieGallery(new BsonSerializer());
	}
}
// updated test context
public abstract class Movie_specification<T> where T : MovieRentalDependencies, new() {
	/* as above */
}

To ship your test rigs, package them as a NuGet package; your consumers can then write their own implementations of IMovieGallery and plug them into the rig with a simple one-liner in their test project.

public class Plex_rent_a_movie : Rent_a_movie<PlexMovieGallery> {
	// etc
}
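If you ship the dependency-driver flavour of the rig instead, the consumer’s version is only marginally longer; something like this (PlexMovieGallery stands in for their own IMovieGallery implementation, and all names are illustrative):

// the consumer's driver, pointing the rig at their own implementation
public class PlexMovieRentalDependencies : MovieRentalDependencies {
	public PlexMovieRentalDependencies() {
		MovieGallery = () => new PlexMovieGallery();
	}
}

// and the rig itself, still a one-liner
public class Plex_rent_a_movie : Rent_a_movie<PlexMovieRentalDependencies> { }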

There are other ways to shave that yak, but this is the simplest and fastest for a few test specifications.


VeST Redux – Components, mains, simulators and test rigs

My introduction post, from way back when, focused on the idea that testing each class independently, in the conventional TDD way, has significant costs, and that I prefer to only test components that I expect to be used or replaced, that are out of my team’s control, or that have an independent usage interface.

To achieve this, it’s important to understand what the boundary of a system is. Depending on your modelling choices, it could be an entity or a subsystem, and some people even split this by RPC call.

Whatever your model, I apply the term “component” to mean any system that reacts to inputs and communicates with outputs over known contracts. This relates of course to many existing nomenclatures, but focuses on the idea that, however many classes and bits and bobs exist in a system, said system should exist logically as an independent cluster of functionality, with clear inbound and outbound boundaries. You will recognise the model from traditional hexagonal, or ports and adapters, architectures, as defined by Alistair Cockburn.

To reduce the friction caused by traditional class-driven TDD, I tend to test each of those clusters as a black box, by simulating the inputs, and building test rigs and simulators for the outputs. Note that input and output are used very liberally here, as many outputs also tend to provide inputs to the system.

As a drawing is worth a thousand words, here’s a little diagram of what I mean.

Our component, which is usually a cluster of many classes, is a functional unit that does things we find useful. It is usually triggered through an interface, which I call the “usage interface” here, covering UI inputs, timers, and other external system triggers. I represented one inbound plug, but as you can imagine, there are usually many.

On the right side, we have what this component needs to communicate with, say, an external system, a database, a file system, a log file, whatever.
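To make the diagram concrete, here is roughly what the skeleton of such a component could look like; every name in this sketch is made up for illustration:

// inbound plug: the usage interface through which the component is triggered
public interface IMovieRentals {
	void Rent(string title);
}

// outbound plug: a contract the component needs the outside world to fulfil
public interface IMovieGallery {
	Movie FindByTitle(string title);
}

public class Movie {
	public string Title;
}

// the component: possibly many classes internally, but one logical unit with a
// clear inbound boundary (IMovieRentals) and outbound boundary (IMovieGallery)
public class MovieRentalComponent : IMovieRentals {
	readonly IMovieGallery gallery;

	public MovieRentalComponent(IMovieGallery gallery) {
		this.gallery = gallery;
	}

	public void Rent(string title) {
		var movie = gallery.FindByTitle(title);
		// ...rental logic lives here, behind the boundary,
		// and gets tested as a black box through IMovieRentals
	}
}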

The goal of designing the system in this way is to reduce reliance on on-the-spot mocks, kill interaction testing if it has no visible benefits, and allow both ourselves and the consumers of our APIs to start testing against our systems as quickly as possible.

Each component our component-under-test uses has a contract, be it HTTP, a .NET interface, or some WSDL somewhere. But relying on contract definitions alone is rarely enough. To capture additional expectations, we need to encode that knowledge in code, as the single source of truth.

The mains in the diagram is an implementation of the contract on top of the system we actually want to talk to.

The simulator is another component, usually running in-memory, that encodes all the behaviours we understand about the contract. Very often, APIs have idiosyncrasies that are not reflected in their description formats, and more often than not, that knowledge gets lost in the usual turnover our teams suffer at the hands of short-sighted resource planners. An example here would be an in-memory module that simulates the semantics of mongodb’s driver, but ensures any document gets serialised to BSON.
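As a sketch of what such a simulator could look like, reusing the illustrative movie gallery contract from the previous post, and assuming the BSON helpers from the official MongoDB .NET driver (ToBson and BsonSerializer.Deserialize):

using System.Collections.Generic;
using System.Linq;
using MongoDB.Bson;
using MongoDB.Bson.Serialization;

public class Movie { public string Title; }

public interface IMovieGallery {
	void Add(Movie movie);
	Movie FindByTitle(string title);
}

// an in-memory gallery that mimics mongo's semantics: every document is
// round-tripped through BSON, so a type that doesn't serialise fails here
// exactly as it would against the real database
public class InMemoryMovieGallery : IMovieGallery {
	readonly List<byte[]> documents = new List<byte[]>();

	public void Add(Movie movie) {
		// serialising on the way in catches unmappable types early
		documents.Add(movie.ToBson());
	}

	public Movie FindByTitle(string title) {
		return documents
			.Select(doc => BsonSerializer.Deserialize<Movie>(doc))
			.FirstOrDefault(m => m.Title == title);
	}
}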

The test rig is an encoding of our expectations of the contract, for anyone implementing another main or another simulator. It is a set of reusable tests that others can run to make sure their implementations behave the way our system expects: respecting the contract both as encoded in code and as the behaviour described in prose.

And of course, the goal is to ship the mains, the simulator and the test rig, and to use the test rig in our own development to make sure the simulator and the mains implement the same contract.

In follow-up articles, I’ll give examples of how we can build that in .NET.


Agile furniture building

After the sudden loss of a friend to cancer, I took some time off posting on here. I’m back, so you shall have your daily dose of Seb again from Monday.

When redesigning this blog, I wanted more graphic content to illustrate my ramblings, and chose a picture of me carrying a dismantled desk, with colleagues behind me either bemused or aghast – feel free to provide a better caption in the comments :). So what is this all about?

One of my previous clients could be described as an organisation that attempted, for a while, to transform itself from a sclerotic big enterprise into a fast and agile environment. Like many organisations, some people got the memo and got on board, some trailed behind, and the latter included shared services, a.k.a. the people with the power to do things to desks.

The desks we were given had partitions, and I really dislike those. You can’t talk easily to your teammates, they encourage more clutter, and they get in the way of communicating. YMMV, but I don’t find them useful inside a team, especially the half-height kind that keeps your cat pictures private and forces you to crane your neck to talk to anyone, yet does nothing to protect you from the rest of the office noise; they’re just plain pointless. I know, I may be sitting on the fence on this one, but there.

So we put in a request to get them removed. Then we waited. A week went by, then a second, then a third, and still nothing. Eventually, one of the managers told us it could not be done.

Organisations don’t like to try things they’ve never tried before; they despise anyone who doesn’t fit the norm they set, and this applies across the board: to projects, people, and desks.

So we did what any good disrupters would do in a wannabe agile organisation: we analysed the problem by looking at how the desks were put together, discussed solutions, brought in the right tool for the job, and prototyped the removal on one of the desks, in a time-boxed fashion, to confirm that it was indeed completely feasible.

Once the first prototype was done, all other desks followed suit, and we had a much better time in that part of the office, being able to collaborate with our colleagues. I’m sure we ruffled many feathers, but that’s what you do when you adopt agile.

As it goes, after I left, the partitions came back up, the team offices got closed, projects got cancelled, so who knows what happened to that agile transformation.

The moral of the story is: if you have trouble getting a partition removed between two desks, beware of an organisation’s capacity to become agile. Change where you work, or change where you work.
