Agile Anarchy – Is Agile dead?

With a previous client, we started out in a traditional Scrum environment. We had a wall with columns and estimations, there were some Scrums of Scrums, we had fortnightly retrospectives, we were pairing; all the traditional checkboxes of Agile were ticked. What could possibly go wrong?

Pair programming

With all this in place, after one of the team members went on holiday, we realised that no one knew enough about the systems he had built to fix a production issue.

In that team, pairs rotated; this shouldn't have happened. In another team, pairs rotated all the time, and yet the codebase never achieved the level of uniformity you would expect from people working together for months.


Retrospectives

Retrospectives were done everywhere, and some descended into general moaning sessions, with no positive proposals. If you ask people what is wrong, they will find something.

Positively reinforcing good behaviours or techniques once every two weeks very rarely resulted in that reinforcement actually happening.

The daily stand-up

Oh, the stand-ups: the descent into endless conversations about bug 78, or the patient waiting while each developer discussed what was done yesterday, when you have no idea of the context and no understanding of how it impacts you, or anything else.

The wall

We had walls, lots of them. The team had their wall, another team had a wall replicating things, the PM had his own “wall” (a PowerPoint slide). There was another wall we had no clue about that had our team name on it, but wasn't maintained by anyone. In doubt, it was left there.

The wall rarely highlighted anything beyond the next item to work on in each of the isolated columns: QA would take whatever card they had the time to do, and when QA had too much work, well, the column grew.

Statistics were irrelevant, because the columns changed often enough to invalidate them. They were still calculated, in the JIRA copy of the wall, because you had to have that too, for performance reviews of POs or whatnot.


Testing

TDD was done sometimes, hampered by infrastructure access, slow-running tests, and Selenium tests that kept breaking and cost a fortune in maintenance.

We had plenty of unit tests, and some even tested code that was real. The rest covered code that was either feature-switched off, or dead but never removed.

We had code coverage figures, and of course those had to be reported monthly to a head of IT, in an Excel spreadsheet. Of course it wasn't to compare teams to one another; it was for something else. We didn't use them, but if someone wanted them, we had them, at least for the part of the codebase we could actually verify was under test.


Planning poker

We had planning poker, to judge the size of each card. Many hours were spent debating whether a card was a task, a story, an epic, or business-as-usual. Because, you know, the rest of what we do is not part of business as usual.

We spent a good four hours, in a tense environment, drilling into why things were big or not, and debating with the BA why we said most things were difficult.

Story writing

We had the three amigos, of course. People went to training with competent BDD practitioners. Stories were written a first time, as preparation, by the BA talking to the PO. Then the three amigos would discuss those stories and maybe write some more.

Then it would go into dev, where we would have to rewrite most of those in code; at that point things changed, and the original stories rarely survived. Except on the wall, and in JIRA, and in the head of the PO, who wasn't very involved in anything past the discussions with the BA.

Mind you, the PO was sitting next to the BA, who was sitting next to the devs. But you need all that proxying, or people would have way too much time on their hands!

Iterations and quick dev?

Oh yes. Iterations were followed. You could release twice a week, at 5am, as long as someone with the right permissions clicked the button, and was ready to manually roll back and pray in case of issues, of which there were many. So you'd release every two or three weeks, and as most features were incomplete, behind a feature switch. Oh, the joy of continuous delivery on very long cycles. I hear they call them release trains now.

What could possibly go wrong?

It's very clear that Agile has become an orthodoxy. People blindly apply certain processes, ignore improvements because they cannot tell whether something had a positive impact, and endlessly discuss which gospel to follow and why.

With all this, you'll understand the title of this series better. I claim that in most environments, the very idea of agility, with a lowercase a, has died and been buried under layer upon layer of bricks. That tomb is Agile with a big A, and a big A looks a lot like a pyramid, which happens to be a mausoleum, and the picture illustrating this series.

Clients are blinded by their belief that agile is a bunch of bricks you can stack to suddenly get results, without fundamentally changing your core practices. It's Agile, and it blinds them to the possibilities and opportunities that an agile transformation could have brought them.

Fixing stupid

We embarked on a journey to fix all this. In the rest of the series, I'll navigate you through the changes we made to become truly agile: why it was amazing, what worked and what didn't. How did it turn out for that client? You'll have to wait for our journey along the Nile to know the conclusion. All aboard!


VeST Redux – Semantic persistence

In a VeST system, we always implement at least two persistence mechanisms, the simulator and the main one: one in-memory and one going to the real system, and they must have the same visible behaviour.

We didn't want to go down the route of event-sourcing again: our model was too simple for what it provided. At the same time, an audit log of everything that happened was invaluable from the get-go, both for diagnostics, to fit with the developer self-service constraint (which I'll blog about in the future), and for customer service.

Maintaining an audit log, an entity state and messages separately would have been a mess, so our system evolved into an alternative, not dissimilar to models Udi Dahan blogged about in 2009.

An entity

For the purpose of demonstration, we'll define an entity as a domain model that can make decisions about executing things.

Let's take a trivial example. The object model is iffy, but it's for demo purposes.

public class Customer {
  decimal _balance;
  ICollection<Movie> _rentedMovies = new List<Movie>();

  public void RentMovie(Movie movie) {
    if (_balance < movie.RentalCost)
      InsufficientFunds();
    else
      MovieRented(movie);
  }

  public void InsufficientFunds() {
    // we throw because the command should not have called
    // RentMovie if it knew the customer needed to refill money
    throw new InsufficientFundsException();
  }

  public void MovieRented(Movie movie) {
    _balance -= movie.RentalCost;
    _rentedMovies.Add(movie);
  }
}


One approach to keeping logs of things is to add an ILog somewhere, spit out lots of strings, and hope someone somewhere will have a use for them.

Many a tool has been created to get back from that text format to something you can actually understand, all the way to structured log entries.

This requires you to maintain two different domain definitions of what happened, and mixes the responsibility of writing to a log with the actual work an entity ought to be doing. As you can imagine, I'm not a fan at all.


Most systems try to retrieve the state of an object, project whatever happened into its fields, and ask the persistence layer to track what changed and optimise the generic case. This approach has two main flaws. First, the persistence layer is so generic that it can only optimise state changes, without enough context to leverage the specificities of the storage, which prevents it from making the best possible decisions about concurrency. Second, the simulator becomes extremely complex to write, as it has to replicate the full feature-set of your persistence medium, be it an Entity Framework context or an NHibernate ISession.

Adding events to our entity

We ended up with a model not dissimilar to what exists in NEventStore. We rewrite our entity slightly to split the decision and the projection into two methods.

public class Customer {
  decimal _balance;
  ICollection<Movie> _rentedMovies = new List<Movie>();
  public ICollection<object> Events = new List<object>();

  public void RentMovie(Movie movie) {
    if (_balance >= movie.RentalCost)
      Raise(new MovieRentedEvent(movie));
    else
      Raise(new InsufficientFundsEvent());
  }

  void Raise<T>(T @event) {
    Events.Add(@event);
    Apply((dynamic)@event); // dispatch to the right Apply overload
  }

  public void Apply(InsufficientFundsEvent @event) {
    // nowt
  }

  public void Apply(MovieRentedEvent @event) {
    _balance -= @event.Movie.RentalCost; // the event carries the movie it was created with
  }
}

Implementing persistence

Now, our persistence layer can be as simple as persisting the existing type as-is, as we already did the projection.

public class CustomerPersister : IPersist<Customer> {
  public void Persist(Customer customer) {
    // the state is already projected; serialise and
    // store the customer document as-is
  }
}

If, for example, you wanted to update documents without concurrency control, you could optimise your document database driver implementation to only update the fields you care about. By handling each event independently, you can decide on the best concurrency strategy inside your persister.
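As a sketch of that idea, a persister can dispatch on the event type to choose how each change is written. The types below are minimal stand-ins for the article's model, and the recorded operations are placeholders for real driver calls:

```csharp
using System.Collections.Generic;

// Minimal stand-ins for the article's types (names assumed, not the real ones).
public interface IPersist<T> { void Persist(T entity); }

public class MovieRentedEvent {
  public decimal RentalCost { get; private set; }
  public MovieRentedEvent(decimal rentalCost) { RentalCost = rentalCost; }
}
public class InsufficientFundsEvent { }

public class Customer {
  public List<object> Events = new List<object>();
}

// A persister that picks a storage strategy per event type.
public class EventAwareCustomerPersister : IPersist<Customer> {
  // recorded driver operations, standing in for real database calls
  public List<string> Operations = new List<string>();

  public void Persist(Customer customer) {
    foreach (var @event in customer.Events) {
      if (@event is MovieRentedEvent) {
        // a field-level, concurrency-free decrement is enough here
        var rented = (MovieRentedEvent)@event;
        Operations.Add("inc balance by " + (-rented.RentalCost));
      } else if (@event is InsufficientFundsEvent) {
        // nothing changed, nothing to persist
      }
    }
  }
}
```

Because the persister sees the events rather than a diff of fields, it knows that a movie rental only ever decrements the balance, and can emit an increment operation instead of a full optimistic-concurrency write.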

Composition to the rescue

Once you have such an implementation, it becomes trivial to use the Russian-doll model to do additional things.

Say you want to publish all those events to a messaging library; it becomes easy.

public class MessagingPersister : IPersist<Customer> {
  IPersist<Customer> _inner;
  ISend<object> _eventPublisher;

  public void Persist(Customer customer) {
    var evts = customer.Events.ToList(); // get a copy before handing over
    _inner.Persist(customer);
    // everything went well, publish
    foreach(var @event in evts) _eventPublisher.Publish(@event);
  }
}

In the same way, you can now keep an audit log of everything that happened to an entity, by adding another level of composition that writes to a log file.
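A minimal sketch of such an audit-logging decorator (the `IPersist<Customer>` interface follows the examples above; the `writeLine` delegate stands in for a real log-file writer, and `NullPersister` is only there for demonstration):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Minimal stand-ins for the article's types (names assumed).
public interface IPersist<T> { void Persist(T entity); }
public class Customer { public List<object> Events = new List<object>(); }
public class NullPersister : IPersist<Customer> { public void Persist(Customer customer) { } }

// Russian-doll decorator: let the inner persister do its work,
// then append each event to an audit log.
public class AuditLogPersister : IPersist<Customer> {
  readonly IPersist<Customer> _inner;
  readonly Action<string> _writeLine; // stands in for a real log-file writer

  public AuditLogPersister(IPersist<Customer> inner, Action<string> writeLine) {
    _inner = inner;
    _writeLine = writeLine;
  }

  public void Persist(Customer customer) {
    var events = customer.Events.ToList(); // copy before handing over
    _inner.Persist(customer);
    foreach (var @event in events)
      _writeLine(DateTime.UtcNow.ToString("o") + " " + @event);
  }
}
```

The entity never knows it is being audited, and the audit log is derived from the same events the persister and the message publisher see, so there is only one domain definition of what happened.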

Semantic persistence

The advantage of semantic persistence over more traditional approaches is that, by providing the actual events describing what happened to the driver, it can make smarter decisions without trying to reverse-engineer the context of an operation from state changes.

In our project, we used this very efficiently to let the Mongo driver do concurrency-free updates on the parts of documents we didn't need concurrency control for, and to reimplement the same behaviour in our in-memory simulator, by retrying the operation, reapplying the BSON stream as a value in a concurrent dictionary, until it won. The two implementations couldn't have been more different, and yet for consumers and our test rig, they behaved the same.
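The in-memory side can be sketched as follows (names are hypothetical, and documents are kept as plain strings rather than BSON for brevity): a store over a ConcurrentDictionary that retries the projection until its compare-and-swap wins.

```csharp
using System;
using System.Collections.Concurrent;

// In-memory simulator sketch: an update retries until its compare-and-swap
// wins, giving the same visible behaviour as the driver's concurrency-free
// partial updates, without any of the driver's machinery.
public class InMemoryDocumentStore {
  readonly ConcurrentDictionary<string, string> _documents =
    new ConcurrentDictionary<string, string>();

  public void Update(string id, Func<string, string> apply) {
    while (true) {
      string current;
      if (!_documents.TryGetValue(id, out current)) {
        if (_documents.TryAdd(id, apply(null))) return;
      } else if (_documents.TryUpdate(id, apply(current), current)) {
        return;
      }
      // another writer won the race; re-read and reapply the projection
    }
  }

  public string Get(string id) {
    string doc;
    return _documents.TryGetValue(id, out doc) ? doc : null;
  }
}
```

Because the retry loop reapplies the projection from the freshly-read value, concurrent writers interleave rather than overwrite each other, which is exactly the observable contract the real driver offers for those fields.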


Rebuilding – CV

One of the most difficult times in the life of a contractor is having to prepare the CV, profile and all the other things you need in the hope of getting your next interview. I've only recently had to start doing that, as I'm looking for my next contract, and realised my existing CV needed a full rebuild.

The what

I decided to keep my CV in YAML form, containing all the information recruiters, clients and recruitment agents may need. I came up with the following requirements for a good CV.

  • A simple introduction of who I am
  • The list of technologies and practices I've used over the years and for which clients, in a form that can be absorbed by most recruiters' databases, and that automatically answers the sacrosanct question of how long I've used each of them
  • A highlight of previous clients, with a start date and a length for each contract, and a way to get the whole thing should the user of the CV want it
  • One HTML page that could be viewed traditionally, or in a timeline, or printed
  • Brush up on my Sass/CSS and grid systems by making it JavaScript-free
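A minimal sketch of what such a YAML source might look like. All field names and values here are hypothetical illustrations, not the actual file:

```yaml
introduction: >
  A short paragraph about who I am.
skills:
  - name: C#
    clients: [client-a, client-b]   # duration per skill derived from contract dates
contracts:
  - client: client-a
    start: 2012-03
    months: 9
    highlights:
      - A sentence about the project
```

From a single structure like this, the skill durations, the client timeline and the printable page can all be generated, which is what makes the one-source-of-truth approach attractive.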

The first iteration is now live. There is still a bunch of tweaking I want to do, and maybe some clarifying of the text for each of the projects. Role titles are deliberately omitted, as it is just easier to have that conversation over the phone, I hope. But if you have tips on how to write this stuff down without starting everything with “designed, developed and deployed”, that'd be great :)

CSS frenzy

I wanted something as pure as possible from the markup point of view. One notable thing I did was the timeline view, which is not active by default (thanks to early feedback from people). It works entirely without JavaScript, using the magic of CSS.

We start by creating a hidden checkbox, and a label for it that lets checking and unchecking happen by mouse and keyboard without the checkbox itself being visible.

<input type="checkbox" id="enableTimeline" />
<label for="enableTimeline">Timeline</label>
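One way to hide the checkbox while keeping it keyboard-operable (a sketch, assuming the markup above) is to move it off-screen rather than use display: none, which would remove it from keyboard focus:

```css
#enableTimeline {
  position: absolute;
  left: -9999px; /* off-screen, but still focusable, unlike display: none */
}
```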

We then add the styling, displaying each job as an inline-block, but only when the timeline checkbox is checked.

#enableTimeline:checked ~ .timeline .job {
  display: inline-block;
  width: 200px;
}

If you want to check out the original, it's on GitHub.