The values and principles of the Agile Manifesto are explicit about the need for frequent releases of working software, which must be iterations of the product, not simply increments of a plan made prior to commencing development. Yet the iterative approach – necessarily including re-work to features that are already delivered – often faces strong opposition from within enterprises, even those that are enthusiastically embracing Agile.
Naturally, waterfall adherents oppose iteration; but more insidious opposition comes from people who promote Agile while believing that in the “serious enterprise” you need to achieve it while following a clear plan laid out at the start of a project, and that rework means wasted time. While it’s often easy to ignore the first group once a business has made a commitment to Agile, the second group can have severe negative impacts on delivery, and on the overall perception of Agile within the organisation.
There are many good reasons to deliver software iteratively, but for me the best one is the communication barrier between software delivery professionals and their customers. For all the effort we put into bridging the gap between the delivery team’s tech expertise and the client’s domain expertise, there’s just no substitute for a hands-on demo to tease out what a customer really meant when they described a feature. Picture this scenario (sadly, one I’ve witnessed more than once):
There’s some confusion over a particularly tricky feature, so a meeting is called in order to clarify the requirements. The agenda is laser-focused, and the invite list is as short as possible while including the necessary experts and empowered decision-makers from both the client and the delivery team.
As the meeting starts, confusion reigns, but soon there’s an “Aha!” moment – the delivery team understands what the client meant, and the client understands why people from outside their business didn’t understand what they were asking for. The rest of the meeting runs like a well-oiled machine – the delivery team explains the feature in their own words, and the client can see that they now understand. Everyone is happy with the solution, and everything is written up in language agreed by the client and the delivery team, and signed off.
Some time later the delivery team excitedly showcases the completed feature to the client and gives them a chance to try it for themselves. An awkward silence falls. The client has questions: “Where is the…”, “How do we…”, “Why isn’t there…”, “Didn’t we all sit in a room a few weeks ago and agree that…”. It turns out that the “Aha!” moment was just one more misunderstanding.
What went wrong? I’m not going to play armchair-psychologist and invoke ideas like the false-consensus or Dunning-Kruger effects, but it seems clear and perhaps not all that surprising that, given sufficiently different areas of expertise, two groups can use the same words to mean different things yet believe that they share a common definition.
You can try to bridge the gap with roles that have some level of expertise in both domains, but even if you can find or train such people, that just gets you a game of telephone where the developer builds something because the systems analyst says that the business analyst says that the client says that’s what they want. For a tricky feature this approach is unlikely to succeed, and for a simple feature it’s a waste of resources.
Crucially, in the scenario above, the client recognised that the solution wasn’t appropriate pretty much as soon as they tried it. While a client may not always be able to express what they want in a way that a delivery team can clearly understand, they can generally tell when something they’re trying to use doesn’t work. In my experience this usually leads to a real “Aha!” moment, and the client can use what was implemented as a reference point to clearly express “It needs to do X” in a way that the delivery team truly understands.
Having spent a couple of weeks building a somewhat incorrect version of the solution, the delivery team now has all the information they need to build the correct version. It’s at this point that project managers, customers and other stakeholders often say “Well done everyone, we narrowly avoided disaster there, but we’ve wasted two person-weeks – what went wrong?!” To which the correct answer is “nothing went wrong, the two weeks were well-spent”.
Any extra effort you pour into up-front requirements and specification to avoid building an incorrect solution would come to nothing: once you arrive at the point where everyone falsely thinks they understand one another, people move on and start looking at other problems. So the waterfall alternative is to put more effort into up-front design, still get it wrong, and only find out much later, when much more work has been built on top of the initial incorrect assumptions.
This kind of miscommunication isn’t limited to particularly tricky features. Little misunderstandings can crop up in every aspect of an application, and remain undiscovered until a hands-on demo takes place. Even when they don’t lead to show-stopping bugs, they can mean the difference between a janky, awkward application that the client grudgingly accepts as technically meeting their requirements, and an application that is a pleasure to use.
When these sorts of communication problems occur, building the first, incorrect version of the feature and then iterating is simply the quickest way to reach a common understanding and the correct solution, so to call it wasted time is ridiculous. This kind of rework needs to be recognised and anticipated as a normal part of the operation of an Agile project, not treated as some aberrant failure.