When starting a project, a conscious decision must be made about the governance and methodology to be adopted. For many people, the default choice is a serial “Waterfall” process: requirements gathering followed by design, build, hopefully a bit of testing, and finally deployment, with each phase completed before the next can start and no phase revisited once it has been signed off. This would seem to make eminent sense, as we can’t build something unless we know what we’re going to build, right? The problem is that it is rare to have a complete set of requirements up front. No two projects are ever the same and, in reality, requirements usually “appear” only once customers actually start to use the system or product they asked for (particularly when they begin to see what it can actually do). Indeed, when creating something that your customers have never seen before, it is widely accepted that hands-on experience in facilitated workshops will always trump interminable meetings spent discussing hypothetical features, captured at length in a document that is out of date as soon as it has been approved.
Another problem with Waterfall is that it does not release value back to the business until the project has finished. Nine to twelve months of spend on a project that has not yet delivered anything will pretty much guarantee a negative rate of return, so we should instead adopt a methodology that facilitates the continual delivery of prioritised iterations of functionality. Such rapid, focussed deployment can only be achieved if we flip the usual serial activities of a Waterfall implementation and run the various stages of requirements gathering, technical design, and so on as parallel activities for each deliverable (see the diagram above). In simple terms, this means collecting basic high-level requirements at the start of the project (“Enough Design Up Front”) to enable prediction of a delivery date, the likely cost (+/- 50%), and a prioritised scope, and then discovering the underlying detail as we go: each component is designed, built, and tested in time-boxed iterations using the most up-to-date business understanding of what is actually required. Such a quick turnaround of features requires focussed effort on only those activities that support delivery: for example, collecting and prioritising high-level “user stories” that each represent real business benefit, with system design documentation produced only as each feature is actually delivered.
I am not advocating a “wild west” approach whereby people just get on with whatever seems the most important piece of work at any particular moment. The “Agile” approach described above still relies on detailed planning for the current and next iterations, and on an agreed, visible, and evolving “Prioritised Requirements List”. What it does not mandate is detailed planning that extends far into the future: where requirements are not fully known or the technology not fully understood, it just isn’t possible to predict events with any degree of confidence or certainty. To mitigate risk and uncertainty, contingency is built into the prioritisation of requirements, with “must-haves” promised, “should-haves” expected, and “could-haves” unlikely but still possible. In other words, time and cost remain fixed while requirements are considered variable.
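To make the “fixed time and cost, variable scope” idea concrete, the prioritisation above can be sketched as a simple planning routine. This is a minimal illustration only: the story names, point estimates, and 20-point capacity are invented for the example, and real iteration planning involves team judgement, not a sorting function. Only the must/should/could ordering reflects the approach described in the text.

```python
# Hedged sketch: fill a fixed-capacity timebox from a Prioritised
# Requirements List, must-haves first, then should-haves, then
# could-haves. Whatever doesn't fit stays on the list for a later
# iteration -- scope flexes while the timebox stays fixed.

MOSCOW_ORDER = {"must": 0, "should": 1, "could": 2}

def plan_timebox(stories, capacity):
    """Commit stories to a timebox in MoSCoW priority order.

    `stories` is a list of (name, priority, points) tuples; `capacity`
    is the fixed number of points the team estimates it can deliver.
    """
    committed, remaining = [], capacity
    for name, priority, points in sorted(
        stories, key=lambda s: MOSCOW_ORDER[s[1]]
    ):
        if points <= remaining:
            committed.append(name)
            remaining -= points
    return committed

# Hypothetical backlog for illustration.
backlog = [
    ("export report", "could", 5),
    ("user login", "must", 8),
    ("audit trail", "should", 8),
    ("password reset", "must", 3),
    ("theming", "could", 5),
]

print(plan_timebox(backlog, capacity=20))
# -> ['user login', 'password reset', 'audit trail']
```

Note that the could-haves are squeezed out first when capacity runs short, exactly the behaviour promised to the business: must-haves delivered, should-haves expected, could-haves possible.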
This all requires a subtle change in the way we operate, as the collaboration and trust required to make Agile work can only be achieved if the usual command-and-control structure is dissolved and “blamestorming” outlawed. In addition, cost and time can no longer be considered the key metrics; in this new world of work, what matters is the actual value delivered and continual process improvement. Under Waterfall, the project manager acts as a benevolent dictator who tells people what to do, how long things should take, and often how they should be done. This is fine for a project that is merely a rehash of something done many times before (e.g. a car production line), where the project manager will have enough technical knowledge and experience to micro-manage outcomes. In an increasingly complicated world, however, no one person is likely to have all of the requisite technical knowledge to take absolute control of a project. What is needed instead is a facilitator who leaves team members free to decide, estimate, and undertake tasks autonomously: the business provides prioritised requests, the team estimates what can be achieved in the next “timebox”, and the project manager deals with anything that could stop this from happening. Team members’ experience and skills are thus used and valued, and people are left to think for themselves, making for a much more enjoyable working environment.