This article is part of my online newsletter, the Scrum Addendum. If you enjoyed this article, you should sign up for the complete series.
As my understanding of Agile software development has increased, so has the conflict between my established ideas and what I see happening in the real world. The latest casualty was my belief that repeated code or components are always bad. My thoughts were challenged by a conversation with my friend Ken Everett. He described the logging scenario that I discuss in this article and argued that a large team using more than one logging framework was not a bad thing, provided that the code quality was good and it delivered the business functionality. I absolutely agree. But I also wondered: when is the latest responsible moment at which such a decision needs to be made?
My views on Agile change the more I learn. I was greatly influenced by Bertrand Meyer’s book Object-Oriented Software Construction, where he presented the case for reusable software components. It could be argued that reusable components are with us now in the form of large frameworks, but my understanding of his vision (reuse of fine-grained components) never fully came to pass.
The Argument for Reusable Components
“99 bottles of beer on the wall, 99 bottles of beer! Take one down, pass it around, 98 bottles of beer on the wall!”
When writing software, we often optimise to reduce redundancy. This includes reusing common components such as EJB containers, databases, and frameworks. Removing duplication helps reduce the complexity of the software and hence lowers the cost of change. There is good reason for this: if a change is needed, it makes sense to make the update in a single location rather than in multiple locations.
Scrum teams are cross-functional and capable of making independent decisions. So, how do you prevent team members from reproducing functionality? This question can also be asked in a larger context: if you have a large team working as a Scrum of Scrums, how do you prevent two teams from re-implementing the same solution multiple times?
Let’s consider a common example. Many mission-critical applications require comprehensive logging. If we consider only Java frameworks, there are several very good ones to choose from, including Log4J, the Java Logging API, and Commons Logging.
This leads to some questions:
- What is to stop two separate teams working on the same product from adopting different logging frameworks?
- Is duplication necessarily a sub-optimal solution?
- If not, then when is it appropriate and when is it not?
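To make the duplication concrete, here is a minimal sketch, with hypothetical class and service names (InventoryService, BillingLog are illustrative, not from any real project): two teams on the same product log equivalent events through different mechanisms, one via the JDK’s java.util.logging and one via a home-grown logger standing in for a second framework such as Log4J.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class DuplicatedLogging {

    // Team A standardises on the JDK's java.util.logging.
    static class InventoryService {
        private static final Logger LOG = Logger.getLogger("inventory");
        String reserve(String sku) {
            LOG.log(Level.INFO, "reserved {0}", sku);
            return "reserved:" + sku;
        }
    }

    // Team B rolls its own minimal logger (standing in for a second
    // framework such as Log4J) with its own level and message conventions.
    static class BillingLog {
        static String format(String level, String msg) {
            return "[" + level + "] billing - " + msg;
        }
    }

    static class BillingService {
        String charge(String account) {
            System.out.println(BillingLog.format("INFO", "charged " + account));
            return "charged:" + account;
        }
    }

    public static void main(String[] args) {
        // Two implementations of the same concern now live in one product.
        new InventoryService().reserve("sku-1");
        new BillingService().charge("acct-9");
    }
}
```

Both services work and both meet the business need; the duplication only becomes a cost when a cross-cutting change touches both conventions at once.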
The Principle of Postponement
“The concept of postponement is increasingly drawing the attention of researchers and practitioners. Postponement means delaying activities in the supply chain until customer orders are received with the intention of customizing products, as opposed to performing those activities in anticipation of future orders.” – R.I. van Hoek
In software development, the principle of postponement refers to leaving “irreversible” decisions until the last responsible moment. “Irreversible” decisions are costly to undo, so they should be made only when there is sufficient evidence, or when not making a decision is even more costly.
The costs and benefits associated with postponement are difficult to quantify, and data are often unavailable. There is, however, research suggesting a positive relationship between the implementation of postponement and company performance. Interestingly, there is also a significant relationship between environmental uncertainty and postponement.
How do Agile (and Lean) software teams apply postponement to software architecture and design? Simply by allowing alternative architectures to be explored, and eliminating alternatives only when it becomes clear that it is more cost-effective to do so. More explicitly, this approach would encourage different solutions [by different teams] to the same problem.
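One way to sketch such postponement in code, under the assumption that the team is willing to own a small logging abstraction (the Log and ConsoleLog names below are illustrative): application code depends only on an application-owned interface, so each team can explore its own binding while the final framework decision is deferred.

```java
public class PostponedLogging {

    // The only logging API application code is allowed to see.
    // The choice of concrete framework behind it is postponed.
    interface Log {
        void info(String msg);
    }

    // One possible binding; a Log4J- or java.util.logging-backed
    // implementation could replace it later without touching callers.
    static class ConsoleLog implements Log {
        private final StringBuilder captured = new StringBuilder();
        public void info(String msg) {
            captured.append("INFO ").append(msg).append('\n');
            System.out.print("INFO " + msg + "\n");
        }
        String contents() { return captured.toString(); }
    }

    // Application code is written against the interface only.
    static String doWork(Log log) {
        log.info("work started");
        return "done";
    }

    public static void main(String[] args) {
        doWork(new ConsoleLog());
    }
}
```

The interface keeps the decision reversible: swapping the binding later is a local change rather than a sweep through every call site.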
The Cost of Change
Postponement directly contradicts many of the arguments for reuse; the most interesting is the cost of change. So, how do you reduce the cost of change when implementing (potentially) many different solutions to the same problem? Agile software development projects solved this particular problem some time ago with the practices of Continuous Integration, Test-Driven Development, Ruthless Refactoring, and Pair Programming. Kent Beck’s book examines the cost of change in some detail, and I won’t try to reproduce his work here.
Agile teams that have been disciplined about TDD and have a comprehensive unit-test suite should not fear code change.
The Latest Responsible Moment
Finally, we come to the question that I originally sought to examine: when is the latest responsible moment? Every situation is different, and there is no cut-and-dried answer. Assuming that the project team is using Agile practices (as mentioned above) to help reduce the cost of change, the decision on which logging framework to use can be postponed for quite some time.
I would, however, like to offer a guideline that can help make the decision. The point at which we need to make a decision is when we first incur significant work (or rework) in maintaining multiple architectures. This will usually arrive in the form of a request for additional functionality; for example, logging a new metric. At this point the team should consolidate the architecture before adding the new functionality.
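Continuing the logging example, here is a hedged sketch of that consolidation step (all class names are hypothetical): when the new metric requirement arrives, an adapter first funnels the second team’s call sites onto the framework the teams converged on, here java.util.logging, so the new functionality is then implemented exactly once.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class ConsolidatedLogging {

    // The logging API the second team's code already calls (hypothetical).
    interface BillingLog {
        void info(String msg);
    }

    // Adapter: keeps the existing call sites compiling while routing
    // everything through java.util.logging, the agreed framework.
    static class JulBillingLog implements BillingLog {
        private static final Logger LOG = Logger.getLogger("billing");
        public void info(String msg) { LOG.log(Level.INFO, msg); }
    }

    // With one consolidated architecture, the new requirement
    // (logging an elapsed-time metric) is added in a single place.
    static String metricLine(String name, long millis) {
        return String.format("metric %s=%dms", name, millis);
    }

    public static void main(String[] args) {
        BillingLog log = new JulBillingLog();
        log.info(metricLine("charge.elapsed", 42));
    }
}
```

The adapter is the cheap part; the guideline above says to pay that cost only now, when the metric request makes duplicate rework concrete.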
“Object-Oriented Software Construction”, by Bertrand Meyer
“The rediscovery of postponement: A literature review and directions for research”, by R.I. van Hoek, Journal of Operations Management #19, 2001
“The Application of Postponement in Industry”, by Biao Yang, Neil D. Burns, and Chris J. Backhouse
My article and screencast of “Test Driven Development with Ruby”.
“Extreme Programming Explained”, by Kent Beck