I had a discussion the other day about whether a sensible architecture can emerge provided that some sort of iterative development is taking place, allowing for rework to a certain extent, but without any up-front design going beyond the scope of the current iteration. As usual, all the best arguments come when you’re trying to fall asleep the following night, so this post is an a posteriori attempt to present a consolidated opinion on the matter.
First, let’s come to terms. Architecture, as a prominent international standard stipulates, is a property of a system, in contrast to the architecture description, which is a separate work product. Hence any system — as opposed to simply a random collection of items — has an architecture, regardless of the way it came into existence. This effectively dissolves the question, but it doesn’t give any practical insight. To keep the discussion relevant, I’m reframing the question as follows: can an iterative development process, without any up-front design going beyond the scope of the current iteration, produce a system with a good architecture?
To take this further, let’s adopt a simple operational definition of good architecture. It has two aspects: fitness (the system meets all expected requirements) and adaptability (the system can be adapted to meet unexpected requirements).
Now let’s paint the picture. A developer works in short (compared to the length of the development lifecycle) time-boxed iterations. In each iteration the system is made to meet a growing set of requirements, which normally, but not necessarily, happens by increasing the complexity of the system, e.g. by adding new elements or by rearranging existing ones. At the beginning of each iteration the set of requirements changes, again, normally by introducing new ones. At the end of the iteration the system is validated against the whole set of requirements accumulated so far. If the system is unfit, a step back is taken and some degree of rework is applied.
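The loop just described can be sketched in a few lines. This is a toy model, not a description of any real process: `rework` and `is_fit` are hypothetical stand-ins for whatever the developer actually does.

```python
def develop(backlog, rework, is_fit):
    """Myopic iterative development: each iteration pulls one new
    requirement in, then reworks the system until it satisfies ALL
    requirements accumulated so far. No look-ahead into the backlog."""
    system = set()               # toy system: a bag of capabilities
    accumulated = []             # requirements gathered so far
    for requirement in backlog:  # one iteration per incoming requirement
        accumulated.append(requirement)
        while not all(is_fit(system, r) for r in accumulated):
            system = rework(system, accumulated)  # step back and rework
    return system
```

Note that `rework` only ever sees the requirements accumulated so far, never the rest of the backlog — which is exactly the ‘no up-front design’ constraint in code form.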
Note that this narrative could describe agile development on both the micro- and the macro-scale. By micro-scale I mean a sole programmer or a pair working at their desk in a classical test-driven development exercise, where the system is some chunk of code, the set of requirements manifests itself as a unit test suite, and iterations are measured in minutes. Macro-scale, on the contrary, is when a team of developers works on an application as a whole, adhering to, say, a bi-weekly iteration cadence, and the set of requirements is defined as a bunch of user stories.
Does it sound familiar? Well, it does ring a bell for me. I believe this resembles natural selection as described by the Darwinian theory of evolution. Or, as von Bertalanffy might have said, there is a homology between natural selection and the incremental systems development process.
I reckon this statement requires some clarification. Unless sufficient similarity in underlying principles is demonstrated, it could be regarded as merely a superficial analogy. My considerations are as follows. In both cases there is a continuous adaptation process seeking to achieve fitness of the ‘solution’ within a gradually changing set of constraints. A system being developed has to meet the requirements; a population has to survive in a hostile environment. Furthermore, in both cases the progress could be described as ‘myopic’, i.e. no consideration is given to the future beyond the immediate necessity. Developers do not anticipate upcoming requirements — this is the constraint we put in place; natural selection doesn’t think at all (no, I don’t believe in ‘intelligent design’ as far as the origin of species is concerned). Finally, in both cases unfit outcomes fail to propagate themselves into the next cycle. If the system fails to meet the revised requirements, it gets changed until it does; a struggling population either evolves or gives way to fitter species.
Therefore, I believe the systemic principle underlying natural selection applies to iterative development as well. Consequently, certain things we know about the former may be applicable to the latter.
What we know about natural selection is that it is myopic. It optimises only locally, selecting among the nearest options at each step. Our own organisms are compendiums of fascinating examples of what happens when ‘an architecture’ is left to ‘emerge’. To illustrate the point:
The human genome includes five copies of the gene that produces beta-globin [the protein responsible for the oxygen-carrying capacity of blood – IL]. In the middle of these genes is a stretch of genetic code that clearly was once a sixth beta-globin gene. But this so-called pseudogene now contains mistakes that prevent it from producing RNA that can be transcribed into beta-globin protein.
Here we witness not only code duplication but dead code branches as well. And, arguably, DNA has quite decent code coverage in the ‘integration testing’ of ontogeny — the development of an individual organism. Another example:
[The] light-sensing structure [of vertebrate eyes — mammals, reptiles, etc.], the retina, is wired up back-to-front, with the light-sensitive cells behind the nerves and blood vessels that support it. Not only does light have to pass through this layer first, obscuring the image, but the nerves and blood vessels have to dive through the retina, creating a blind spot in each eye. In cephalopods, such as squid and octopuses, the eyes are built the “right” way around, so why not in vertebrates too? The answer is that when eyes first evolved in the ancestors of modern vertebrates, the retina arose from an infolding of the developing brain, and the cells that could form light receptors happened to end up on the inside of this fold. “Once you have done something like this it’s very hard to change,” says Michael Land, a specialist in eye physiology at the University of Sussex, UK.
Contemplate that iconic, oh-so-familiar halfway house of being stuck mid-way through a gradual transition, like moving from EJB 2.1 to 3.0 with spells of 3.1. (In fact, all these examples are the reason why I don’t believe in intelligent design of species — the concept of an omnipotent and omniscient God might have its merits, but the concept of a God making smelly designs is laughable.)
The lesson for us, I guess, is that emergent architectures definitely can come up with an innovative solution, but they do an awful job of cleaning up the leftovers of the past. It simply becomes too expensive to switch completely to a better solution at once within the constraints of a single iteration. The more complex the solution, the bigger your evolutionary — or, as software engineers call it, technical — debt grows. And, as we all know too well, as the technical debt grows, progress gets slower and the architecture becomes less adaptable.
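The myopic optimisation at the heart of both processes can be sketched as a hill climb that halts at the nearest local peak. The fitness landscape and step function below are made up purely for illustration:

```python
def hill_climb(x, fitness, neighbours):
    """Purely local optimisation: move to the fittest neighbour until
    no neighbour improves on the current position. Like natural
    selection, it never looks further than one step ahead."""
    while True:
        best = max(neighbours(x), key=fitness, default=x)
        if fitness(best) <= fitness(x):
            return x    # a local optimum, not necessarily the global one
        x = best

# A toy one-dimensional landscape: a minor peak at x=2 (fitness 3),
# a valley, and the global peak at x=8 (fitness 10).
landscape = {0: 0, 1: 2, 2: 3, 3: 1, 4: 0, 5: 1, 6: 4, 7: 7, 8: 10}
step = lambda x: [n for n in (x - 1, x + 1) if n in landscape]
```

Started at x=0, the climb halts at the minor peak x=2 and never crosses the valley to the far higher peak at x=8 — rather like being stuck with a back-to-front retina because every intermediate rewiring would be worse.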
The bottom line: an architecture will inevitably emerge if continuous rework is applied to the system under development. It will not necessarily be the optimal architecture, but it will be fit for purpose. The inherent problem with emergent architectures is that technical debt tends to accumulate as the system naturally evolves. This reduces the system’s adaptability, up to the point of impeding any further progress. Hence, there is a limit to the complexity of a system that it is safe to leave at the mercy of an emergent architecture.