I bet you have no idea who Kevin Roebuck is. Neither did I, until I stumbled upon a fantastic title on Amazon: Data Quality: High-impact Strategies – What You Need to Know: Definitions, Adoptions, Impact, Benefits, Maturity, Vendors, which consists of nothing more than (surprise, surprise) copies of Wikipedia articles.
After a quick check I found that the guy has managed to ‘author’ 376 books (and counting!) so far, and all of them (as far as I’ve checked) follow the same pattern: a seemingly random collection of Wikipedia articles around a particular theme. These people are so complacent that they even let you peer inside the volume to make sure that there’s nothing but copy-paste.
There are a lot of integration approaches and standards out there. To make sense of the variety, I distinguish three successive generations of such standards. As far as I’m aware this classification is not commonly accepted, but then I’m not aware of any other sensible, commonly accepted classification either, so I see mine as credible enough for casual use.
The first generation is, unsurprisingly, the oldest one. Standards of this generation emerged in the dark ages of distributed computing, when merely sending chunks of bytes around in a reliable way was considered an achievement. Thanks to this byte-centric perspective, those standards were concerned mostly with structuring the chunks: what the data layout is, what the fields are, which delimiters are used, and so on. Hence, I collectively refer to standards of the first generation as syntax-based integration standards.
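To make the byte-centric mindset concrete, here is a minimal sketch of what a first-generation, syntax-only message format looks like in practice. The layout below (field widths, offsets, message types) is entirely my own invention for illustration, not taken from any particular standard; the point is that the ‘standard’ defines nothing but byte layout:

```python
import struct

# A hypothetical first-generation record layout. The standard specifies
# only syntax: field offsets, widths, byte order and padding.
# Header: 4-byte big-endian body length, 2-byte message type.
# Body:   8-char space-padded account id, 4-byte signed amount in cents.
HEADER = struct.Struct(">IH")   # length, type
BODY = struct.Struct(">8si")    # account id, amount

def encode(msg_type, account, cents):
    body = BODY.pack(account.ljust(8).encode("ascii"), cents)
    return HEADER.pack(len(body), msg_type) + body

def decode(raw):
    length, msg_type = HEADER.unpack_from(raw, 0)
    account, cents = BODY.unpack_from(raw, HEADER.size)
    return msg_type, account.decode("ascii").rstrip(), cents

raw = encode(7, "ACME", 1250)
print(decode(raw))  # (7, 'ACME', 1250)
```

Note that nothing here says what an ‘account’ or an ‘amount’ means; agreeing on that is exactly what the later generations had to address.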
Complexity And Postmodernism: Understanding Complex Systems
Disclaimer: I’m not a philosopher, nor am I a physicist, so I can’t really appreciate the merits of the harsh reviews on Amazon written from those perspectives. But I do consider myself a specialist in enterprise software and a systems engineer, so what follows is probably relevant only to people of my trade.
Solutions requiring more than a single implementation technology can be tricky. An enterprise software project can easily involve 4–5 technologies. You could have a team working on a Flex front-end, another team developing a Java-based backend, an integration team armed with BPEL and numerous connectors to external systems, infrastructure folks who are supposed to come up with the specifics of the actual deployment, and a couple of contractors looking after a legacy MUMPS application whose services have to be exposed as part of the resulting solution. Throw in a COTS application that has to be adapted to cut development cost, and a mission-critical deadline set to the 1st of January, when a new revision of the regulatory requirements comes into play. Quite a challenging environment to work in.
Apparently, the Systems Engineering discipline has faced the same challenge since its inception. Even a modest space mission requires coordinating the efforts of engineers covering a dozen specialities between them: jet propulsion, optics, telemetry, electrical engineering, and ground support, to name a few. All these people, however smart, don’t really understand each other’s domains. There was a clear need to partition the whole mission scope into work items that could be given to a specialist team, and the result still had to fly well. Sound familiar?
The answer was a thing called the design loop. Essentially, it interleaves analysis and design activities in order to settle on a particular physical architecture. Its prerequisite is a requirements analysis exercise, which yields a set of functional and non-functional system requirements (a different story altogether), but the design loop itself proceeds as follows.
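The interleaving of analysis and design can be sketched as an iteration that runs until the candidate architecture satisfies the requirements. This is a heavily simplified illustration in Python pseudo-structure; all function names and the feedback mechanism are my own placeholders, not Systems Engineering vocabulary from any standard:

```python
def design_loop(requirements, evaluate, decompose, allocate, max_iterations=10):
    """Iterate analysis and design until a physical architecture
    satisfies the requirements (an illustrative sketch)."""
    architecture = None
    for _ in range(max_iterations):
        functions = decompose(requirements)       # functional analysis
        architecture = allocate(functions)        # synthesis: map functions to components
        shortfalls = evaluate(architecture, requirements)
        if not shortfalls:
            return architecture                   # the design has converged
        requirements = requirements + shortfalls  # feed findings back into analysis
    raise RuntimeError("design did not converge")
```

The key property the sketch tries to capture is the feedback edge: evaluating a candidate physical architecture produces new inputs for the next round of analysis, rather than the process being a single forward pass.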
Lean Architecture for Agile Software Development
James O. Coplien, Gertrud Bjørnvig
Wiley 2010 (1st edition)
The Agile movement, for good or bad, is getting more academic traction. This book is academic in the good sense: plenty of in-depth analysis and thought-provoking insights on the convergence of Agile and Lean. Praxis is not forgotten either: there are examples and even code snippets, a nerdy paradise.
Apparently there is a lot of confusion over the differences between the various architecture roles. I’m repeatedly asked about them in diverse contexts, and I’ve found that I keep reiterating the same arguments and replying to the same set of objections. Therefore, I decided to capture my understanding of the matter to refer interested parties to, and there’s no better place to do it.
I’ve put together a simple sense-making framework to illustrate the difference between these three roles. It is definitely not the universal truth, but the framework has worked in the context of enterprise IT across all the industries I’ve worked in so far: e-commerce, finance, telco, and public transport. My exposure to IT consultancy is too limited for me to be confident there, but I suspect there are matching points as well.
The following diagram illustrates the framework and captures the essence of three well-known architecture roles: Technical Architect, Solutions Architect, and Enterprise Architect.
(diagram licensed as CC-BY)
I’ve just finished reading the Lean Architecture masterpiece by James Coplien et al. A review is to follow, but the book gives such a refreshing perspective on the essence of Lean that I can’t resist formulating my own view on the idea in the light of what I’ve just read.
I couldn’t agree more when the authors contrast Lean with Agile. On the one hand, Lean per se is about eliminating waste, and rework is a notorious manifestation of waste, along with defects, excessive work-in-progress, unnecessary movement, etc. On the other hand, Agile practitioners clearly state that responding to change is superior to following a plan. Moreover, new and changed requirements are welcomed, even late in development, at the expense, of course, of rework.
Arguably, the most prominent and well-developed pillar of Agile is being able to respond to change. Even a naive application of prioritized backlog planning, short time-boxed iterations and frequent releases with demonstrations can do wonders in many situations. However, this feedback-driven development leads to major rework as the price you pay for accommodating changing requirements; rework itself doesn’t contribute to the value stream, and hence it isn’t Lean. This is exacerbated when one needs to scale Agile development beyond a small project developed by a small co-located team. A common belief holds that you can always refactor later and introduce a solid architectural foundation that would enable less wasteful changes, but it is a myth: time ‘hardens’ systems. The authors quote a fascinating metaphor that compares architectural refactoring to stirring cement: eventually it sets, and you can’t stir it anymore.