I’ve commented before on the charmingly Utopian architectural principles that The Open Group Architecture Framework (TOGAF) suggests.
For most of my systems design career, I’ve been taught that it’s a bad thing to lump lots of different functions together into one indivisible monolith of code; that it’s a good thing to separate each useful function into a distinct unit that can be re-used for new purposes; and that those functional units should have explicit interfaces that don’t couple them together permanently in a set pattern. These ideas are the essence of the Structured Design thinking that Yourdon & Constantine promoted in the late 1970s. They are the foundation of the Service Oriented Architecture that Microsoft and IBM promoted in the 2000s. They appear so obviously true that for twenty-five years I believed them.
Then I found myself in the interesting position of being able to directly compare the adaptability of a nicely partitioned modern systems design with that of an antediluvian software monolith. Two arms of a business, in different territories and with different systems, needed to introduce exactly the same new feature, and I was the architect asked to design its introduction into both parts of the business. In the North (let’s say), the functions relevant to the new feature were split across six distinct software units, running on distinct computing platforms, with explicit and well-defined interfaces. In the South, the functions were all in one enormous system.
In the North, I spent some time identifying all the systems involved. I found the people who looked after each one, and the other people who looked after their interfaces. I held meetings with them. None of them, of course, had an overall view of what the six systems, taken together, did: so I had to piece that together from the systems owners’ often conflicting stories. Then, when I’d chosen an architectural solution that required changes to systems One, Three and Five, I had the happy task of persuading their owners, who naturally thought it far better to modify systems Two, Four and Six. We negotiated, and we reached a collectively acceptable solution in the end. It was fun, in a geeky sort of way, but it took a lot of effort from a lot of people.
In the South, I spent a few minutes identifying the one large system involved and the person who looked after it. I held a meeting with him, in which we quickly identified which bits of the system ought to be modified. No confusion. No conflicting stories. No arguments.
Not surprisingly, in this situation, the Southern architecture was perceived by Higher Management as the more flexible of the two, and the thoughtful partitioning that the Northerners had done went unappreciated.
I am not left in awe of the flexibility of monolithic code: I know how intractable it can be. But I’m left with a new understanding of the immense difficulties of getting changes made in a carefully fragmented systems architecture: difficulties which arise from the association of human beings with computer systems, and of which the design evangelists seem to be unaware.
Dividing complex functionality into discrete recombinable units may increase the cost of change.
It is any architect’s everyday experience that the difficulty of getting a solution accepted increases more or less as the square of the number of software units (and therefore people) involved. It’s like a sinister inversion of Metcalfe’s Law.
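That quadratic growth can be made concrete with a back-of-envelope model (my own sketch, not from the original article): if every pair of system owners must reach agreement, the number of coordination links among n units is n(n−1)/2, which grows as the square of n.

```python
# Toy model of coordination cost, assuming the effort of agreeing a
# change scales with the number of pairwise links between the owners
# of the software units involved.

def coordination_links(units: int) -> int:
    """Pairwise links among `units` owners: n(n-1)/2, i.e. O(n^2)."""
    return units * (units - 1) // 2

for n in (1, 2, 6):
    print(f"{n} unit(s): {coordination_links(n)} coordination links")
# 1 unit(s): 0 coordination links
# 2 unit(s): 1 coordination links
# 6 unit(s): 15 coordination links
```

On this crude measure, the South’s single system required no cross-owner negotiation at all, while the North’s six units implied fifteen potential owner-to-owner negotiations: the same arithmetic that makes Metcalfe’s Law attractive for networks works against you when every link is a meeting.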
Implications

Clearly, we can’t have all the world’s software in one Uluru of a monolith. But equally, we shouldn’t blindly seek to segregate every recombinable function.