Sunday, 8 September 2013

TOGAF++ principle 24: the strength of the monolithic

I’ve commented before on the charmingly Utopian architectural principles that The Open Group Architecture Framework (TOGAF) suggests.

For most of my systems design career, I’ve been taught that it’s a bad thing to lump lots of different functions together into one indivisible monolith of code: that it’s a good thing to separate each useful function into a distinct unit that can be re-used for new purposes: and that those functional units should have explicit interfaces that don’t couple them together permanently in a set pattern. These ideas are the essence of the Composite/Structured Design thinking that Yourdon & Constantine promoted in the 1980s. They are the foundation of the Service Oriented Architecture that Microsoft and IBM promoted in the 2000s. They appear so obviously to be true that for twenty-five years I believed them.
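
To make the contrast concrete, here is a toy sketch in Python; the order-processing functions are invented purely for illustration, not drawn from any real system.

    # Monolithic style: pricing, invoicing and shipping all live in one
    # indivisible lump of code.
    def process_order_monolith(order):
        total = sum(i["price"] * i["qty"] for i in order["items"])
        invoice = {"order_id": order["id"], "amount": total}
        label = "Ship to " + order["address"]
        return invoice, label

    # Partitioned style: each useful function is a distinct, reusable unit
    # behind an explicit interface.
    def price(items):
        return sum(i["price"] * i["qty"] for i in items)

    def make_invoice(order_id, amount):
        return {"order_id": order_id, "amount": amount}

    def shipping_label(address):
        return "Ship to " + address

    def process_order(order):
        return (make_invoice(order["id"], price(order["items"])),
                shipping_label(order["address"]))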

Then I found myself in the interesting position of being able to directly compare the adaptability of a nicely partitioned modern systems design with that of an antediluvian software monolith. Two arms of a business, in different territories and with different systems, needed to introduce exactly the same new feature, and I was the architect asked to design its introduction into both parts of the business. In the North (let’s say), the functions relevant to the new feature were split across six distinct software units, running on distinct computing platforms, with explicit and well-defined interfaces. In the South, the functions were all in one enormous system.

In the North, I spent some time identifying all the systems involved. I found the people who looked after each one, and the other people who looked after their interfaces. I held meetings with them. None of them, of course, had an overall view of what the six systems, taken together, did: so I had to piece that together from the systems owners’ often conflicting stories. Then, when I’d chosen an architectural solution that required changes to systems One, Three and Five, I had the happy task of persuading their owners, who naturally thought it far better to modify systems Two, Four and Six. We negotiated, and we reached a collectively acceptable solution in the end. It was fun, in a geeky sort of way, but it took a lot of effort from a lot of people.

In the South, I spent a few minutes identifying the one large system involved and the person who looked after it. I held a meeting with him, in which we quickly identified which bits of the system ought to be modified. No confusion. No conflicting stories. No arguments.

Not surprisingly, in this situation, the Southern architecture was perceived by Higher Management as the more flexible of the two, and the thoughtful partitioning that the Northerners had done went unappreciated.

I am not left in awe of the flexibility of monolithic code: I know how intractable it can be. But I’m left with a new understanding of the immense difficulties of getting changes done in a carefully-fragmented systems architecture: difficulties which arise from the association of human beings with computer systems: difficulties which the design evangelists seem to be unaware of.

Statement
Dividing complex functionality into discrete recombinable units may increase the cost of change.

Rationale

It is any architect’s everyday experience that the difficulty of getting a solution accepted increases more or less as the square of the number of software units (and therefore people) involved. It’s like a sinister inversion of Metcalfe’s Law: the pairwise links that give a network its value become, in a change programme, pairwise negotiations that give the change its cost.
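
A back-of-envelope sketch in Python makes the arithmetic plain (the function is mine, not TOGAF’s): among n software units there are up to n(n-1)/2 pairwise channels to negotiate across, the same count that Metcalfe’s Law applies to network links.

    # Pairwise coordination channels among n software units (and their
    # owners): n * (n - 1) / 2, the link count behind Metcalfe's Law.
    def channels(n):
        return n * (n - 1) // 2

    print(channels(1))  # 0: the Southern monolith, one owner, nothing to negotiate
    print(channels(6))  # 15: the six Northern units, up to fifteen pairwise debates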

Implications
Clearly, we can’t have all the world’s software in one Uluru of a monolith. But equally, we shouldn’t blindly seek to segregate every recombinable function.

2 comments:

Chris Garrett, Sulisco Ltd. said...

That's very well put, Clive, an excellent observation of the realities of IT change.

May I offer an alternative view? I believe that the Northern design is superior and would suggest that the problem is with ownership. Perhaps the Northerners would have provided a better service had they (or one of them) owned all layers in the stack involved in the feature(s) within their area of expertise.

I'd suggest that you had a better experience with the Southerner because the ownership model was superior, rather than the design.

I would have one person owning a feature end-to-end e.g. ownership including the web page, integration layer, interfaces, back-end and database. The layering gives the owner the agility to make changes with ease. Shared underlying components can be more challenging but can be managed successfully. Packaging software by feature rather than layer would minimise this.
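
By way of illustration (the directory names are invented), the two packagings might look like this:

    # Package-by-layer: each feature's code is scattered across
    # layer-owned packages, so every change crosses ownership lines.
    app/
        web/        orders_page.py    payments_page.py
        services/   orders_service.py payments_service.py
        data/       orders_store.py   payments_store.py

    # Package-by-feature: each feature owns its slice of every layer,
    # so one owner can change it end to end.
    app/
        orders/     page.py  service.py  store.py
        payments/   page.py  service.py  store.py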

Of course, reality depends on the culture and the IT choices of the organisation you are working with. Different models suit different organisations.

Clive Tomlinson said...

Chris is, as usual, right: it was really a problem of ownership. But still, that incident shows that the value of distributed solutions over monolithic ones is very easily overwhelmed by organizational effects.
And design patterns encourage patterns of ownership. Specifically, monolithic design encourages simple ownership, and distributed design encourages complex patterns of ownership.
:-)
