If you work in software – or even if you don’t – it’s likely that, at some point, you’ll find yourself working with a team who are completely unfamiliar with your systems. A management consultancy, a development partner, a new supplier. A team of smart, capable people who are hoping to work with you to deliver something, whether that’s process improvements or a reduction in costs or some shiny new product.
A common pattern here is that, at the start of the engagement, they’ll appoint some business analysts to spend a whole lot of time talking with people from your organisation to get a better idea of what it is you do. They sometimes call this ‘gathering requirements’. You’ll know it’s happening when you get half-a-dozen invitations to four-hour ‘workshops’ from somebody you’ve never met, normally with a note saying something like ‘Hey everyone! Let’s get these in the diary!’
Now, there’s a problem here. Asking people what’s happening is almost never the best way to find out what’s actually happening. You don’t get the truth, you get a version of the truth that’s been twisted through a series of individual perspectives, and when you’re using these interviews to develop your understanding of an unfamiliar system, this can lead to an incredibly distorted view of the organisation. Components and assemblies aren’t ranked according to cost, or risk, or complexity. They’re ranked according to how many hours a day somebody spends dealing with them. And when you consider that in these days of scripted infrastructure and continuous deployment, a decent engineer can provision an entire virtual hosting environment in the time it takes to deal with one customer service phone call, what you end up with is a view of your organisation that ranks ‘phone calls’ equal to ‘hosting environment’ in terms of their strategic value and significance.
When you factor in the Dunning-Kruger effect, the errors and omissions, the inevitable confusion about naming things, and the understandable desire to manage complexity by introducing abstractions, you can end up with a very pretty and incredibly misleading diagram that claims to be a ‘high-level view’ of an organisation’s systems.
There’s a wonderful example of this in neurology – a thing called the ‘cortical homunculus’; a distorted representation of the human body where the various parts of the body are magnified based on the density of nerve endings found therein. Looks like this:
It’s recognisably human, sure. But it’s a grotesque distortion of what a human being actually looks like – brilliant for demonstrating neurology, but if you used it as a model when designing clothes or furniture your customers would be in for one hell of a shock. And we know it’s grotesque, because we know what human beings are supposed to look like – in fact, it’s the difference between the ordinary and the grotesque that makes these cortical homunculi interesting.
The problem with software is that it’s made out of invisible electric magic, and the only way to see it at all is to rely on some incredibly coarse abstractions and some very rudimentary visualisation tools.
Imagine, for one second, that we’ve hired some consultants to help us design an aircraft. They send over some business analysts, and book some time with the ‘domain experts’ to talk over the capabilities of the existing system and gather requirements. The experts, of course, being the pilots and cabin crew – which, for a Boeing 747 like Ed Force One, is three flight crew and somewhere around a dozen cabin attendants. They spend a couple of very long days interviewing all these experts; maybe they even have the opportunity to watch them at work in the course of a typical flight.
And then they come up with this: the high-level architectural diagram of a long-range passenger airliner:
Now, to an average passenger, that probably looks like a pretty reasonable representation of the major systems of a Boeing 747. Right? Take a look. Can you, off the top of your head, highlight the things that are factually incorrect?
That’s why this diagram is dangerous. It’s nicely laid out and easy to understand. It looks good. It inspires trust… and it’s a grotesque misrepresentation of what’s actually happening. Like the cortical homunculus, it’s not actually wrong, but it’s horribly distorted. In this case, the systems associated with the cabin attendants are massively overrepresented – because there are twelve of them, as opposed to three flight crew – so four times the workshop time and four times the anecdotal insight. The top-level domains – flight deck, first class, economy class – are based on a valid but profoundly misleading perspective on the systems architecture of an airliner. The avionics and flight control systems are reduced to a footnote based on three days of interviews with the pilots, somebody with a bit of technical knowledge has connected the engines to the pedals (like a car, right?) and the rudder to the steering wheel (yes, a 747 does have a steering wheel), the wings are connected to the engines as a sort of afterthought…
Now, when the project is something tangible – like an office building or a bridge or an airliner – it won’t take long at all before somebody goes ‘um… I hate to say it, but this is wrong. This is so utterly, totally wrong I can’t even begin to explain how wrong it is.’ Even the most inexperienced project manager will probably smell a rat when they notice that 20% of the budget for a new transatlantic airliner has been allocated to drinks trolleys and laminated safety cards.
But when the project is a software application – you know, a couple of million moving parts made out of invisible electronic thought-stuff that bounce around the place at the speed of light, merrily flipping bits and painting pixels and only sitting still when you catch one of them in a debugger and poke it line-by-line to see what it does – that moment of clarity might never happen. We can’t see software. We don’t know what it’s supposed to look like. We don’t have any instinct for distinguishing the ordinary from the grotesque. We rely on lines and rectangles, and we sort of assume that the person drawing the diagram knew what they were doing and that somebody else is looking after all the intricate detail that didn’t make it into the diagram.
And remember, nobody here has screwed up. The worst thing about these kinds of diagrams is that they’re produced by competent, honest, capable people. The organisation allocates time for everybody to be involved. The stakeholders answer all the questions as honestly as they can. The consultants capture all of that detail and synthesise it into diagrams and documents, and everybody comes away with the satisfying sense of a job well done.
That’s not to say there’s no value in this process. But these kinds of diagrams are just one perspective on a system, and a single perspective is dangerous unless you have a bunch of other perspectives to provide a basis for comparison. A conceptual model of a Boeing 747 based on running cost – suddenly the engines are a hell of a lot more important than the drinks trolley. A conceptual model based on electrical systems. Another based on manufacturing cost. Another based on air traffic control systems and airport infrastructure considerations. And yes, producing all these models takes a lot more than arranging a week of interviews with people who are already on the payroll, which is why so many projects get as far as that high-level system diagram and then start delivering things.
And why, somewhere in your system you almost certainly have the software equivalent of a hundred-million-dollar drinks trolley.
Thank you for flying Analysis Airways.