Monday 17 August 2009

We Interrupt This Program...

We interrupt this tutorial to interject an observation about goals, methods and promises. Goals we have for ourselves as people and as professionals; the methods we use to pursue those dreams; perhaps most importantly, the promises we make, both to ourselves and to our customers, about what we're doing and why.

I consider this after reading the website of some consultants who've done some (relatively nice, relatively low-key) link-dropping on LinkedIn. I'm not naming them here - not because I don't want to draw attention to them (their own site is very clean and well done), but because the point I'm going to make isn't limited to them: we, as a craft and an industry of Web development, have some Serious Problems™.

The "problem" is nicely summarized by this group's mission statement:

Our mission is to produce the perfect implementation of your dreams.

What could possibly be wrong with that?

As a goal, implied but left unspoken, absolutely nothing; both as practitioners and as clients, we tend to set ever-higher goals for ourselves. Indeed, that's the only way the "state of the art" - any "art" - can advance. But we who practice the art and craft of software (including Web) development (as opposed to the engineering discipline of hardware development) have a history of slashed-beyond-reality schedules and budgets, coupled with a tendency for stakeholders not to hear "if all goes well" as a condition attached to our latest schedule estimate. We have a history, perceived and actual, of promising more than we can deliver. Far more attention is paid by non-technical people to the "failures" and "broken promises" of software than to things done right. For a craft whose work carries ever-greater public-policy and public-safety implications, the effect of unrealistic expectations - brought about by poor communication, and by technical decisions being made by people who aren't just technically ignorant but proud of the fact - is disturbing. What started as a slow-motion train wreck has now achieved hypersonic speed, and represents a clear and present danger to the organisational health and safety of all stakeholders.

I don't mean to say that projects always fail, but an alarming number of them do. If, say, dams or aircraft were built with the same overall lack of care and measurable engineering precision that is the norm in commercial software development, we'd have a lot more catastrophic floods, and a lot fewer survivors fleeing the deluge by air. When I entered this craft thirty years ago (last May), I was soon led to believe that we were thirty to fifty years away from seeing a true profession of "software engineering". Even taking that as a time frame beginning now, in 2009, I think it is almost laughably optimistic.

Why have things gotten worse when we as a tool-building and -using society need them to get better? Some people blame "The Microsoft Effect": shipping software of usually dubious quality to consumers (as opposed to 'customers') who have bought into the (false) idea that they have no realistic choice.

It's more pervasive than that; commercial software development merely reflects the fashion of the business "community" that supports it, which has bought into one of the mantras of Guy Kawasaki's "The Art of Innovation", namely "don't worry, be crappy." Not that Kawasaki is giving bad advice, but his precondition is being ignored just as those of other software people have been: the key sentence in his "don't worry, be crappy" paragraph is "An innovator doesn't worry about shipping an innovative product with elements of crappiness if it's truly innovative" (emphasis mine).

In other words, if you really are going to change the world, nobody will notice if your Deus ex Machina 1.0 has clay feet, as long as you follow up quickly with a 1.1 that doesn't...and follow that with a 2.0 that changes the game again. But that jump from 1.0 to 1.1 has to happen fast, Kawasaki argues (in the next paragraph, titled "Churn, Baby, Churn"), and the version after that has to come along before people (like possible competitors) start saying things like "well, he just brought out 1.1 to fix the clay feet in 1.0." If customers see that you're bringing out new versions as fast as they can adapt to the previous ones, and that each new version is a vastly superior, revelatory experience compared to the release they were already delighted by, they'll keep giving you enough money to finish scaling the "revolutionary" cliff and take a (brief) rest with "evolutionary" versions.

Business has not only forgotten how important that whole process is to its continued survival; it has also stripped its bespoke software (and Web) infrastructure of the capability to use and reuse that model. All that remains is "it's OK if we ship crap; so does everybody else." That's the kind of thinking that made General Motors the world-bestriding Goliath it is today - as opposed to the wimpy also-ran it emphatically was not half a century ago. We really don't need any more businesses going over that sort of cliff.

What we do need, and urgently, are two complementary, mutually dependent things. We need a sea change in the attitude of (most) businesses, even technology businesses, towards software: a recognition that the Pointy-Haired Boss is not merely a common occurrence in the way business manages software, but an active threat to the success of any project (and business) so infested. Just as businesses must at some point realise that "paying any price to cut costs" is a threat to their own survival, they need to apply that realisation to their view of, and dealings with, the technical infrastructure that increasingly enables their business to function at all.

Both dependent on that change and an enabler of it, the software and Web development industry really needs to get its house in order. We need to get away from the haphazard, by-guess-and-by-golly estimation and monitoring procedures used by the majority of projects (whose elaborate Microsoft Project plans and PowerPoint decks bear less and less resemblance to reality as the project progresses), enforce use of the tools and techniques that have been proven to work, and mount an organised, structured effort to research improvements and New Things.

Despite what millions of business cards and thousands of job advertisements the world over proclaim, there is no true discipline of "software engineering", any more than there was "oilfield engineering" in widespread use before the New London School explosion of 1937. Over 295 people died in that blast; we have software-controlled systems that, should they fail, could hurt or kill many more - or cause significant, company- or industry-ruinous physical damage. We should not wait for such an event before "someone" (at that point, almost certainly an outside governmental or trans-governmental entity) says "These are the rules."

While I understand and agree with the widespread assertion that certification tests in their present form merely demonstrate an individual's ability to do well on such tests, we do need a practical, experiential system - probably one modelled on the existing systems for engineering, law or medicine. Not that people should work 72-hour shifts; there's enough of that already. Rather, there should be a progression of steps from raw beginner to fully-trusted professional, with a mix of educational and experiential ingredients required to ascend that progression, and continuing education and certification throughout one's entire career.

The cost of this is going to have to be accepted as "part of the system" by business; if business wants properly competent engineers, and not just the latest boatload of unknowns with mimeographed vendor certs, then it will have to realise that that benefit does not come without cost to all sides. The free ride is over - for all the stakeholders at the table.
