Tuesday 18 September 2007

Improvement as opposed to Change

A link on the Agile CMMi blog had some very interesting things to say about a gentleman named Brian Lyons, the CEO (as well as CTO and founder) of Number Six, a Vienna, Virginia (Washington, DC area) software technology services/business consulting firm. Hillel Glazer, the blog's author, wrote that Number Six quite obviously 'get' the concepts behind Agile CMMI. Interested, I flipped over to their site hoping to learn a bit more, and maybe drop off a CV.

Mr. Glazer wrote the blog entry in question on 25 July. The first thing you notice when viewing the Number Six site is the home-page obituary of and tribute to Brian Lyons, who died on (Monday) 3 September 2007 in a motorcycle accident. There are various links to family and tribute sites, a press release, and a request that "in lieu of flowers, the family has requested that donations be made to" a scholarship fund at the University of Maryland (excellent, particularly for those too far or too late to attend the funeral). I wish the family and company all the best in their times of grief and trouble.

The point I originally started writing this entry to make, however, was inspired by one of the bullets on Number Six's careers page, under the heading "Why Six?": "We are committed to consistent improvement, not continual drastic change."

Think about that distinction for a moment. Many of us, particularly those of the software-geek persuasion, start our careers trying to remake everything; not necessarily because it's broken, but so that we can do it (whatever it is), make it ours, introduce new techniques or technologies, and hopefully (but improbably) provide a better solution to the problem at hand than what was there before.

The forces arrayed against that impulse are formidable: managers and senior colleagues playing the role of (and often in actual fact being) traditionalists, by nature suspicious of any radical approach to a problem, too often dismissing out of hand any alleged improvement or innovation without "giving it a fair chance", at least in the eyes of the wet-behind-the-ears Young Turk. Occasionally, of course, improvements and innovations too beneficial to ignore do come out of this process.

As the developer matures (which may take days, decades or eternities), the Young Turk recognizes that not all of the hindrances were traditionalist per se; rather, they were operating from a different (usually business-oriented) set of priorities. Learning to understand and appreciate those priorities is one of the fundamental aspects of any successful transformation from ivory-tower Geek to journeyman software craftsman (or -woman), able to make a contribution to customers and to the craft of software development.

What becomes obvious to all thoughtful participants and stakeholders in the process of developing and maintaining any software system is that a natural tension exists among three apparently conflicting ideas:

  • Don't change what works well enough when there are more pressing needs to attend to;
  • New capabilities, chosen and implemented properly, can have strongly beneficial effects (efficiency, productivity, wider customer scope, 'better' according to various aspects of quality);
  • Any change to an existing system involves direct and indirect costs which must be carefully evaluated and compared against the expected improvements.
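The third bullet, in particular, can be made concrete. Here is a minimal, purely illustrative sketch (the function name, numbers and risk factor are all invented for illustration, not drawn from any real project) of weighing a proposed change's expected benefit against its direct and indirect costs, padded for risk:

```python
def change_is_worthwhile(expected_benefit, direct_cost, indirect_cost,
                         risk_factor=1.5):
    """Toy decision rule: proceed only if the expected benefit clearly
    outweighs all costs, padded by a risk factor to account for the
    surprises every change to a working system carries."""
    total_cost = (direct_cost + indirect_cost) * risk_factor
    return expected_benefit > total_cost

# A tempting rewrite: a big estimated payoff, but once indirect costs
# (retraining, retesting, disruption) and risk are counted in, it loses.
print(change_is_worthwhile(expected_benefit=100,
                           direct_cost=40, indirect_cost=35))  # False
```

The point of the padding is the last bullet's "indirect costs": a change to a working system is rarely as cheap as its direct implementation cost suggests.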

Balancing and managing the conflict between those concepts throughout the lifetime of a piece of software is an art that has yet to be mastered with a level of reliability and cost widely acceptable to business customers. Several philosophies and techniques (often marketed as technologies) have been developed as attempted solutions to that problem. Three that I have found extremely useful, in complementary ways, over the last few decades are agile development, refactoring and the CMMI. (Those already familiar with the concepts may skip down to the paragraph which begins "As argued by practitioners such as Hillel...".)

The CMMI (Capability Maturity Model Integration, previously more widely and less precisely known as the CMM, for Capability Maturity Model) has been around for quite some time as a formal means of evaluating the maturity and coherence of an organization's development efforts (among other activities). It requires its users (organizations) to document in ever-increasing detail precisely how they go about the process of development, and to be able to prove (in an internal or external audit) that those processes are being followed, with any discrepancies noted and accounted for. As a standalone policy framework for software development, it suffers on two grounds:

  • The "what" and "why" of development are completely ignored. CMMI fundamentally doesn't care whether you're building a pile of junk that has serious practical problems, as long as you do it using the process you've defined for yourself.
  • The CMMI, along with US DOD Standard 2167A, has been blamed for the logging of more trees (to make paper) than virtually any other industry business practice. This is largely because both are seen (in CMMI's case, somewhat unfairly) as pushing "hidden mandates" for the rigid, never-far-from-obsolete "waterfall" model of development.

Agile development, on the other hand, is a survival response both to the (largely management-driven) mantra that "Real Artists Ship" and to the seemingly interminable periods between "milestones" (particularly in the waterfall model) during which nothing of intrinsic value is visible to an outside observer (e.g., management or customers). Its main criticism from opponents is that it produces too little documentation (the product is the documentation, to a large degree).

An aspect of, or adjunct to, agile development is refactoring, pioneered and popularized in Martin Fowler's book of the same name. Refactoring is all about how you can make (sometimes radical) changes to the internals of a software system, provided you keep the external (customer-facing, revenue-generating) interfaces constant. Wrapping one's mind around this as a formalized process, as opposed to the "don't-break-what-works" mentality common to experienced developers (and managers), is a threshold experience in a software craftsman's progress.
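To make the idea concrete, here is a deliberately tiny, hypothetical example (the pricing rules and names are invented for illustration) of a behavior-preserving refactoring: the internals are restructured, but the externally visible results stay identical.

```python
# Before: duplicated, hard-to-extend pricing logic buried in a loop.
def invoice_total_v1(items):
    total = 0.0
    for item in items:
        if item["type"] == "book":
            total += item["price"] * 0.9   # books get a 10% discount
        else:
            total += item["price"]
    return total

# After: the discount policy is factored out into a lookup table.
# The function's inputs and observable results are unchanged, so
# callers (the "customer-facing interface") never notice the rewrite.
DISCOUNTS = {"book": 0.9}

def invoice_total(items):
    return sum(item["price"] * DISCOUNTS.get(item["type"], 1.0)
               for item in items)

cart = [{"type": "book", "price": 20.0}, {"type": "cd", "price": 10.0}]
assert invoice_total_v1(cart) == invoice_total(cart) == 28.0
```

The assertion at the end is the heart of the discipline: a refactoring isn't done until you've demonstrated that the old and new internals produce the same external behavior.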

As argued by practitioners such as Hillel, the combination of the process-centric CMMI and the solution-centric, visible-progress Agile set of processes (including, particularly, refactoring) gives the "best of both worlds" for development — a focus on artifacts and products created through a survivable process, matched with a demonstrated and documented knowledge of exactly what is meant by "that process" under current circumstances. This also implies demonstrating understanding of the specifics of "current circumstances" that drive the decision-making and artifact-development processes — which brings us back full circle to understanding why Agile is a good fit to begin with.

Unlike the Rational Unified Process (RUP), which grew out of a waterfall-driven, management-oriented mentality (and the opportunity to sell software tools and consulting services to any shop using the heavily-marketed process), Agile and CMMI development can be successfully started after passing around a few books among the development and management staff. Tools and training, chosen thoughtfully, are extremely helpful, and the appraisal process within CMMI (formally SCAMPI, the well-known "Level 1" to "Level 5" assessment of process maturity) does eventually require an outside assessor, or registrar, to formally certify the organization. But to take a typical small development group from the customary initial chaos to a finely-tuned, market-leading, customer-satisfying machine, it has been my experience and observation that it is far easier (and more cost-effective) to implement CMMI in an Agile fashion than to go down the (vendor-specific) RUP route. Imposing either on a large organization would require an unlikely combination of managerial brilliance, sadism and masochism; revolutions boiling up from below and winning eventual management sanction have proven much more likely to succeed in that type of environment.

So what's a Young Turk (or a more seasoned craftsman) to make of all this blather? The simple point: Consistent improvement is practical, and without life- or career-threatening implications, achievable on a repeatable basis. Continual drastic change, in contrast and almost by definition, will lead the development group to many sleepless nights trying to get that last show-stopper bug fixed, management to wonder why those bozos in development can't be trusted to ship anything on time, and customers to wonder what they're really getting for their money beyond the hype and bling of the marketing materials. Which team would you rather be on?

Saturday 15 September 2007

Crap Is Not a Professional Goal

I recently, very briefly, worked with a Web startup based in Beijing (the "Startup"). The CEO of this company, an apparently very intelligent, focussed individual with a great talent for motivating sales and marketing people, takes as his guiding principle a quote from Guy Kawasaki: "Don't worry, be crappy".

The problem with that approach for the Startup, as I see it, is two-fold. To start, Kawasaki makes clear in his commentary that he's referring strictly to products that are truly innovative breakthroughs, exemplars of a whole new way of looking at some part of the world; the examples he cites are order-of-magnitude jumps such as the move from daisy-wheel to laser printing, or the Apple Macintosh. Very few products or companies meet that standard (and even if yours does, Kawasaki declares, you should eliminate the crappiness with all possible speed). No matter how simultaneously useful and geeky the service offered by the Startup's site is, it is, at best, a novel and useful twist resting on several existing, innovative-in-their-day technologies. (A friend to whom I explained the company commented that "it sounds as innovative as if Amazon.com only sold romance novels within the city limits of Boston" — hardly a breakthrough concept.)

The second, more insidious, problem with the approach, and the trap that many early-1990s Silicon Valley startups fell into, is taking crappiness as a given, without even trying to deliver the one-two punch of true innovation and a sublimely well-engineered product that immediately raises the bar for would-be "me-too" copycats. (Sony, for instance, has traditionally excelled at this, as has the Apple iPod, which learned from the mistakes Kawasaki cites for the earliest Macintosh.) Deliver crap, and anybody can compete with you once they understand the basics of your product. The wireless mouse is a good example of this.

If you tell yourself from the get-go that you'll be satisfied if you ship a 70%-quality product, what will happen is that, as time goes by, that magical 70% becomes 50%, then 30%, then whatever it takes to meet the date you told the investors. And if management doesn't trust engineering to give honest, realistic estimates (as is typical in software and pandemic in startups), you have a recipe for disaster: engineering takes a month to come back with an estimate that development will take 12-18 months; management hears '12' and automatically cuts that to 6 and pushes to have a "beta" out in 4. The problem is that, if you're dealing with an even marginally innovative product, things are not cut-and-dried; the engineers will have misunderstood some aspects of the situation, underestimated certain risks, and been completely blind to others. This was pithily summed up, in another field entirely, by Donald Rumsfeld:

There are known "knowns." There are things we know that we know. There are known unknowns. That is to say there are things that we now know we don't know. But there are also unknown unknowns. There are things we don't know we don't know. So when we do the best we can and we pull all this information together, and we then say well that's basically what we see as the situation, that is really only the known knowns and the known unknowns. And each year, we discover a few more of those unknown unknowns.

Companies that simultaneously forget the ramifications of this while taking too puffed-up a view of themselves are leaving themselves vulnerable to delivering nothing more useful or profitable than the old pets.com (not the current PetSmart) sock puppet — and probably nothing that memorable, either.

And that further assumes that they don't fall prey to preventable disasters like losing the only hard drive running a production server built with non-production, undocumented software. If Web presence defines a "Web 2.0" company in the eyes of its customers and investors, going dark could be very costly indeed.