Showing posts with label automated testing. Show all posts

Sunday, 6 May 2012

Rules can be broken, but there are Consequences. Beware.

If you're in an agile shop (of whatever persuasion), you're quite familiar with a basic, easily justifiable rule:

No line of production code shall be created, modified or deleted in the absence of a failing test case, for which the change is required to make the test case pass.

Various people add extra conditionals to the dictum (my personal favourite is "…to make the test case pass with the minimal, most domain-consistent code changes currently practicable") but, if your shop is (striving towards being) agile, it's very hard to imagine that you're not honouring that basic dictum. Usually in the breach at first, but everybody starts in perfect ignorance, yes?
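
For readers who haven't lived the cycle, the dictum can be sketched with a hypothetical example (the function and numbers are mine, not from any real project): the test exists and fails first, and only then is the production code written to make it pass.

```python
import unittest

# Step 2: production code, written only because the test below demanded it.
def apply_discount(price, rate):
    """Return price reduced by rate (0..1), rounded to cents."""
    if not 0 <= rate <= 1:
        raise ValueError("rate must be between 0 and 1")
    return round(price * (1 - rate), 2)

# Step 1: the failing test case, written first. Running it before
# apply_discount existed produced the required "red" failure.
class ApplyDiscountTest(unittest.TestCase):
    def test_ten_percent_off(self):
        self.assertEqual(apply_discount(100.00, 0.10), 90.00)

    def test_invalid_rate_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.00, 1.5)
```

The "minimal, most domain-consistent" qualifier above means apply_discount gains no speculative features: every line it contains is there to turn a specific red test green.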

I (very) recently was working on a bit of code that, for nearly its entire existence over several major iterations, had 100% test coverage (technically, C0 or line coverage) and no known defects in implemented code. It then underwent a short (less than one man-week) burst of "rush" coding aimed at demonstrating a new feature, without the supporting tests having been done beforehand. It then was to be used as the basis for implementing a related set of new features, that would affect and be affected by the state of several software components.

That induced some serious second-guessing. Do we continue mad-hatter hacking, trusting experience and heroic effort to somehow save the day? Do we go back and backfill test coverage, to prove that we understand exactly what we're dealing with and that it works as intended before jumping off into the new features (with or without proper specs/tests up front)? Or do we try to take a middle route, marking the missing test coverage as tech debt that will have to be paid off sometime in The Glorious Future To Come™? The most masochistic of cultists (or perhaps the most serenely confident of infinite schedule and other resources) would pick the first; the "agile" cargo-cultist with an irate manager breathing fire down his neck the second; but the third is the only pragmatic hope for a way forward… as long as development has and trusts in assurances that the debt will be paid in full immediately after the current project-delivery arc (at which time increased revenue should be coming in to pay for the developer time).

The moral of the story is well-known, and has been summed up as "Murphy always gets his cut" (that's Murphy of Murphy's Law fame), "payback's a b*tch" and other, more "colourful" metaphors. I prefer the dictum that started this post, perhaps alternately summed up as

Don't point firearms at bits of anatomy that you (or their owner) would mind losing. And, especially, never, ever do it more than once.

Because while even the most dilettante of managers is likely to have heard Fred Brooks' famous "adding people to a late project only makes it later", or his rephrasing as "nine women cannot have a baby in one month", too few know of Steve McConnell's observation (from 1995!) that aggressive schedules are at least equally delay-inducing. If your technical people say that, with the scope currently defined, something will take N man-weeks, pushing for N/4 is an absolutely fantastic way to make delivery even in N*4 a major challenge.

Remember, "close" only counts in horseshoes and hand grenades and, even then, only if it's close enough to have the intended effect. Software, like the computers it runs on, is a binary affair; either it works or it doesn't.

Saturday, 8 May 2010

She's Putting Me Through Changes...

...they're even likely to turn out to be good ones.

As you may recall, I've been using and recommending the Kohana PHP application framework for some time. Kohana now offer two versions of their framework:

  • the 2.x series is an MVC framework, with the upcoming 2.4 release to be the last in that series; and

  • the 3.0 series, which is an HMVC framework.

Until quite recently, the difference between the two has been positioned as largely structural/philosophical: if you wish to develop with the 'traditional' model-view-controller architecture, then 2.x (currently 2.3.4) is what you're after; with great documentation and tutorials, any reasonably decent PHP developer should be able to get Real Work™ done quickly and efficiently. On the other hand, the 3.0 offering (now 3.0.4.2) is a hierarchical MVC framework. While HMVC via 3.0 offers some tantalising capabilities, especially in large-scale or extended sequential development, there remains an enthusiastic, solid community built around the 2.3 releases.

One of the long-time problems with 2.3 has been how to do unit testing. Although vestigial support for both a home-grown testing system and the standard PHPUnit framework exists in the 2.3 code, neither is officially documented or supported. What this leads to is a separation between testing of non-UI classes, which are mocked appropriately and driven from the 'traditional' PHPUnit command line, and UI testing using tools like FitNesse. This encourages the developer to create as thin a UI layer as practical over the standalone (and more readily testable) PHP classes which that UI layer makes use of. While this is (generally) a desirable development pattern, encouraging and enabling wider reuse of the underlying components, it's quite a chore to get an automated testing/CI rig built around it.
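
The thin-UI pattern works because the standalone classes take their collaborators as injected dependencies, so tests can substitute mocks for them. A sketch of the idea, in Python with unittest.mock purely as an illustration (the class and method names are hypothetical, not from Kohana):

```python
import unittest
from unittest.mock import Mock

# Hypothetical non-UI class of the kind the thin-UI pattern keeps testable:
# its collaborator (a payment gateway) is injected, never constructed inside.
class OrderService:
    def __init__(self, gateway):
        self.gateway = gateway

    def place(self, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")
        return self.gateway.charge(amount)

class OrderServiceTest(unittest.TestCase):
    def test_delegates_charge_to_gateway(self):
        gateway = Mock()                      # stands in for the real gateway
        gateway.charge.return_value = "receipt-1"
        service = OrderService(gateway)
        self.assertEqual(service.place(42), "receipt-1")
        gateway.charge.assert_called_once_with(42)
```

The UI layer then does nothing but translate requests into calls on OrderService, leaving almost nothing that needs a browser-driving tool to verify.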

Then I came across a couple of pages like this one on LinkedIn (free membership required). The thread started out asking how to integrate PHPUnit with Kohana 2.3.4, and then described the move to 3.0:

I grabbed Kohana 3, plugged in PHPUnit, tested it, works a treat! So we're biting the bullet and moving to K3! :)

I've done a half-dozen sites in Kohana 2.3, as I'd alluded to earlier. I've just downloaded KO3 and started poking at it, with the expectation to move my own site over shortly and, in all probability, moving 3.0 to the top of my "recommended tools" list for PHP.

Like the original poster, Mark Rowntree, I would be interested to know if and how anybody got PHPUnit working properly in 2.3.4.

Thanks for reading.

Wednesday, 14 April 2010

Process: Still 'garbage in, garbage out', but...

...you can protect yourself and your team. Even if we're talking about topics that everybody's rehashed since the Pleistocene (or at least since the UNIVAC I).

Traditional, command-and-control, bureaucratic/structured/waterfall development process managed to get (quite?) a few things right (especially given the circumstances). One of these was code review.

Done right, a formal code review process can help the team improve a software project more quickly and effectively than ad-hoc "exploration and discovery" by individual team members. Many projects, including essentially all continuing open-source projects that I've seen, use review as a tool to (among other things) help new teammates get up to speed with the project. While it can certainly be argued that pair programming provides a more effective means to that particular end, pairing (and honestly, most agile processes) tends to focus on the immediate, detail-level view of a project. Good reviews (including but not limited to group code reviews) can identify and evaluate issues that are not as visibly obvious "down on the ground." (Cédric Beust, of TestNG and Android fame, has a nice discussion on his blog about why code reviews are good for you.)

Done wrong, and 'wrong' here often means "as a means of control by non-technical managers, either so that they can honour arbitrary standards in the breach or so that they can call out and publicly humiliate selected victims," code reviews are nearly Pure Evil™, good mostly for causing incalculable harm and driving sentient developers in search of more humane tools – which tend (nowadays) to be identified with agile development. Many individuals prominent in developer punditry regularly badmouth reviews altogether, declaring that if you adopt the currently-trendy process, you won't ever have to do those eeeeeeeeevil code reviews ever again. Honest. Well, unless.... (check the fine print carefully, friends!)

Which brings us to the point of why I'm bloviating today:

  1. Code reviews, done right, are quite useful;

  2. Traditional, "camp-out-in-the-conference-room" code reviews are impractical in today's distributed, virtual-team environment (as well as being spectacularly inefficient), and

  3. That latter problem has been sorted, in several different ways.

This topic came up after some tortuous spelunking following an essentially unrelated tweet, eventually leading me to Marc Hedlund's Code Review Redux... post on O'Reilly Radar (and then to his earlier review of Review Board, and to numerous other similar projects).

The thinking goes something like, Hey, we've got all these "dashboards" for CRM, ERP, LSMFT and the like; why not build a workflow around one that's actually useful to project teams? And these tools fit the bill – helping teams integrate a managed approach to (any of several different flavours of) code review into their development workflow. This generally gets placed either immediately before or immediately after a new, or newly-modified, project artifact is checked into the project's SCM. Many people, including Beust in the link above, prefer to review code after it's been checked in; others, including me, prefer reviews to take place before checkin, so as not to risk breaking any builds that pull directly from the SCM.
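
The before-checkin flavour is typically wired up as an SCM hook that refuses the commit until the required checks pass. A minimal sketch (the phpunit invocation is a placeholder; substitute whatever checks your shop's review workflow requires):

```python
import subprocess

def checks_pass(commands):
    """Run each check command in turn; any non-zero exit code fails the gate."""
    for cmd in commands:
        if subprocess.call(cmd) != 0:
            return False
    return True

# Hypothetical pre-commit gate: the SCM hook script calls checks_pass()
# and rejects the commit when it returns False.
PRE_COMMIT_CHECKS = [
    ["phpunit", "--stop-on-failure"],  # placeholder test-suite run
]
```

A review-dashboard integration would slot in the same way: one more command in the list that asks the review server whether the changeset has an approved review on record.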

We've been using collaborative tools like Wikis for enough years now that any self-respecting project has one. They've proven very useful for capturing and organising collective knowledge, but they are not at their best for tracking changes to external resources, like files in an SCM. (Trac mostly finesses this, by blurring the lines between a wiki, an SCM and an issue tracker.) So, a consensus seems to be forming, across several different projects, that argues for

  • a "review dashboard," showing a drillable snapshot of the project's code, including completed, in-process and pending reviews;

  • a discussion system, supporting topics related to individual reviews, groups of reviews based on topics such as features, or the project as a whole; these discussions can be searched and referenced/linked to; and

  • integration support for widely-used SCM and issue-tracking systems like Subversion and Mantis.

Effective use of such a tool, whatever your process, will help you create better software by tying reviews into the collaborative process. The Web-based versions in particular remove physical location as a condition for review. Having such a tool that works together with your existing (you do have these, yes?) source-code management and issue-tracking systems makes it much harder to have code in an unknown, unknowable state in your project. In an agile development group, this will be one of the first places you look for insight into the cause of problems discovered during automated build or testing, along with your SCM history.

And if you're in a shop that doesn't use these processes, why not?


On a personal note, this represents my return to blogging after far, far too long buried under Other Stuff™. The spectacularly imminent crises have now (mostly, hopefully) been put out of our misery; you should see me posting more regularly here for a while. As always, your comments are most welcome; this should be a discussion, not a broadcast!

Wednesday, 10 February 2010

NIH v. An Embarrassment of Riches

One thing most good developers learn early on is not to "reinvent" basic technology for each new project they work on. The common, often corporate, antithesis to this is NIH, or "Not Invented Here." But sometimes, it's hard to decide which "giants" one wants to "stand on the shoulders of."

I've recently done a couple of mid-sized Web projects using PHP and the Kohana framework. A framework, as most readers know, is useful (a) by helping you work faster and (b) by including a lot of usually-good code you don't have to write and maintain (but which you should understand!). Good frameworks encourage you to write your own code in a style that lends itself to reuse by other projects built on the same framework.

One task supported by many frameworks is logging. There have also been many "standalone" (i.e., not integrated into larger systems) logging packages. The best-known of these, and the source of many derivatives, is the Apache log4j package for Java. It has been ported to PHP, also as an Apache project, as log4php.

Log4php has saved me countless hours of exploratory debugging. I stand firmly with the growing group of serious developers who assert that if you use a reasonably agile process (with iterative, red, green, refactor unit testing) and make good use of logging, you'll very rarely, if ever, need a traditional debugger.

What does this have to do with Kohana? Well, Kohana includes its own relatively minimalist, straightforward logging facility (implemented as static methods in the core class, grumble, grumble). There's a standard place for such logs to be written to disk, and a nice little 'debug toolbar' add-on module that lets you see logging output while you're viewing the page that generated it.

So I ought to just ditch log4php in favor of the inbuilt logging system when I'm developing Kohana apps, right? Not so fast...

Log4php, as does log4j, has far more flexibility. I can log output from different sources to different places (file, system log, console, database, etc.), have messages written to more than one place (e.g., console and file), and so on. Kohana's logging API is too simple for that.

With log4php, I have total control over the logging output based on configuration information stored in an external file, not in the code itself. That means I can fiddle with the configuration during development, even deploy the application, without having to make any code changes to control logging output. The fewer times I have to touch my code, the less likely I am to inadvertently break something. Kohana? I only have one logging stream that has to be controlled within my code, by making Kohana method calls.

Many experienced developers of object-oriented software are uncomfortable with putting more than one logical feature into a class (or closely-related set of classes). Why carry around overhead you don't use, especially when your framework offers a nice extension capability via "modules" and "helpers"? While there may sometimes be arguments for doing so (the PHP interpreter is notoriously slow, especially when using dynamic features like reflection), I have always failed to understand how aggregating large chunks of your omniverse into a Grand Unified God Object™ pays dividends over the life of the project.

So, for now, I'll continue using log4php as a standalone tool in my various PHP development projects (including those based on Kohana). One thing that just went onto my "nice to do when I get around to it" list is to implement a module or similar add-on that would more cleanly integrate log4php into the surrounding Kohana framework.

This whole episode has raised my metaphorical eyebrow a bit. There are "best practices" for developing in OO (object-oriented) languages; PHP borrows many of these from Java (along with tools like log4php and PHPUnit, the de facto standard unit-test framework). I did a fairly exhaustive survey of the available PHP frameworks before starting to use Kohana. I chose it because it wasn't an "everything including several kitchen sinks" tool like Zend, it wasn't bending over backwards to support obsolete language misfeatures left over from PHP 4, and it has what looks to be a pretty healthy "community" ecosystem (unlike some once-heavily-flogged "small" frameworks like Ulysses). I'm not likely to stop using Kohana very soon. I may well have to make time to participate in that community I mentioned earlier, if for no other reason than to better understand why things are the way they are.

But that's the beauty of open source, community-driven development, surely?

Thursday, 6 August 2009

Smokin' Linux? Roll Your Own!

As people who've encountered the "business end" of Linux have known for some time, the system (in whichever distribution you prefer) greatly rewards (some would say 'requires') tinkering and customisation. This can be done with any Linux system, really; some distros, like LinuxFromScratch and, to a lesser degree, Gentoo and its derivatives, explicitly assume that you will be customizing their base system according to your specific needs.

Now the major distros are getting into it. There have been various Ubuntu and Fedora customisation kits on the Net, but none, as far as I can tell, as directly supported (or as easy to use) as the one from OpenSUSE, the "community-supported" offering from Novell (who also offer SUSE Linux Enterprise Desktop and Server).

Visit the OpenSUSE site, and prominently visible is a link to the OpenSUSE Build Service, which "allows developers to package software for all major Linux distributions", or at least those that use rpm packaging, the packaging system used by Red Hat, Mandriva, CentOS, and other similar systems. But that's not all...

SUSE now have a new service, SUSE Studio, which allows users to create highly customized systems based on either the community (OpenSUSE) or enterprise versions of SUSE Linux. These "appliances" can be put together on the basis of "patterns", such as lamp_server (LAMP, or Linux/Apache/MySQL/PHP Web server) or technical_writing (which includes numerous tools like Docbook). You can even supply your own (either self-built or acquired elsewhere) RPM packages to include in the appliance you're building, and SUSE Studio will deal with the dependency matching (warning you if packages are required that aren't either among its standard set or uploaded by you).

Startup scripts, networking, basically anything that is usually handled through the basic installation or post-installation configuration - all can be configured within the SUSE Studio Web interface.

And then, when you've got your system just the way you want it, you can build it as either an ISO (CD/DVD) image to be downloaded and burned onto disc, or as a VM image for two of the most popular VM systems (VMWare and Xen).

But wait, there's more...

Using a Flash-enabled browser, you can even "test drive" your appliance, testing it while running (transparently) in an appropriate VM hosted within the SUSE Studio infrastructure. Especially if you have a relatively slow connection, this will let you do preliminary "smoke testing" without having to download the actual image to your local system. Once you're ready to do so, of course, downloading is very nearly a single-click affair. Oh, and you're given (presently) 15 GB of storage for your various builds - so you can easily do comparative testing.

What don't I like about it? In the couple of hours I've been messing around with it today, there's really only one nagging quibble: When you do the "test drive" of your new creation, the page you're running it in is a standard, non-secure http Web page. The page warns you that any data and keystrokes sent will not be encrypted, and recommends the use of ssh if that is a concern (by which most people will think https). But there's no obvious way to switch, and shutting down the running appliance (which starts by the time you read the warning) involves keystrokes and so on...

In fairness, this is still very clearly a by-invitation beta offering (but you can ask for an invite), and some rough edges are to be expected. I'm sure I'll run into another one or two as things go on. I'm equally certain that all the major problems will be smoothed out before SUSE Studio goes into general public availability.

So, besides the obvious compulsive hackers and the people building single-purpose appliance-type systems, who would really make use of this?

One obvious use case, which the SUSE Studio site describes, is as a canned demo of a software system. If you're an ISV, you can add your software or Web app to a SUSE Studio appliance, lock down the OS image to suit (encrypting file systems and so on), and hand out your discs at your next trade show (or have them downloadable from your Website). No worries about installing or uninstalling from prospective customers' systems; boot from the CD (or load it into a VM) and they're good to go.

Another thought that hit me this morning was for use as an interview filter. This can be in either of two distinct modes. First, you might be looking for people who are really familiar with how Linux works. Write up the specs of a SUSE Studio appliance (obviously more demanding than just the click-and-drool interface) and use an app of your own devising to validate the submitted entries. This validation could be automated in any of several ways.

The second possible interview filter would be as a programming/Web dev system. As a variation on the "ISV" example above, you load up an appliance with a set of tools and/or source files, ready to be completed and/or fixed by your candidates. They start up the appliance (either a live CD or VM), go through your instructions for the test, and then submit results (probably an encrypted [for authentication] archive of all the files they've touched, as determined by the base system tools) via email or FTP. On your end, you have a script that unpacks the submission into a VM and uses the appropriate automated testing tools to validate it. I can even see this as a business model for someone who offers this capability as a service to companies wishing to have a better filter for prospective candidates than resume-keyword matching - which as we all know is practically useless due to the high number of both false negatives and false positives.
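
The receiving end of that second filter might look something like the sketch below. Everything here is hypothetical (paths, the check command, the archive format), and a real rig would sandbox the unpacking and run the checks inside the VM rather than on the host:

```python
import subprocess
import tarfile
from pathlib import Path

def validate_submission(archive_path, workdir, check_cmd):
    """Unpack a candidate's archive into workdir and run the given check
    command against it; True means the automated checks exited cleanly."""
    workdir = Path(workdir)
    workdir.mkdir(parents=True, exist_ok=True)
    with tarfile.open(archive_path) as tar:
        tar.extractall(workdir)  # NB: scrub/sandbox untrusted archives in real use
    return subprocess.call(list(check_cmd) + [str(workdir)]) == 0
```

A mail handler or cron job would feed each incoming archive to validate_submission() and file the pass/fail result against the candidate's record.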

What do you all think?

Sunday, 26 March 2006

On the importance of keeping current

Now that PHP 6 is in the works, there is even less excuse than existed previously for Web sites (hosting providers in particular) not to migrate from PHP 4 to PHP 5. Tool and library developers are faced with the unpleasant possibility of having to support three major, necessarily incompatible, versions of PHP.

I am not yet up to speed on what PHP 6 is going to bring to the table, but PHP 5 (which will be two years old on 13 July 2006) makes PHP a much more pleasant, usable language for projects large and small. With a true object model, access control, exception handling, improved database support, improved XML support, proper security design concepts, and so on, it's a far cry from the revised-nearly-to-the-point-of-absurdity PHP 4.

Another great thing about PHP 5, if not strictly part of it, is the PHPUnit unit testing framework (see also the distribution blog). This is a wonderful tool for unit testing, refactoring, and continuous automated verification of your codebase. It will strongly encourage you to make your development process more agile, using a test first/test everything/test always mindset that, once you have crossed the chasm, will benefit a small one- or two-man shop at least as much as the large, battalion-strength corporate development teams that have to date been its most enthusiastic audience.

I have so far used this tool and technique for three customer projects: the first was delivered (admittedly barely) on time, the second was actually deliverable less than 2/3 of the scheduled calendar time into the project (allowing for further refactoring to improve performance) and delivered on time, and the third was delivered 10% ahead of time, with no heroic kill-the-last-bug all-night sessions required.

Discussing the technique with other developers, regarding its use in PHP and in other languages (such as Python, Ruby, C++ and of course Java; the seminal JUnit testing framework was written for Java), gives me the impression that this experience is by no means unique or extreme (nor did I expect it to be). Given that two of my three major career interests for the last couple of decades have been rapid development of high-quality code and the advancement of practices and techniques to help our software-development craft evolve towards a true engineering discipline, this would seem a natural thing for me to get excited and evangelical about. (The third, in case you're wondering, is the pervasive use of open standards and non-proprietary technologies to help focus efforts on true innovation.)

All of this may seem a truly geeky thing to rave about, and to a certain degree, I plead guilty of that. But it should also be important, or at least noteworthy, to anybody whose business or casual interests involve the use of software or software-controlled artifacts like elevators and TiVo. By understanding a little bit about how process and quality interact, clients, customers and the general-user public can help prod the industry towards continuous improvement.

Because, after all, "blue screens" don't "just happen".

Thursday, 27 October 2005

Craft, culture and communication

This is a very hard post for me to write. I've been wrestling with it for the last two days, and yes, the timestamp is accurate. If I offend you with what I say here, please understand that it is not meant to be personal. Rather, it probably means you may want to pay close attention.

When I was in university, back in the Pleistocene, I had a linguistics professor who went around saying that

A language is the definition of a specific culture, at a specific place, at a specific time. Change the culture, the place or the time, and the language changes — and if the language changes, it means that something else has, too.

Why is this relevant to the craft of software development?

Last weekend, I picked up a great book, Agile Java™: Crafting Code with Test-Driven Development, at the Kinokuniya bookstore at KLCC. There are maybe half a dozen books that any serious developer recognises as landmark events in the advancement of her or his craft. This, ladies and gentlemen, is one of them. If you are at all interested in Java, in high-quality software development, or in managing a group of software developers under seemingly impossible schedules, and if you are fully literate in the English language as a means of technical communication, then bookmark this page, go grab yourself a copy, read it, come back, and reread it tomorrow. It's not perfect (I would have liked to see the author use TestNG as the test framework rather than its predecessor, JUnit), but those are more stylistic quibbles than substance; if you go through the lessons in this book, you will have some necessary tools to improve your mastery of the craft of software development, specifically using the Java language and platform.

I immediately started talking up the book to some of my projectmates at Cilix, saying "You gotta learn this".

And then I stopped and thought about it some more. And pretty much gave up the idea of evangelising the book — even though I do intend to lead the group into the use of test-driven development. It is the logical extension of the way I have been taught (by individuals and experience) to do software development for nearly three decades now. It blew away several of the premises I was building a couple of white papers on — and replaced them with better ones (yes, Linda, it's coming Real Soon Now). TDD may not solve all your project problems, cure world poverty or grow hair on a billiard ball, but it will significantly change the way you think about — and practise — the craft of software development.

If you understand the material, that is.

There are really only three (human) languages that matter for engineering and for software: English, Russian and (Mandarin) Chinese, pretty much in that order. Solid literacy and fluency in Business Standard English and Technical English will enable you to read, comprehend and learn from the majority of technical communication outside Eastern Europe and China (and the former Soviet-bloc engineers who don't already know English are learning it as fast as they can). China was largely self-reliant in terms of technology for some time, for ideological and economic reasons; there's an amazing (to a Westerner) amount of technical information available in Chinese — but English is gaining ground there too, if initially often imperfect in its usage.

Coming back to why my initial enthusiasm about the book has cooled, for those of you who aren't actually from my company, I work at an engineering firm in Kuala Lumpur, Malaysia called Cilix. We do a lot of (Malaysian) government contract work in various technical areas, but we are also trying to grow a commercial-software (including Web applications) development group. Until recently, I managed that group; after top management came to its senses, I am now in an internal-consulting role. As Principal Technologist, I see my charter as consulting to the various groups within the Company on (primarily) software-related and development-related technologies, techniques, tools and processes, with a view to make our small group more effective at competition with organisations hundreds of times our size.

Up to now, we've been in what a Western software veteran would recognise as "classic startup mode": minimal process, chaotic attempts at organisation, with project successes attained through the heroic efforts of specific, talented individuals. My job is, in part, to help change that: to help us work smarter, not harder. Enter test-driven development, configuration management, quality engineering, and documentation.

Documentation. Hmmm. Oops.

One senior manager in the company recently remarked that there are perhaps five or six individuals in the entire company with the technical ability, experience and communication skills to help pull off this type of endeavour — both in terms of the project itself and of how we go about it. Two or at most three of those individuals, to my knowledge, are attached to the project, and one of these is less than sanguine about the currency of the technical knowledge and experience being brought to bear.

Since arriving on the project, I have handed two books to specific individuals, with instructions to at least skim them heavily and be able to engage in a discussion of the concepts presented in a week to ten days' time. Despite repeated prodding, neither of those individuals appeared to make that level of effort. This is not to complain specifically about those individuals; an informal poll of developers within the group, asking how many technical books each had read in the last 18 months, averaged solidly in the single digits. A similar survey taken in comparable groups at Microsoft, Borland, Siemens Rolm or Weyerhaeuser — all companies where I have worked previously — would likely average in the mid-twenties at least. So too, I suspect, would surveys at Wipro, Infosys or PricewaterhouseCoopers, some of our current and potential competitors.

While American technical people are rightly famous for living inside their own technical world and not getting out often enough, that provides only limited coverage as an excuse. In a craft whose very raison d'être is information, an oft-repeated truism (first attributed, to my knowledge, to Grace Hopper) holds that "90% of what you know will be obsolete in six months; 10% of what you know will never be obsolete. Make sure you get the full ten percent." If you don't read — both books and online materials — how can you have any credible hope of remaining current, or even competent, at your craft?

That principle extends to organisations. If the individual developers do not exert continuous efforts to maintain their skills (technical and linguistic) at a sufficiently high level, and their employer similarly chooses not to do so, how can that organisation remain competitive over the long term, when competitiveness may be directly linked to the efficiency and effectiveness with which that organisation acquires, utilises and expands upon information — predominantly in English? How can smaller organisations compete against larger ones which are more likely to have the raw manpower to scrape together a team to accomplish a difficult, leading-edge project? "Learn continuously or you're gone" was an oft-repeated mantra from business and industry participants in a recent Software Development Conference and Expo, an important industry conference. What of the individuals or organisations who choose not to do so?

Those of us involved in the craft of software and Web development have an obvious economic and professional obligation to our own careers to keep our own skills current. We also have an ethical, moral (and in some jurisdictions, fiduciary, legal) obligation to encourage our employers or other professional organisations to do so. There is no way of knowing whether, or how successfully, any given technology, language or practice will be in use in ten years' time, or even five. How many times has the IT industry been rocked by sudden paradigm shifts — the personal computer, the World Wide Web — which not only created large new areas of opportunity, but severely constrained growth in previously lucrative areas? I came into this industry at a time when (seemingly) several million mainframe COBOL programmers were watching their jobs go away as business moved first to minicomputers, then to PCs. History repeated itself with the shift to graphical systems like the Apple Macintosh and Microsoft Windows, and again with the World Wide Web and the other Internet-related technologies, and yet again with the offshoring craze of the last five years. What developer, or manager, or director, has the hubris to declare, in effect, that it won't happen again, that there won't be a new, disruptive technology shift that obsoletes skills and capabilities?

But whatever shift there is, whatever new technology comes along that turns college dropouts into megabillionaires, that changes the professional lives of millions of craftspeople... it will almost certainly be documented in English.

Tuesday, 18 October 2005

The Vision, Forward through the Rear-View Mirror

The vision I'm trying to promote here, which has been used successfully many times before, is that of a very flexible, highly iterative, highly automated development process, where a small team (like ours) can produce high-quality code rapidly and reliably, without burning anybody out in the process. (Think Agile, as a pervasive, commoditized process.) Having just returned (17 October) from being in hospital due to a series of small strokes, I'm rather highly motivated to do this personally. It's also the best way I can see for our small team to honour our commitments.

To do this, we need to be able to:

  • have development artifacts (code/Web pages) integrated into their own documentation, using something like Javadoc;
  • have automated build tools like Ant regularly pull all development artifacts from our software configuration management tool (which, for now, is Subversion);
  • run an automated testing system (like TestNG) on the newly built artifacts;
  • add issue reports documenting any failed tests to our issue tracking system, which then crunches various reports and
  • automatically emails relevant reports to the various stakeholders.

The whole point of this is to have everybody be able to come into work in the morning and/or back from lunch in the afternoon and know exactly what the status of development is, and to be able to track that over time. This can dramatically reduce the delays and bottlenecks that come from the traditional flailing about of a more ad hoc development style.

Obviously, one interest is test-driven development where, as in most so-called Extreme Programming methods, all development artifacts (such as code) are fully tested at least as often as the state of the system changes. What this means in practice is that a developer would include code for testing each artifact, integrated with that artifact. An automated test tool would then run those tests and report any results to the Quality Engineering team. This would not eliminate the need for a QE team; it would make the team more effective by helping to separate the things which need further exploration from the things that are, provably, working properly.
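As a minimal sketch of that idea (all class and method names here are hypothetical, and a real project would use TestNG's @Test annotations rather than a naming convention), the production artifact and its tests can live in the same class, with a small reflective runner standing in for the automated test tool:

```java
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

// Hypothetical artifact: a money-rounding utility whose tests are kept
// alongside it, so an automated runner can exercise them on every build.
public class PriceUtil {

    // production code under test
    public static long toCents(double amount) {
        return Math.round(amount * 100.0);
    }

    // tests integrated with the artifact; TestNG would discover these
    // via annotations instead of the "test" name prefix used here
    public static void testWholeAmount() {
        if (toCents(12.00) != 1200) throw new AssertionError("12.00 -> 1200");
    }

    public static void testFractionalAmount() {
        if (toCents(19.99) != 1999) throw new AssertionError("19.99 -> 1999");
    }

    // tiny stand-in for the automated test tool: find and run every
    // test method, collecting the names of any that fail
    public static List<String> runTests() {
        List<String> failures = new ArrayList<>();
        for (Method m : PriceUtil.class.getDeclaredMethods()) {
            if (!m.getName().startsWith("test")) continue;
            try {
                m.invoke(null);
            } catch (Exception e) {
                failures.add(m.getName());
            }
        }
        return failures;
    }

    public static void main(String[] args) {
        System.out.println("failures: " + runTests());
    }
}
```

The list of failures is exactly what would flow into the issue-tracking and reporting steps above; a framework merely does this discovery, isolation and reporting far more robustly than a twenty-line runner can.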

Why does this matter? For example, there was an article on test-driven development in the September 2005 issue of IEEE Computer (reported on here) that showed one of three development groups reducing defect density by 50% after adopting TDD, and another similar group enjoying a 40% improvement.

All this becomes interesting to us at Cilix when we start looking at tools like:

  • Cobertura, for evaluating test coverage (the percentage of code accessed by tests);
  • TestNG, one successor to the venerable JUnit automated Java testing framework. TestNG is an improvement for a whole variety of reasons, including being less intrusive in the code under test and having multiple ways to group tests in a way that makes it much harder for you to forget to test something;
  • Ant, the Apache-developed, Java-based build tool. Being Java-based and not interactive per se, it is easy to automate;
  • and so on, as mentioned earlier.