
Tuesday, 27 August 2013

When Development DOES Go Horribly Wrong™

I just finished watching Ian Cooper's NDC Oslo talk, "TDD, Where Did It All Go Wrong". If you're doing any flavour of TDD (BDD, etc.), this will likely be one of the more important hours you spend this year. Ian helped me wrap my head around what some of those voices screaming in the back of my mind as I (and likely you) write code have been trying to make me understand.

Bear in mind that I've been doing test-driven development for a decade, and some form of behaviour-driven, outside-in development for perhaps half that. I was once known as a "crazy blue-sky hacker", who composed symphonic-length pieces of software at a single sitting. (I'm explicitly disclaiming anything about the quality of that software; while it may have seemed sufficiently fit for purpose at the time, there almost certainly were bugs aplenty awaiting the hapless maintainer.)

One of Ian's main points in the talk, if not the main point, is that test-driven development (and its successors such as BDD) should test behaviours of the system, not how every little internal method behaves. Lots of great insights here, from how to be more effective with fewer tests, to "write unit tests that focus on behaviours and thus can be used for acceptance". More bluntly, "the reason to test is a new behaviour, not a method or a class." I'm as guilty of that as anybody.

That last has been something I've been trying to work toward for well over a year. Unfortunately, one of the poisonously bad habits I've picked up working in Rails is the idea that there's a one-for-one mapping of specs/tests to implementation classes and methods (model, controller, helper, etc.). These are the trees that keep me from seeing the forest of what I'm ultimately trying to accomplish, and more often than not, have me polishing the nodules on each root of each tree.

One thing that comes out of this understanding for Rails (or, to be fair, for any framework in any language that focuses more on implementation classes than on expressed behaviours) is that I have for some time now avoided using the generators, particularly scaffolds, for core application components (models, controllers, views, etc.). My preferred workflow has evolved into something like what I list below. Note that this is my existing workflow; it does not yet incorporate the new understanding of proper TDD.

  1. I'll usually take a "first whack" at a model spec and a bare-bones model, just to have something "specific" to work with in the later steps. I am beginning to see the way I have been doing this as a bad habit, particularly in "traditional" Rails. A Rails model should be concerned with persistence and validation, and an absolutely minimal amount beyond that. What I really want to start out with is my initial idea of a "business" domain object, which is Not The Same Thing;
  2. I'll write what in the Rails world is called a "feature spec" and most of the rest of the omniverse knows as some form of "integration spec", modelling how a specific feature works. ("As a Member, I want to create a new BlogPost, to begin a discussion about a topic.") The initial "high-level" spec may or may not evolve into a series of specs which each demonstrate an individual sub-feature, but will always be expressed from the outside in, mocking what does not yet exist;
  3. Next is a controller spec and the controller that it specifies, as a "traditional" developer-oriented spec (or set of specs). When done, I should be able to integrate this controller into the feature spec written previously (still using mock models) and demonstrate that the controller works;
  4. Now, I re-look at the model(s) involved, turning those into "live" specs and "real" code that works correctly with what I've written thus far; my feature specs and controller specs still pass;
  5. After the models and controllers are done and working with the feature specs, I'll do the views. I should have a very firm idea what's needed for these by now, as they've been used throughout (in mock or throwaway form) by the feature specs; this is where I write the code that emits the HTML, CSS and Script code for the UI.

That was my old workflow. I'm going to explore how to adapt and adjust this to more effectively and efficiently get the job done in light of Ian Cooper's talk and a (very near-future) reread of Kent Beck's Test-Driven Development: By Example. I say "effectively and efficiently" as my current project's spec-to-live-code ratio (in LOC) is approximately 13:4. I think most of us would agree that that's a very strong code smell.

Thursday, 2 September 2010

Patterns and Anti-Patterns: Like Matter and Anti-Matter

Well, that's a few hours I'd like to have over again.

As both my regular readers know, I've long been a proponent of agile software development, particularly with respect to my current focus on Web development using PHP.

One tool that I, and frankly any PHP developer worth their salt, use is PHPUnit for unit testing, a central practice in what's called test-driven development or TDD. Essentially, TDD is just a bit of discipline that requires you to determine how you can prove that each new bit of code you write works properly — before writing that new code. By running the test before you write the new code, you can prove that the test fails... so that, when you write only the new code you intend, a passing test indicates that you've (quite likely) got it right.
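In concrete PHPUnit terms, that discipline looks something like the following minimal sketch; the Calculator class and its add() method are purely illustrative, not code from the project discussed below:

require_once( 'PHPUnit/Framework.php' );

// Step 1: write the test first and run it. With no Calculator class yet,
// this fails (errors, in fact), which is exactly what we want to see.
class CalculatorTest extends PHPUnit_Framework_TestCase
{
    public function testAddsTwoNumbers()
    {
        $Calc = new Calculator();
        $this->assertEquals( 5, $Calc->add( 2, 3 ) );
    }
}

// Step 2: only after watching the test fail do we write the minimal code
// that makes it pass.
class Calculator
{
    public function add( $a, $b )
    {
        return $a + $b;
    }
}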

At one point, I had a class I was writing (called Table) and its associated test class (TableTest). Once I got started, I could see that I would be writing a rather large series of tests in TableTest. If they remained in a single class, they would quickly grow quite long and repetitive, as several tests would verify small but crucial variations on common themes. So, I decided to do the "proper" thing and decompose the test class into smaller, more focused pieces, and have a common "parent" class manage all the things that were shared or common between them. Again, as anyone who's developed software knows, this has been a standard practice for several decades; it's ordinarily a matter of a few minutes' thought about how to go about it, and then a relatively uneventful series of test/code/revise iterations to make it happen.

What happened this afternoon was not "ordinary." I made an initial rewrite of the existing test class, making a base (or "parent") class which did most of the housekeeping detail and left the new subclass (or "child" class) with just the tests that had already been written and proven to work. (That's the key point here; I knew the tests passed when I'd been using a single test class, and no changes whatever were made to the code being tested. It couldn't have found new ways to fail.)

Every single test produced an error. "OK," I thought, "let's make the simplest possible two-class test code and poke around with that." Ten minutes later, a simplified parent class and a child class with a single test were producing the same error.

The simplified parent class can be seen on this page, and the simplified child class here. Anybody who knows PHP will likely look at the code and ask, "what's so hard about that?" The answer is, nothing — as far as the code itself goes.

What's happening, as the updated comments on pastebin make clear, is that there is a name collision between the "data" item declared as part of my TableTest class and an item of the same name declared as part of the parent of that class, PHPUnit's PHPUnit_Framework_TestCase.

In many programming languages, conflicts like this are detected and at least warned about by the interpreter or compiler (the program responsible for turning your source code into something the computer can understand). PHP doesn't do this, at least not as of the current version. There are occasions when being able to "clobber" existing data is a desirable thing; the PHPUnit manual even documents instances where that behaviour is necessary to test certain types of code. (I'd seen that in the manual before; but the significance didn't immediately strike me today.)
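Reconstructed from memory rather than lifted from the pastebin code, a minimal sketch of the trap looks something like this (the column data and the test method are illustrative; the assumption, per the comments mentioned above, is that the parent class already owns an item named data):

require_once( 'PHPUnit/Framework.php' );

class TableTest extends PHPUnit_Framework_TestCase
{
    // PHPUnit_Framework_TestCase already carries an item named 'data' of
    // its own; PHP lets a subclass redeclare the name with no warning or
    // error, and the framework machinery promptly stops working.
    protected $data;

    protected function setUp()
    {
        $this->data = array( 'columns' => array( 'id', 'name' ) );
    }

    public function testConstructsFromColumnDefinitions()
    {
        $Table = new Table( $this->data );
        $this->assertTrue( $Table instanceof Table );
    }
}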

This has inspired me to write up a standard issue-resolution procedure to add to my own personal Wiki documenting such things. It will probably make it into the book I'm writing, too. Basically, whenever I run into a problem like this with PHPUnit or any other similar interpreted-PHP tool, I'll write tests which do nothing more than define, write to and read from any data items that I define in the code that has problems. Had I done that in the beginning today, I would have saved myself quite a lot of time.
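As a sketch of what I mean (all names here are hypothetical), such a test does nothing but define, write to and read back the data members the real test class declares:

require_once( 'PHPUnit/Framework.php' );

// This test exists solely to prove that the data members the real test
// class declares survive a round trip. If one of those names clashes with
// something the framework (or any parent class) already owns, this points
// at the culprit immediately instead of three hours later.
class TableTestFieldSmokeTest extends PHPUnit_Framework_TestCase
{
    protected $data;

    public function testCanWriteAndReadOwnDataMember()
    {
        $this->data = array( 'marker' => 42 );
        $this->assertEquals( 42, $this->data['marker'] );
    }
}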

Namely, the three hours it did take me to solve the problem, and the hour I've spent here venting about it.

Thanks for your patience. I'll have another, more intelligent, post along shortly. (That "more intelligent" part shouldn't be too difficult now, should it?)

Saturday, 8 May 2010

She's Putting Me Through Changes...

...they're even likely to turn out to be good ones.

As you may recall, I've been using and recommending the Kohana PHP application framework for some time. Kohana now offer two versions of their framework:

  • the 2.x series is an MVC framework, with the upcoming 2.4 release to be the last in that series; and

  • the 3.0 series, which is an HMVC framework.

Until quite recently, the difference between the two has been positioned as largely structural/philosophical; if you wish to develop with the 'traditional' model-view-controller architecture, then 2.x (currently 2.3.4) is what you're after; with great documentation and tutorials, any reasonably decent PHP developer should be able to get Real Work™ done quickly and efficiently. On the other hand, the 3.0 (now 3.0.4.2) offering is a hierarchical MVC framework. While HMVC via 3.0 offers some tantalising capabilities, especially in large-scale or extended sequential development, there remains an enthusiastic, solid community built around the 2.3 releases.

One of the long-standing problems with 2.3 has been how to do unit testing. Although vestigial support for both a home-grown testing system and the standard PHPUnit framework exists in the 2.3 code, neither is officially documented or supported. What this leads to is a separation between non-UI classes, which are mocked appropriately and tested from the 'traditional' PHPUnit command line, and UI testing using tools like FitNesse. This encourages the developer to create as thin a UI layer as practical over the standalone (and more readily testable) PHP classes which that UI layer makes use of. While this is (generally) a desirable development pattern, encouraging and enabling wider reuse of the underlying components, it's quite a chore to get an automated testing/CI rig built around it.

Then I came across a couple of pages like this one on LinkedIn (free membership required). The thread started out asking how to integrate PHPUnit with Kohana 2.3.4, and the original poster then described moving to 3.0 as follows:

I grabbed Kohana 3, plugged in PHPUnit, tested it, works a treat! So we're biting the bullet and moving to K3! :)

I've done a half-dozen sites in Kohana 2.3, as I alluded to earlier. I've just downloaded KO3 and started poking at it, with the expectation of moving my own site over shortly and, in all probability, moving 3.0 to the top of my "recommended tools" list for PHP.

Like the original poster, Mark Rowntree, I would be interested to know if and how anybody got PHPUnit working properly in 2.3.4.

Thanks for reading.

Wednesday, 10 February 2010

NIH v. An Embarrassment of Riches

One thing most good developers learn early on is not to "reinvent" basic technology for each new project they work on. The common, often corporate, antithesis to this is NIH, or "Not Invented Here." But sometimes, it's hard to decide which "giants" one wants to "stand on the shoulders of."

I've recently done a couple of mid-sized Web projects using PHP and the Kohana framework. A framework, as most readers know, is useful a) by helping you work faster and b) by including a lot of usually-good code you don't have to write and maintain (but you should understand!). Good frameworks encourage you to write your own code in a style that encourages reuse by other projects that use the same framework.

One task supported by many frameworks is logging. There have also been many "standalone" (i.e., not integrated into larger systems) logging packages. The most well-known of these, and the source of many derivatives, is the Apache log4j package for Java. It has been ported to PHP, also as an Apache project, as log4php.

Log4php has saved me countless hours of exploratory debugging. I stand firmly with the growing group of serious developers who assert that if you use a reasonably agile process (with iterative red-green-refactor unit testing) and make good use of logging, you'll very rarely, if ever, need a traditional debugger.

What does this have to do with Kohana? Well, Kohana includes its own relatively minimalist, straightforward logging facility (implemented as static methods in the core class, grumble, grumble). There's a standard place for such logs to be written to disk, and a nice little 'debug toolbar' add-on module that lets you see logging output while you're viewing the page that generated it.

So I ought to just ditch log4php in favor of the inbuilt logging system when I'm developing Kohana apps, right? Not so fast...

Log4php, as does log4j, has far more flexibility. I can log output from different sources to different places (file, system log, console, database, etc.), have messages written to more than one place (e.g., console and file), and so on. Kohana's logging API is too simple for that.

With log4php, I have total control over the logging output based on configuration information stored in an external file, not in the code itself. That means I can fiddle with the configuration during development, even deploy the application, without having to make any code changes to control logging output. The fewer times I have to touch my code, the less likely I am to inadvertently break something. Kohana? I only have one logging stream that has to be controlled within my code, by making Kohana method calls.
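As a rough sketch of the difference (the OrderImporter class, the logger names and the configuration file name are all hypothetical, not code from an actual project): every routing and level decision lives in the external log4php configuration file, and the code only ever asks for a named logger.

require_once( 'log4php/Logger.php' );

// Which appenders exist, where they write and at what threshold is all
// decided in log4php.xml; nothing below changes when that file does.
Logger::configure( dirname( __FILE__ ) . '/log4php.xml' );

class OrderImporter
{
    private $log;

    public function __construct()
    {
        // Loggers are fetched by name; the external configuration decides
        // what actually happens to each name's output.
        $this->log = Logger::getLogger( 'app.import.orders' );
    }

    public function importFrom( $path )
    {
        $this->log->info( "Importing orders from {$path}" );

        if ( ! is_readable( $path ) )
        {
            $this->log->error( "Cannot read {$path}" );
            return false;
        }

        $this->log->debug( 'File is readable; proceeding' );
        return true;
    }
}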

Many experienced developers of object-oriented software are uncomfortable with putting more than one logical feature into a class (or closely-related set of classes). Why carry around overhead you don't use, especially when your framework offers a nice extension capability via "modules" and "helpers"? While there may sometimes be arguments for doing so (the PHP interpreter is notoriously slow, especially using dynamic features like reflection), I have always failed to understand how aggregating large chunks of your omniverse into a Grand Unified God Object™ pays dividends over the life of the project.

So, for now, I'll continue using log4php as a standalone tool in my various PHP development projects (including those based on Kohana). One thing that just went onto my "nice to do when I get around to it" list is to implement a module or similar add-on that would more cleanly integrate log4php into the surrounding Kohana framework.

This whole episode has raised my metaphorical eyebrow a bit. There are "best practices" for developing in OO (object-oriented) languages; PHP borrows many of these from Java (along with tools like log4php and PHPUnit, the de facto standard unit-test framework). I did a fairly exhaustive survey of the available PHP frameworks before starting to use Kohana. I chose it because it wasn't an "everything including several kitchen sinks" tool like Zend, it wasn't bending over backwards to support obsolete language misfeatures left over from PHP 4, and it has what looks to be a pretty healthy "community" ecosystem (unlike some once-heavily-flogged "small" frameworks like Ulysses). I'm not likely to stop using Kohana any time soon. I may well have to make time to participate in that community I mentioned earlier, if for no other reason than to better understand why things are the way they are.

But that's the beauty of open source, community-driven development, surely?

Saturday, 27 June 2009

Remember to test your testing tools!

I've been doing some PHP development lately that involves a lot of SPL, or Standard PHP Library exceptions. I do test-driven development for all the usual reasons, and so make heavy use of the PHPUnit framework. One great idea that the developer of PHPUnit had was to add a test-case method called setExpectedException(), which should eliminate the need for you (the person writing the test code) to do an explicit try/catch block yourself. Tell PHPUnit what you expect to see thrown in the very near future, and it will handle the details.

But, as the saying goes, every blessing comes with a curse (and vice versa). The architecture of PHPUnit pretty well seems to dictate that there can be only one such caught exception per test method. In other words, you can't set up a loop that repeatedly calls a method, passing it parameters that you expect it to throw on; the first time PHPUnit's behind-the-scenes exception-catcher catches the exception you told it was coming, it terminates the test case.

Oops. But if you think about it, pretty expectable (pardon the pun). For PHPUnit to catch the exception, the exception has to get thrown and unwind the call stack past your test-case method. That makes it very difficult (read: probably impossible to do reliably inside PHPUnit's current architecture) to resume your test-case code after the call that caused the exception to be thrown — which is what you'd want if you were looping through these things.

This leaves you, of course, with the option of writing try/catch blocks yourself — which you were hoping to avoid but which still works precisely as expected.
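A minimal sketch of that fallback, with a hypothetical Parser class and InvalidArgumentException standing in for whatever your real code throws:

require_once( 'PHPUnit/Framework.php' );

class ParserTest extends PHPUnit_Framework_TestCase
{
    public function testRejectsEveryMalformedInput()
    {
        $badInputs = array( '', '   ', 'no closing tag <', '*.*.*' );
        $Parser    = new Parser();

        foreach ( $badInputs as $input )
        {
            try
            {
                $Parser->parse( $input );
                $this->fail( "Expected an exception for input '{$input}'" );
            }
            catch ( InvalidArgumentException $expected )
            {
                // Swallow it and move on to the next input; this is the
                // loop that setExpectedException() cannot give us.
            }
        }
    }
}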

Moral of the story: Beware magic bullets. They tend to blow up in your face when you least expect it.

Sunday, 12 October 2008

Things that make you go 'Hmmmm'

...or 'Blechhhh', as the case may be... I've been using PHP since the relative Pleistocene (I recently found a PHP3 script I wrote in '99). I've been using and evangelising test-driven development (TDD) for about the last five years, usually with most such work being done in C++, Java, Python or other traditionally non-Web languages (with PHP really only being amenable to that since PHP 5 in 2004). So here I am, puttering away on a smallish PHP project that I've decided to TDD from the very beginning. For one of the classes, I throw together a couple of simple constructor tests in PHPUnit, to start, such as:
require_once( 'PHPUnit/Framework.php' );

require_once( '../scripts/foo.php' );

class FooTest extends PHPUnit_Framework_TestCase
{
    public function testCanConstructBasic()
    {
        $Foo = new Foo( 'index.php' );
    }
    
    public function testCanConstructBasicWildcard()
    {
        $Foo = new Foo( '*.php' );
    }
};
And, as is right and proper, I code the minimal class necessary to make that pass:
class Foo
{
};
That's it. That's really it. No declaration whatever for the constructor or any other methods in the class. Since it doesn't subclass something else, we can't just say "oh, there might be a constructor up the tree that matches the call semantics." PHPUnit will take these two files and happily pass the tests.

I understand what's really going on here - since the class is empty, you've just defined a name without defining any usage semantics (including construction). I would say fine; not a problem. But I would think that PHPUnit should, if not give an error, then at least have some sort of diagnostic saying "Hey, you're constructing this object, but there are no ctor semantics defined for the class." I can see people new to PHP and/or TDD, who are maybe just working through and mentally adapting an xUnit tutorial from somewhere, getting really confused by this. I know I did a double-take when I opened the source file to add a new method (to pass a test not shown above) and saw nothing between the curly braces.

On one level, very cool stuff. On another, equally but not always obviously important level, more than enough rope for you to shoot yourself in the foot. Or, to put it another way, even though I've been writing in dynamic languages off and on for ages, I still tend to think in incompletely dynamic ways. Sometimes this comes back and bites me. Beware: here be (reasonably friendly, under the circumstances) dragons.

Tuesday, 12 August 2008

Test Infection Lab Notes

In a continuing series... As current and former colleagues and clients are well aware, I have been using and evangelizing test-driven development in one flavor or another since at least 2001 (the earliest notes I can find where I write about "100% test coverage" of code). To use the current Agile terminology, I've been "test-infected".

My main Web development language is PHP 5.2 (and I'm anxiously awaiting the goodness to come in 5.3), using Sebastian Bergmann's excellent PHPUnit testing framework. PHPUnit uses a well-documented convention for naming test classes and methods. One mistake often made by people in a hurry (novices or otherwise) is to neglect those conventions and then wonder why "perfectly innocuous" tests break.

I fell victim to this for about ten minutes tonight, flipping back and forth between test and subject classes to understand why PHPUnit was giving this complaint:
There was 1 failure:
1) Warning(PHPUnit_Framework_Warning)
   No tests found in class "SSPFPageConfigurationTest".

FAILURES!
Tests: 1, Failures: 1.
about this code:
class SSPFPageConfigurationTest extends PHPUnit_Framework_TestCase
{
    public function canConstruct()
    {
        $Config = new SSPFPageConfiguration();
        $this->assertTrue( $Config instanceof SSPFPageConfiguration );
    }
};
which was "obviously" too simple to fail. The wise programmer is not afraid to admit his errors, particularly those arising from haste. The novice developer proceeds farther on the path to enlightenment; the sage chuckles in sympathy, thinking "been there, done that; nice to be reminded that other people have, too". May you do a better job of keeping your koans in a nice, neat cone.

Saturday, 10 May 2008

ANFSD: starting a series to scratch an itch

(And Now For Something Different, for the 5LA-challenged amongst you...)

I've made my living, for about half my career, on the proposition that if I stayed (at least) three to six months ahead of (what would become) the popular mean in software technology, I'd be well-positioned to help out when Joe Businessman or Acme Corporation came along and hit the same technology — with the effect of "Refrigerator" Perry hitting a reinforced-concrete wall. This went reasonably well as "the market" started using PCs, then GUIs, then object-oriented programming, and then "that Internet thingy" (Shameless plug: résumé here or in PDF format).

In other ways, I've been a staunch traditionalist. I've used IDEs from time to time, because I was working as part of a team that had a standard tool set, or because I was programming for Microsoft Windows and the Collective essentially requires that that be done in their (seventh-rate) IDE unless you want to decrease productivity by several dozen orders of magnitude.

Otherwise, just give me KATE or BBEdit and a command-line compiler and I'm happy. This continued for a significant chunk of the history of PCs, until I decided that, for the Java work I was doing, I really needed some of the whiz-bang refactoring and other tie-ins supported by Eclipse and NetBeans. Then I started hacking around on a couple of open-source C++ packages and thought I'd give the Eclipse C/C++ Development Tooling a try. Now I'm coming up to speed on wxWidgets development in C++.

During this learning-curve week, I spent a lot of time browsing the Web for samples, tutorials and so on. To call most of them execrable is to give them unwarranted praise. Having recently resumed work on a Web development book dealing with useful standards and helpful process, and since I've been doing C++ off and on since the mid-80s, I thought I'd start a series of blog entries that would:

  • Document some of the traps and tricks I hit to get a simple wxWidgets program into Eclipse;
  • Illustrate some early, very simple refactoring of the simple program to get a bit more sanity;
  • Get Subversion and Eclipse playing well together;
  • Explain why I think parts of the Agile method are simultaneously nothing new and the best new idea to hit development in a very long time;
  • Start using an automated-testing tool to build confidence during debugging and refactoring; and
  • Use a code-documentation tool in the spirit of JavaDoc to produce nice technical/API docs.
At the end of the series, you'll have a pretty good idea of how I feel most projects (regardless of underlying technology and specific tools) "should" be done. You'll have seen a very simple walk-through of the process, demonstrated using Linux, Eclipse, C++ and wxWidgets, but actually quite broadly applicable well beyond those bounds.

Please send comments, reactions, job offers, etc., to my email. Death threats, religious pamphlets, and other ignorance can, as always, go to /dev/null. Thanks!

Thursday, 27 October 2005

Craft, culture and communication

This is a very hard post for me to write. I've been wrestling with it for the last two days, and yes, the timestamp is accurate. If I offend you with what I say here, please understand that it is not meant to be personal. Rather, it probably means you may want to pay close attention.

When I was in university, back in the Pleistocene, I had a linguistics professor who went around saying that

A language is the definition of a specific culture, at a specific place, at a specific time. Change the culture, the place or the time, and the language changes — and if the language changes, it means that something else has, too.
Why is this relevant to the craft of software development?

Last weekend I picked up a great book, Agile Java™: Crafting Code with Test-Driven Development, at the Kinokuniya bookstore at KLCC. There are maybe half a dozen books that any serious developer recognises as landmark events in the advancement of her or his craft. This, ladies and gentlemen, is one of them. If you are at all interested in Java, in high-quality software development, or in managing a group of software developers under seemingly impossible schedules, and if you are fully literate in the English language as a means of technical communication, then bookmark this page, go grab yourself a copy, read it, come back, and reread it tomorrow. It's not perfect (I would have liked to see the author use TestNG as the test framework rather than its predecessor, JUnit), but those are more stylistic quibbles than substance; if you go through the lessons in this book, you will have some necessary tools to improve your mastery of the craft of software development, specifically using the Java language and platform.

I immediately started talking up the book to some of my projectmates at Cilix, saying "You gotta learn this".

And then I stopped and thought about it some more. And pretty much gave up the idea of evangelising the book — even though I do intend to lead the group into the use of test-driven development. It is the logical extension of the way I have been taught (by individuals and experience) to do software development for nearly three decades now. It blew several of the premises I was building a couple of white papers on completely away — and replaced them with better ones (yes, Linda, it's coming Real Soon Now). TDD may not solve all your project problems, cure world poverty or grow hair on a billiard ball, but it will significantly change the way you think about — and practise — the craft of software development.

If you understand the material, that is.

There are really only three (human) languages that matter for engineering and for software: English, Russian and (Mandarin) Chinese, pretty much in that order. Solid literacy and fluency in Business Standard English and Technical English will enable you to read, comprehend and learn from the majority of technical communication outside Eastern Europe and China (and the former Soviet-bloc engineers who don't already know English are learning it as fast as they can). China was largely self-reliant in terms of technology for some time, for ideological and economic reasons; there's an amazing (to a Westerner) amount of technical information available in Chinese — but English is gaining ground there too, if initially often imperfect in its usage.

Coming back to why my initial enthusiasm about the book has cooled, for those of you who aren't actually from my company, I work at an engineering firm in Kuala Lumpur, Malaysia called Cilix. We do a lot of (Malaysian) government contract work in various technical areas, but we are also trying to grow a commercial-software (including Web applications) development group. Until recently, I managed that group; after top management came to its senses, I am now in an internal-consulting role. As Principal Technologist, I see my charter as consulting to the various groups within the Company on (primarily) software-related and development-related technologies, techniques, tools and processes, with a view to make our small group more effective at competition with organisations hundreds of times our size.

Up to now, we've been in what a Western software veteran would recognise as "classic startup mode": minimal process, chaotic attempts at organisation, with project successes attained through the heroic efforts of specific, talented individuals. My job is, in part, to help change that: to help us work smarter, not harder. Enter test-driven development, configuration management, quality engineering, and documentation.

Documentation. Hmmm. Oops.

One senior manager in the company recently remarked that there are perhaps five or six individuals in the entire company with the technical ability, experience and communication skills to help pull off this type of endeavour — both the project itself and the way we intend to go about it. Two or at most three of those individuals, to my knowledge, are attached to the project, and one of these is less than sanguine about the currency of the technical knowledge and experience being brought to bear.

Since arriving on the project, I have handed two books to specific individuals, with instructions to at least skim them heavily and be able to engage in a discussion of the concepts presented in a week to ten days' time. Despite repeated prodding, neither of those individuals appeared to make that level of effort. This is not to complain specifically about those individuals: informally asking developers within the group how many technical books they had read in the last 18 months produced answers solidly in the single digits. A similar survey taken in comparable groups at Microsoft, Borland, Siemens Rolm or Weyerhaeuser — all companies where I have worked previously — would likely average in the mid-twenties at least. So too, I suspect, would surveys at Wipro, Infosys or PricewaterhouseCoopers, some of our current and potential competitors.

While American technical people are rightly famous for living inside their own technical world and not getting out often enough, that provides only limited cover as an excuse. In a craft whose very raison d'être is information, an oft-repeated truism (first attributed, to my knowledge, to Grace Hopper) holds that "90% of what you know will be obsolete in six months; 10% of what you know will never be obsolete. Make sure you get the full ten percent." If you don't read — both books and online materials — how can a software (or Web) developer have any credible hope of remaining current, or even competent, at his or her craft?

That principle extends to organisations. If the individual developers do not exert continuous efforts to maintain their skills (technical and linguistic) at a sufficiently high level, and their employer similarly chooses not to do so, how can that organisation remain competitive over the long term, when competitiveness may be directly linked to the efficiency and effectiveness with which that organisation acquires, utilises and expands upon information — predominantly in English? How can smaller organisations compete against larger ones which are more likely to have the raw manpower to scrape together a team to accomplish a difficult, leading-edge project? "Learn continuously or you're gone" was an oft-repeated mantra from business and industry participants in a recent Software Development Conference and Expo, an important industry conference. What of the individuals or organisations who choose not to do so?

Those of us involved in the craft of software and Web development have an obvious economic and professional obligation to our own careers to keep our own skills current. We also have an ethical, moral (and in some jurisdictions, fiduciary or legal) obligation to encourage our employers or other professional organisations to do so. There is no way of knowing whether, or how successfully, any given technology, language or practice will still be in use in ten years' time, or even five. How many times has the IT industry been rocked by sudden paradigm shifts — the personal computer, the World Wide Web — which not only created large new areas of opportunity, but severely constrained growth in previously lucrative areas? I came into this industry at a time when (seemingly) several million mainframe COBOL programmers were watching their jobs go away as business moved first to minicomputers, then to PCs. History repeated itself with the shift to graphical systems like the Apple Macintosh and Microsoft Windows, and again with the World Wide Web and the other Internet-related technologies, and yet again with the offshoring craze of the last five years. What developer, or manager, or director, has the hubris to declare, in effect, that it won't happen again, that there won't be a new, disruptive technology shift that obsoletes skills and capabilities?

But whatever shift there is, whatever new technology comes along that turns college dropouts into megabillionaires, that changes the professional lives of millions of craftspeople... it will almost certainly be documented in English.