
Saturday, 14 July 2012

Getting Your Stuff Done, or Stuff Done To You

This is the response I wanted to leave to "MrChimei" on the spot-on YouTube video, "Steve Jobs Vs. Steve Ballmer". Since YouTube has such a tiny (but understandable) limit on comment size, a proper response would not fit. Therefore...


Let me put it this way. It doesn't matter whether you're speaking out of limited experience, or limited cognition, or what; your flippant attitude will not survive first contact with reality (to paraphrase von Moltke).

I'm a Windows developer who's been developing for Windows since Windows 1.0 was in early developer beta, on up to Windows 8. I had nearly ten years professional development experience on five platforms before I ever touched Windows. I had three stints at Microsoft back when that was cool, and sold most of my stock when it was still worth something.

I've also supported users of Windows and various operating systems, from groups of 3-5 small businesspeople on up to being comfortably high in the operational support pecking order in a Fortune 100 company. I've seen what helps and doesn't help intelligent non-geeks get their work done.

Both in that position, and in my own current work, I've observed and experienced order-of-magnitude-or-better differences in productivity, usability, reliability, supportability… all in Apple's favour. I've worked with and for people who became statistics junkies out of an emotional imperative to "prove" Windows better, in any way, than other systems. The next such individual I meet who succeeds, out of a sample of over 20 to date, will be the very first.

In 25 years, I have never experienced a Windows desktop machine that stayed up and fully functional for more than approximately 72 hours, *including* at Redmond, prior to a lightly-loaded Windows 7 system.

In the last 6 years of using Macs and clones half-time or better, I have never had a Mac fail to stay up and working for at least a week. In the last five years, my notes show, I've had two occasions where a hard reset to the Mac I'm typing this on was necessary; both turned out to be hardware faults. Prior to Windows 7, any Windows PC that did not need to be hard-rebooted twice in a given fortnight was a rarity. Windows 7 stretched that out to 6 weeks, making it by far the most stable operating system Microsoft have shipped since Windows NT 3.51. (Which I will happily rave about at length to any who remember it.)

For many years, I too was a Windows bigot. The fact that Unix, then OS/2, then Mac OS had numerous benefits not available in Windows was completely beneath my attention threshold. The idea that (on average over a ten-year period) some 30% of my time seated at a Windows PC was devoted to something other than demonstrably useful or interesting activity was something that I, like the millions of others bombarded by Ziff-Davis and other Microsoft propaganda organs, took as the natural order of things.

Then I noticed that Mac users were having more fun. "Fine," I thought, "a toy should bring amusement above all." Then I noticed that they were getting more and better work done. "Well," I said to myself, "they're paying enough extra for it; they should get some return on their investment. I'm doing well enough as is."

And then, within the space of less than a year, all five of my Windows systems were damaged through outside attack. "Why," I asked. "I've kept my antivirus current. I've installed anti-spyware and a personal firewall in addition to the (consumer-grade) router and firewall connecting me to the Internet. I don't browse pr0n or known-dodgy sites. I apply all security patches as soon as they're released. Why am I going to lose this development contract for lack of usable systems?"

I discovered a nasty little secret: it's technically impossible to fully protect a Windows PC from attacks mounted with tools that a reasonably-bright eight-year-old can master in a Saturday afternoon. People responsible for keeping Windows PCs running have known this for over a decade; it's why the more clueful ones talk about risk mitigation rather than prevention, with multi-layered recovery plans in place and tested, rather than leaving everything to chance. For as long as DSL and cable Internet connections have been available, it's taken less time to break into a new, "virgin" Windows PC than to fully patch and protect it against all currently-likely threats.

People used to think that using cocaine or smoking tobacco was healthy for you, too.

What I appreciate most about the Mac is that, no matter what, I can sit down in front of one and in a minute or less, be doing useful, interesting work. I don't have the instability of Windows. I don't have the sense that I'm using something that was designed for a completely different environment, as Windows too closely resembles the pre-network (let alone pre-Internet) use of isolated personal computers. Above all, I appreciate the consistency and usability that let me almost forget about the tools I'm using to work with my data, or with data out in the world somewhere, and concentrate instead on what I'm trying to accomplish.

One system treats its users as customers, whose time, efficiency and comfort are important and who know they have choices if they become dissatisfied. The other platform treats its users as inmates, who aren't going to leave no matter what... and if that's true, then quality sensibly takes a back seat to profitability.

Which would you recommend to your best friend? Or even to a respected enemy?

Thursday, 24 November 2011

ANFSD: Fascism Bites Everyone In The Pocket (Among Other Places)

Fascism should more properly be called corporatism, for it represents the fusion of State and corporate power.

— B. Mussolini

If you've lived in the USSR, former Soviet republics, or much of south Asia (including Singapore), you're quite familiar with the concept of "exclusive distributors", which the Free World seems to have thrown on the ash-heap of history at roughly the same time as leaded petrol. For those who've never had the displeasure, it works just like it sounds: one appointed business entity is the only way for subjects of a particular country to lawfully access the products or services of a particular (foreign) company. In places like Singapore which follow a State-capitalism model, that exclusive agent customarily has strong ties to either the Government or to entities acting as agents of the Government (sovereign-wealth funds, public officials who also run nominally-private-sector companies, and so on). This rarely, if ever, is a boon for the consumer.

Case in point: today, I wanted to buy a good fan, a Vornado Compact 530, comparable to the Vornado fans I owned when in the States.

Naturally, I can't buy a fan from the Vornado Website, though it gives me plenty of information about the thing. A little search engine-fu tells me that Home-Fix are the exclusive distributor for Vornado in Singapore.

Naturally, I can't order anything, or even get any product information, from Home-Fix's site. I could, however, get the phone number(!) of their nearest store, and I called to enquire about availability and pricing.

The conversation went something like this:

Me: Can you tell me if you have Vornado fans?
Clerk: Yes, we do. Which one are you looking for?
Me: The "Compact 630".
Clerk: Yes, we have that one, in white and black.
Me: How much is it?
Clerk: S$170.¹
Me: $170?!? So your rate on the US dollar is about three Singapore dollars, then? The list price on the Vornado Website is US $49.99!
Clerk: (as to a very small, very slow child) Oh, that's the online price. It's bound to be cheaper.
Me: Well, I've done some checking on various US, European and Australian stores; the walk-in retail price is right about the same.
Clerk: Well, our price is S$170.
Me: Well, thank you for your time.

Not to be terribly unfair to either Home-Fix or the clerk; that's the way the system operates here, and any company that didn't jack their prices up to whatever the marks will pay isn't doing things The Singapore Way. It's not as though it's actually people's own money; they're just holding it until those who pull the levers of State decide they want it back.²

So, ragging at Home-Fix or any of the many, many other businesses whose prices have no apparent correlation to corresponding prices elsewhere won't accomplish anything. If, as was let slip during the Singapore Presidential "election" this year, the Government's sovereign-wealth funds and people connected to High Places really do control 2/3 or more of the domestic Singapore economy, then complaining about any individual companies is rather like the man who's been attacked by a chainsaw-wielding madman who then worries about all the blood on his shirt. Fix the real problems, and the small ones will right themselves.

Incidentally, this also illustrates why I support the Occupy movement. If Americans want to see what another decade or two of economic and political polarisation between the top 400 families and the rest of the country will look like, Singapore is a good first-order approximation. And some sixty percent of Singaporeans apparently either couldn't be bothered to think differently, or were too afraid to.

Sigh. I still need to find a reliable, efficient fan.

Footnotes:

1. According to the current XE.com conversion, 170 Singapore dollars (S$170) is approximately US$130.19, or some 260 percent of the list price. In a free society, that would be called "gouging"; here in Singapore it's called "buying a foreign product". Remember, no foreign companies really operate here. When you walk into a McDonald's or Starbucks or Citibank in Singapore, you're walking into a local, almost invariably Government-linked, franchise or local representative. The quality and experience difference can be, charitably, stark. (Return)

2. Known in the US as the "thug defense", after a Los Angeles mugger who used that line of "reasoning" as the basis for an attempted pro se defence in court. He was, of course, unsuccessful. (Return)

Tuesday, 28 September 2010

Don't Waste My Time.

What follows is a critique and a complaint, not a rant. What's the difference, you ask? A rant, in my view, is a primal scream of frustration and near-impotent rage that often doesn't let a few (allegedly) non-essential facts get in the way of a good thrashing. It also takes far more energy than I'm able to summon at present, for reasons that may become clear below.


As I've mentioned previously, I've been kicking the tires on the (relatively new) Kohana 3.0 application framework for PHP Web development. I'd previously used (and enthused about) the 2.3.x ("KO2") release; I was itching to get my hands dirty with 3.0.¹ After a couple of dozen hours spent plinking away at it, however, I'm reevaluating my choices.

Two of the things that endeared the earlier system to me were the good-to-excellent, copious documentation and the active, always-willing-to-help user community (which, quite frankly, are the backbones of any open-source project).

KO3 is an entirely different proposition from KO2. As I write this, the eighth bug-fix update to 3.0 is available on the Web site. Since this is a new "the same, but different" project, all the users are starting fresh, so there can't be the same level of everyone-helps-everyone-else camaraderie as with KO2. This puts a foreseeable, far greater burden on the documentation to help people get productive as quickly and effectively as possible. However, the documentation, to put it charitably, remains in a beta-quality state. This is true in comparison both to the earlier release and to other PHP Web frameworks such as Symfony², CakePHP, FLOW3 and (notoriously) Zend. With most of these other frameworks, as with KO2, it was a quick, straightforward process figuring out how to get from the "hello-world" style demos, to being able to create a useful-but-simple site, to branching out from there. It's taken four times longer to get half as far with KO3 as with KO2.

Judging by comments in the Kohana support forums, I'm not alone in this; the documentation has been roundly panned by virtually all users who've bothered to comment. There's been far too much defensive "no, it isn't bad, you just need to read a bit more, and use the source, Luke" attitude from the project folks. During the KO2 lifecycle, the attitude I understood from their responses was more along the lines of "we know there are a few problems; we're working on them," quickly followed by "take a look at this." I don't know if 3.0 is so much more complex than 2.x that they simply don't have the bandwidth to document things to their previous standards. Frankly, I don't care any more.

I've decided that my existing projects that have been started in Kohana 2.3 will remain there; possibly moving to 2.4 when it becomes the officially-supported release. But I do not plan to invest any more time and effort into Kohana 3.0, at least until the new project has had some time to mature. I fully recognise the potentially self-defeating attitude inherent in that viewpoint. Virtually any open-source project depends on its community to "plug the holes" that the "official" project maintainers don't have time for or deliberately leave as open questions for the community. Well-run community projects are extremely collaborative, communication-rich environments.

Other projects take a "vendor/product" approach, essentially saying "Here's everything you'll need, soup-to-nuts; we'd really appreciate it if you built around the edges and took things deeper in your direction of interest, but the core that we're responsible for is solid." Those "vendors", the Zends and Sensio Labs of the world, have rich open-source offerings that they use as a platform to build (or offer) what actually pays the bills.

While I have a strong philosophical and experiential preference for community-driven (or at least -oriented) projects, there have to be times when I just want to Get Things Done rather than embark on an endless voyage of discovery.³ It is at those times that I'll reach for something that "just works" well enough for me to accomplish whatever it is I'm trying to do at the moment, whether it's to write a book or to bring up a client's Web site. I know and accept that any new tool, or new version of a tool I've used previously, will require a certain amount of time to "get up to speed" on. I don't know everything (or necessarily anything) before I learn it; the most I can hope for (and what I really do insist on) is that things make sense within the logical and semantic frameworks⁴ that they're expressed in, and that that expression is accessible, complete and correct enough that I can feel like I'm making progress. This invariably involves some sort of documentation, whether a "conventional" or online manual, a Wiki, or a user forum; the format is less important to me than the attributes I mentioned earlier.

Kohana 3.0, as it currently stands, does not meet that standard for me. And so I'm back in a feverish learn-and-evaluate mode with at least two other frameworks. I have projects to finish, and I have several chapters of a book I'm working on that had initially been written around Kohana 2, and will now need to be substantially redone.⁵

I intend to give each of those new candidate frameworks the same amount of time that it took me to get productive in Kohana 2.x (which was significantly less than the "industry leader," as I've previously mentioned). This is going to be an interesting week and a half or so, in the Chinese-curse sense of the word.

Footnotes:

1. Technical reasons for moving to KO3 included the switch from a model-view-controller (MVC) architecture to hierarchical MVC (or HMVC); if you know what these mean, you should know this is a Very Big Deal™. Additionally, I've found it a Very Bad Thing to tie myself to a legacy (obsolescent) project, and the end of KO2 is being made very plain on the Kohana Web site. (Return)

2. If the new, pre-release Symfony 2 code is as good as the documentation, we're in for a real treat. (Return)

3. I am typing this on a Mac rather than a Linux box, after all (though I have three Linux VMs running at the moment). (Return)

4. This implies, of course, that there are "logical and semantic frameworks" sufficiently visible, understandable and relevant to the task at hand. (Return)

5. I certainly don't want to fall into the same trap as a previous series of PHP books, which relied on the obsolete and inadequate Ulysses framework. Ulysses has been justly and productively panned, which has to reflect poorly on the aforementioned book (which I happen to own) and its authors. (Return)

Thursday, 2 September 2010

Patterns and Anti-Patterns: Like Matter and Anti-Matter

Well, that's a few hours I'd like to have over again.

As both my regular readers know, I've long been a proponent of agile software development, particularly with respect to my current focus on Web development using PHP.

One tool that I, and frankly any PHP developer worth their salt, use is PHPUnit for unit testing, a central practice in what's called test-driven development or TDD. Essentially, TDD is just a bit of discipline that requires you to determine how you can prove that each new bit of code you write works properly — before writing that new code. By running the test before you write the new code, you can prove that the test fails... so that, when you write only the new code you intend, a passing test indicates that you've (quite likely) got it right.
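
In PHPUnit terms, one turn of that crank looks something like this. A minimal sketch: the Table class and its getName() method here are purely illustrative, not this project's actual code.

    // Written first. Run it and watch it fail: Table doesn't exist yet.
    class TableTest extends PHPUnit_Framework_TestCase
    {
        public function testTableKnowsItsName()
        {
            $table = new Table( 'users' );
            $this->assertEquals( 'users', $table->getName() );
        }
    }

    // Written second: just enough code to turn the failing test green.
    class Table
    {
        protected $name;

        public function __construct( $name )
        {
            $this->name = $name;
        }

        public function getName()
        {
            return $this->name;
        }
    }

Wash, rinse, repeat, in small steps.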

At one point, I had a class I was writing (called Table) and its associated test class (TableTest). Once I got started, I could see that I would be writing a rather large series of tests in TableTest. If they remained joined in a single class, they would quickly grow quite long and repetitive, as several tests would verify small but crucial variations on common themes. So, I decided to do the "proper" thing and decompose the test class into smaller, more focused pieces, and have a common "parent" class manage all the things that were shared or common between them. Again, as anyone who's developed software knows, this has been a standard practice for several decades; it's ordinarily the matter of a few minutes' thought about how to go about it, and then a relatively uneventful series of test/code/revise iterations to make it happen.

What happened this afternoon was not "ordinary." I made an initial rewrite of the existing test class, making a base (or "parent") class which did most of the housekeeping detail and left the new subclass (or "child" class) with just the tests that had already been written and proven to work. (That's the key point here; I knew the tests passed when I'd been using a single test class, and no changes whatever were made to the code being tested. It couldn't have found new ways to fail.)

Every single test produced an error. "OK," I thought, "let's make the simplest possible two-class test code and poke around with that." Ten minutes later, a simplified parent class and a child class with a single test were producing the same error.

The simplified parent class can be seen on this page, and the simplified child class here. Anybody who knows PHP will likely look at the code and ask, "what's so hard about that?" The answer is, nothing — as far as the code itself goes.
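
In case those pastebin links ever rot: here is a reconstruction of the shape of the two classes. The names and bodies below are my guesses for illustration, not the actual pasted code.

    // A guess at the simplified parent class: shared housekeeping only.
    abstract class TableTestBase extends PHPUnit_Framework_TestCase
    {
        protected function makeFixture()
        {
            return array( 'name' => 'test_table' );
        }
    }

    // A guess at the simplified child class, holding the single test.
    // Its $data property is the one that turns out to matter below.
    class TableTest extends TableTestBase
    {
        protected $data;

        protected function setUp()
        {
            $this->data = $this->makeFixture();
        }

        public function testHasName()
        {
            $this->assertEquals( 'test_table', $this->data['name'] );
        }
    }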

What's happening, as the updated comments on pastebin make clear, is that there is a name collision between the $data item declared as part of my TableTest class and an item of the same name declared as part of the parent of that class, PHPUnit's PHPUnit_Framework_TestCase.

In many programming languages, conflicts like this are detected and at least warned about by the interpreter or compiler (the program responsible for turning your source code into something the computer can understand). PHP doesn't do this, at least not as of the current version. There are occasions when being able to "clobber" existing data is a desirable thing; the PHPUnit manual even documents instances where that behaviour is necessary to test certain types of code. (I'd seen that in the manual before; but the significance didn't immediately strike me today.)

This has inspired me to write up a standard issue-resolution procedure to add to my own personal Wiki documenting such things. It will probably make it into the book I'm writing, too. Basically, whenever I run into a problem like this with PHPUnit or any other similar interpreted-PHP tool, I'll write tests which do nothing more than define, write to and read from any data items that I define in the code that has problems. Had I done that in the beginning today, I would have saved myself quite a lot of time.
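
Concretely, the canary tests I have in mind need only a few lines each. A sketch:

    // Does the framework leave my property alone? If this test errors out
    // or fails, suspect a name collision with something the framework
    // itself declares.
    class PropertyCanaryTest extends PHPUnit_Framework_TestCase
    {
        protected $data;

        public function testCanDefineWriteAndReadBackProperty()
        {
            $this->data = array( 'canary' => 'alive' );
            $this->assertEquals( 'alive', $this->data['canary'] );
        }
    }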

Namely, the three hours it did take me to solve the problem, and the hour I've spent here venting about it.

Thanks for your patience. I'll have another, more intelligent, post along shortly. (That "more intelligent" part shouldn't be too difficult now, should it?)

Friday, 18 June 2010

I Thought Standard Libraries Were Supposed to be Better...

...than hand coding. Either the PHP folks never got that memo, or I'm seriously misconceptualising here.

Case in point: I was reading through Somebody Else's Code™, and I saw a sequence of "hand-coded" assignments of an empty string to several array entries, similar to:

    $items[ 'key2' ] = '';
    $items[ 'key1' ] = '';
    $items[ 'key6' ] = '';
    $items[ 'key3' ] = '';
    $items[ 'key8' ] = '';
    $items[ 'key5' ] = '';
    $items[ 'key4' ] = '';
    $items[ 'key7' ] = '';

I thought, "hey, hang on; there's a function to do easy array merges in the standard library (array_merge); surely it'd be faster/easier/more reliable to just define a (quasi-)constant array and merge that in every time through the loop?"

Fortunately, I didn't take my assumption on blind faith; I wrote a quick little bit to test the hypothesis:


$count = 1e5;
$data = array(
        'key2' => '',
        'key1' => '',
        'key6' => '',
        'key3' => '',
        'key8' => '',
        'key5' => '',
        'key4' => '',
        'key7' => '',
        );
$realdata = array();

$start = microtime( true );
for ( $loop = 0; $loop < $count; $loop++ )
{
    $realdata = array_merge( $realdata, $data );
}
$elapsed = microtime( true ) - $start;
printf( "%d iterations with array_merge took %7.5f seconds.\n", $count, $elapsed );

$start = microtime( true );
for ( $loop = 0; $loop < $count; $loop++ )
{
    $data[ 'key2' ] = '';
    $data[ 'key1' ] = '';
    $data[ 'key6' ] = '';
    $data[ 'key3' ] = '';
    $data[ 'key8' ] = '';
    $data[ 'key5' ] = '';
    $data[ 'key4' ] = '';
    $data[ 'key7' ] = '';
}
$elapsed = microtime( true ) - $start;
printf( "%d iterations with direct assignment took %7.5f seconds.\n", $count, $elapsed );

I ran the tests on a nearly two-year-old iMac with a 3.06 GHz Intel Core 2 Duo processor, 4 GB of RAM, OS X 10.6.4 and PHP 5.3.1 (with Zend Engine 2.3.0). Your results may vary on different kit, but I would be surprised if the basic results were significantly re-proportioned. The median times from running this test program 20 times came out as:

Assignment process      Time (seconds) for 100,000 iterations
array_merge             0.41995
Hand assignment         0.15569

So, the "obvious," "more readable" code runs nearly three times slower than the existing, potentially error-prone during maintenance, "hand assignment." Hang on, if we used numeric indexes on our array, we could use the array_fill function instead; how would that do?

Adding the code:

    $data2 = array();
    $data2[ 0 ] = '';
    $data2[ 1 ] = '';
    $data2[ 2 ] = '';
    $data2[ 3 ] = '';
    $data2[ 4 ] = '';
    $data2[ 5 ] = '';
    $data2[ 6 ] = '';
    $data2[ 7 ] = '';
$start = microtime( true );
for ( $loop = 0; $loop < $count; $loop++ )
{
    $data2 = array_fill( 0, 8, '' );
}
$elapsed = microtime( true ) - $start;
printf( "%d iterations with array_fill took %7.5f seconds.\n", $count, $elapsed );

produced a median time of 0.21475 seconds, or some 37.9% slower than the original hand-coding.

For folks coming from other, compiled languages, such as C, C++, Ada or what-have-you, this makes no sense whatsoever; those languages have standard libraries that are not only intended to produce efficiently-maintainable code, but (given reasonably mature libraries) efficiently-executing code as well. PHP, at least in this instance, is completely counterintuitive (read: counterproductive): if you're in a loop that will be executed an arbitrary (and arbitrarily large) number of times, as the original code was intended to be, you're encouraged to write code that invites typos, omissions and other errors creeping in during maintenance. That's a pretty damning indictment for a language that's supposedly at its fifth major revision.

If anybody knows a better way of attacking this, I'd love to read about it in the comments, by email or IM.
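
One candidate I haven't benchmarked yet, for whoever wants to beat me to it: PHP copies arrays on plain assignment, so a straight copy from a template array sidesteps both the array_merge call and the typo-prone hand assignment. A sketch, using the same harness as above:

    $count = 1e5;
    $template = array(
            'key1' => '', 'key2' => '', 'key3' => '', 'key4' => '',
            'key5' => '', 'key6' => '', 'key7' => '', 'key8' => '',
            );

    $start = microtime( true );
    for ( $loop = 0; $loop < $count; $loop++ )
    {
        $data = $template;    // plain assignment: no function call at all
    }
    $elapsed = microtime( true ) - $start;
    printf( "%d iterations with template copy took %7.5f seconds.\n", $count, $elapsed );

Whether it actually beats the hand assignment is an empirical question; measure before trusting it.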

Saturday, 8 May 2010

She's Putting Me Through Changes...

...they're even likely to turn out to be good ones.

As you may recall, I've been using and recommending the Kohana PHP application framework for some time. Kohana now offer two versions of their framework:

  • the 2.x series is an MVC framework, with the upcoming 2.4 release to be the last in that series; and

  • the 3.0 series, which is an HMVC framework.

Until quite recently, the difference between the two has been positioned as largely structural/philosophical; if you wish to develop with the 'traditional' model-view-controller architecture, then 2.x (currently 2.3.4) is what you're after; with great documentation and tutorials, any reasonably decent PHP developer should be able to get Real Work™ done quickly and efficiently. On the other hand, the 3.0 (now 3.0.4.2) offering is a hierarchical MVC framework. While HMVC via 3.0 offers some tantalising capabilities, especially in large-scale or extended sequential development, there remains an enthusiastic, solid community built around the 2.3 releases.

One of the long-time problems with 2.3 has been how to do unit testing. Although vestigial support for both a home-grown testing system and the standard PHPUnit framework exists in the 2.3 code, neither is officially documented or supported. What this leads to is a separation between non-UI classes, which are mocked appropriately and tested from the 'traditional' PHPUnit command line, and UI testing using tools like FitNesse. This encourages the developer to create as thin a UI layer as practical over the standalone (and more readily testable) PHP classes which that UI layer makes use of. While this is (generally) a desirable development pattern, encouraging and enabling wider reuse of the underlying components, it's quite a chore to get an automated testing/CI rig built around this.

But then I came across a couple of pages like this one on LinkedIn (free membership required). This thread started out asking how to integrate PHPUnit with Kohana 2.3.4, and then described moving to 3.0:

I grabbed Kohana 3, plugged in PHPUnit, tested it, works a treat! So we're biting the bullet and moving to K3! :)

I've done a half-dozen sites in Kohana 2.3, as I'd alluded to earlier. I've just downloaded KO3 and started poking at it, with the expectation of moving my own site over shortly and, in all probability, moving 3.0 to the top of my "recommended tools" list for PHP.

Like the original poster, Mark Rowntree, I would be interested to know if and how anybody got PHPUnit working properly in 2.3.4.

Thanks for reading.

Tuesday, 22 December 2009

Blast from the Past

Another in a continuing series...

Microcomputer (as PCs were called before the IBM PC) veterans of a certain vintage well remember that most counterintuitively productive of productivity tools, WordStar 3.3 (and earlier). The hegemon of its day, WordStar used what at first (and usually fifth) inspection appeared to be whimsical, arbitrary key combinations for commands. Ctrl K-H for Help was invariably what new users first memorised. All through the 1980s and well beyond, any word-processing software that came onto the market had some degree of WordStar-compatible commands, either as their main command set or as a bolt-on to wean folks onto the "new" way of doing things. This was even true for the first several releases of WordPerfect and of Microsoft Word. (Word today has several available add-ons to add WordStar command compatibility.)

Why was this so popular? As noted in the Wikipedia article:

...the "diamond" of Ctrl-S/E/D/X moved the cursors one character or line to the left, up, right, or down. Ctrl-A/F (to the outside of the "diamond") moved the cursor a full word left/right, and Ctrl-R/C (just "past" the Ctrl keys for up and down) scrolled a full page up/down. Prefacing these keystrokes with Ctrl-Q generally expanded their action, moving the cursor to the end/beginning of the line, end/beginning of the document, etc. Ctrl-H would backspace and delete. Commands to enable bold or italics, printing, blocking text to copy or delete, saving or retrieving files from disk, etc. were typically a short sequence of keystrokes, such as Ctrl-P-B for bold, or Ctrl-K-S to save a file. Formatting codes would appear on screen, such ^B for bold, ^Y for italics, and ^S for underscoring.

Although many of these keystroke sequences were far from self-evident, they tended to lend themselves to mnemonic devices (e.g., Ctrl-Print-Bold, Ctrl-blocK-Save), and regular users quickly learned them through muscle memory, enabling them to rapidly navigate documents by touch, rather than memorizing "Ctrl-S = cursor left."

Why is this relevant (or even interesting) today? Besides the lessons to be learnt about interface design, it's interesting to note how many editors out there still pay homage to WordStar. I stumbled across joe again this morning; it's available on essentially all Linux and BSD distributions, with versions built for other systems as well (e.g., Mac OS X and Cygwin/Windows), and source freely available if your platform isn't yet supported or you just like to tinker around on one that is.

What makes this fairly scary for us old-timers is just how quickly the old finger habits come back. If you had more than a year's experience beating your head against the original WordStar, I dare you to work with joe or its ilk for more than a few minutes before "how do I do...?" completely falls away from your thoughts and you're just typing as fast as you can think.

For that's the real beauty of this type of "primitive", what-you-see-isn't-what-you-get interface: you're not distracted by the ephemera of making your work appear "just so", and can actually focus on the work of writing. And that, in our click-and-drool modern interfaces, is what we've lost -- and no amount of clever code wizardry on the part of the interface designers can bring us back to that. Why? Because of basic human nature - if we see a button, at some point we'll want to push that button - "just to make things look better." And, all of a sudden, we notice that the entire morning has flown past while we were focusing on the first three paragraphs of a major report that's due this afternoon. Oops.

There's a reason why almost every tool aimed at professional writers -- people who make their living at x cents per word -- has a "stripped down", minimalist interface, at least as an option. It's the same reason that far too many truly "old-school" writers give for writing on paper and then typing (or having someone type) their words into a computer: the fewer distractions you have, while still being able to do what you're trying to do, the more productive you'll be at it.

That concept extends far, far beyond the writing of prose -- and has too often been lost or forgotten in those other areas as well. Pity.

Wednesday, 18 November 2009

Two steps forward, three steps back

Alternate title: ARRRRRGGGGGHHHHHHH!!!!!

Both of you Gentle Readers may have noticed that I've been away from the blog for a while, and that a few posts that were previously published have gone missing. I've been busy fighting some other fires for a while, and my current network access has lacked the stability and efficiency that local propaganda would have you expect.

This evening (Wednesday 18th) I came across a piece of nifty-looking software, MacJournal by Mariner Software. It looks great — software that would let me compose/revise blog posts offline, in a native Mac app with nice organizational features and so on, at an attractive price, and with a 15-day evaluation period thrown in, just so you can try before you buy.

"Cool," I thought; "I'll be able to multitask on my shiny new MBP that's coming Any Day Now™."

Downloading and installing the eval copy went just as you'd expect; drag an icon into a folder, wait for the "Copying" progress dialog to go away, and it's done. Standard Mac user experience; nothing to see here, folks — unless you were expecting a Windows-style "Twenty Questions" installation.

I decided to do a really simple, trivial first exercise: select the five posts I'd written so far in a tutorial series; add the keyword ("label" in Blogger.com parlance) tutorial to them; save them back to Blogger. No animals were harmed in the performance of this experiment, and very explicitly, no content was directly, intentionally edited. (Note the qualifiers; they're important.)

(insert "train wreck" sound effects here.)

The first two parts were (relatively) unmolested; they didn't have any code blocks in them. The latter three did, however, and those were completely deformed. Numerous span elements were added, particularly around links (MacJournal seems to think links shouldn't be underlined, ever). Other formatting was changed; in particular, code tags were replaced by spans that set the font size to 13 points.

WTF?

It's going to take me a bit of noodling around in the software to figure out how to change the defaults to something that makes sense (at least for me), and until then, I'm back to editing in the browser. If the evaluation period expires before I'm happy with the configuration, then I'll comply with the license and blow it off my system. I'd really rather not do that; the feature list looks good, the interface is clean, and best of all, I don't have this Could not contact blogger.com line underneath my editing area as I type.

I understand that MacJournal, like most apps, has default ways of laying things out and working with things. I'm well aware of the difference between an "import" and a "copy" of something. But... I believe very strongly that the first rule of software, as medicine, should be "First, do no harm" — and that includes "don't mess with my formatting without even putting up a confirmation dialog asking my permission!" I really don't think that's too much to ask, or too hard to implement — and doing so would a) make a much more positive initial user experience by b) showing that you've thought things through well enough that c) your still-potential user isn't looking at an hour or two of careful, detailed work just to get back to where he was before he touched your product — or, rather, it touched his work.

Like most Mac users, I've gotten spoiled by how well most software on this platform is thought through to the tiniest details. Like most, I get annoyed when I have to deal with Windows or Linux apps that simply aren't thought through at all, apparently. (Spend a week with Microsoft Office or, even better, Apple iWork on the Mac; I dare you to go back to Office on Windows and be happy with it.) To run into a Mac app that fails such a simple use case so spectacularly (granted, in its default configuration) simply beggars explanation.

Tuesday, 4 August 2009

The Debate between Adequacy and Excellence

I was clicking through my various feeds hooked into NetNewsWire, in this case The Apple Core column on ZDNet, when I came across this item, where the writer nicely summed up the perfectly understandable strategy Microsoft have always chosen and compared that with Apple and the Mac. Go read the original article (on Better Living without MS Office) and then read the comment.

As I've commented on numerous times in this blog and elsewhere (notably here), I'm both very partial to open standards (meaning open data formats, but usually expressed in open source implementations) and to the Apple Mac. As I've said before, and as the experience of many, many users I've supported on all three platforms bears out, the Mac lets you get more done, with less effort and irritation along the way, than either Windows or Linux as both are presently constructed.

But the first two paragraphs of this guy's comment (and I'm sorry that the antispam measures on ZDNet apparently don't permit me to credit the author properly) made me sit up and take notice, because they are a great summation of how I currently feel about the competing systems:

The Macs vs. PC debate has been going on for about 25 years or so, but the underlying debate is much older. What we are really discussing is the difference between adequacy and excellence. While I doubt I would want to be friends with Frank Lloyd Wright or Steve Jobs, both represent the exciting belief in what is possible. While Bill Gates and Steve Ballmer rake in billions, their relative impact on the world of ideas is miniscule.

Bill Gates understands that business managers on the whole are a practical, albeit uninspired and short-sighted bunch. By positioning Microsoft early on to ride into the enterprise with the implicit endorsement of one of the biggest, longest-lived, and influential suppliers of business equipment, Gates was able to secure Microsoft's future. Microsoft's goal has never seemed to me to be to change the world, only to provide a service that adequately meets business needs. Microsoft has also shown from early on a keen awareness that once you get people to use your product, your primary goal is not to innovate to keep your customers, but, rather to make leaving seem painful and even scary. Many companies do this, but Microsoft has refined this practice into an art.

He then expands on this theme for four more paragraphs, closing with

Practically speaking Microsoft is here to stay. But I am glad that Apple is still around to keep the computer from becoming dreary, to inspire people to take creative risks, to express themselves, and to embrace the idea that every day objects, even appliances like the computer, can be more than just the sum of their functions.

Aux barricades! it may or may not be, depending on your existing preferences and prejudices. But it does nicely sum up, more effectively and efficiently than I have been able to of late, the reasons why Apple is important as a force in the technology business. Not that Microsoft is under imminent threat of losing their lifeblood to Apple; their different ways of looking at the world and at the marketplace work against that more effectively than any regulator could. But the idea that excellence is and should be a goal in and of itself, that humanity has a moral obligation to "continually [reach] well past our grasp", should stir passion in anyone with a functioning imagination. Sure, Microsoft have a commanding lead in businesses, especially larger ones — though Apple's value proposition has become much better there in the last ten years or so; it's hard to fight the installed base, especially with an entrenched herd mentality among managers. But, we would argue, that does not mean that Apple have failed, any more than the small number of buildings designed by Frank Lloyd Wright and his direct professional disciples argue for his irrelevance in architecture. If nobody pushes the envelope, if nobody makes a habit of reaching beyond his grasp, how will the human condition ever improve? For as Shaw wrote,

The reasonable man adapts himself to the world. The unreasonable man persists in trying to adapt the world to himself. All progress, therefore, depends upon the unreasonable man.

And that has been one of my favourite quotes for many years now.

Wednesday, 8 July 2009

The Best Tool for the Job

One of the nice things about growing up around (almost exclusively) men who were master mechanics, carpenters or other such highly skilled tradesmen was that I developed an appreciation both for "the best tool for the job at hand" and "making do with what's available" — and whichever of these applied, accomplishing the task at hand to the best of anyone's ability.

As I've progressed through my software and Web career, I've become highly opinionated about the tools I use, just like any other experienced software craftsperson I've ever known. You and I might use different tools to accomplish what functionally is the same task, but so long as we each have practical, experiential bases for those preferences, we should just go ahead and get what needs doing done. (There's an argument in there for open standards as a requisite for that to happen, but that's another post.)

Too many people who should know better have religious-level devotion to or hostility towards certain companies and/or products. Yes, that includes me; I know I've said some pretty inflammatory things, usually when I felt someone was expressing a religious belief masked as a technical opinion. No doubt they've felt the same about me and any others who were incautious enough to oppose their evangelism (or reactionism, depending on the circumstances). In general, it should be pretty evident to everyone with a personal or professional involvement in IT or personal electronics that trends are driven as much by "what I say three times is true!" as what actually can be shown to be true. That's how mediocre-at-best products become "industry Leaders"; inertia and close-mindedness set in, reinforced by a well-funded, continuous and strident marketing/branding campaign.

I was having a discussion about this online recently, with a former associate who's long had me pegged as an ABMer ("Anything but Microsoft"). I can understand how he formed that opinion; I've long complained about the (innumerable) defects in the "market-leading" operating system, and about how slowly progress has been made in cleaning up the most egregious faults (such as security). But I've also worked at Microsoft in Redmond — three different times — and I've always been impressed by the number of truly gifted people working there. They've had their triumphs and tragedies (anyone used Microsoft Bob lately?). They've had to deal with widely differing process and management effectiveness as they transfer between or liaise with different groups. They've ignored a lot of what has been done outside the company, but they've also created some amazing things inside; too many of which unfortunately never make it into public products.

And the quality of their work product varies as much as any of the factors that go into it. Cases in point: compare, say, Windows Vista with Windows Mobile or the XBox; compare Microsoft Outlook (forever known as "Lookout!" to security/admin people) with Entourage; compare Word for Windows to Word for the Mac — what I understand is a completely different code base (and visibly so) that "just happens" to be able to flawlessly read and write documents shared with Word for Windows.

I also reread a blog post I wrote last December where I detailed the issues I was starting to have with Apple's own Mail app for the Mac. I have a mail store that's hovered somewhere above 2 GB for the last year. I receive 100-200 legitimate emails per day (and up to 700 spams). I presently have over 230 filtering rules defined for how to handle all that mail. Those rules have been built up over the last five years or so — first using Mozilla Thunderbird, then Apple's Mail.app, and now a new system; a progression that also speaks eloquently about the value of open standards. I have never, to my knowledge, lost a saved message whilst transferring from one package to its successor. The few hiccups each transition has had with filtering rules have all been relatively easy to find and fix, with the newest app making that process breathtakingly simple.

The new mail app? As you've no doubt guessed, Microsoft Entourage. It, like every other Mac app I've ever used, Just Works as expected (at least until you get out to the far, bleeding edges). If Microsoft made Windows and Office for Windows as well as they make Entourage (and the rest of their Office:Mac products), they really wouldn't have to worry about competition — and they'd richly deserve that. The market-friendly price for their Mac product (where their major, worthy competitor sells for US$79) is just icing on the cake.

I don't hate Microsoft. I just wish they would stick to what they do as well or better than anyone else, and leave the crappy products that can never be anything but hypersonic train wrecks — like Windows and Internet Exploder. I wish that ever more fervently every time I'm asked to help some hapless Windows usee fix "why my computer doesn't work". That would also make Microsoft's long-suffering stockholders — including current employees, former employees and myself, among others — feel a lot better.

Wednesday, 27 May 2009

News Flash: Microsoft Reinvents Eiffel, 18 Years On

One of the major influences on the middle third of my career thus far was Bertrand Meyer's Eiffel programming language and its concept of design by contract. With such tools, for the first time (at least as far as I was aware), entire classes of software defects could be reliably detected at run time (dynamic checking) and/or at compile time (static checking). I worked on a couple of significant project teams in the mid- to late '90s that used Eiffel quite successfully. Further, it impacted my working style in other languages; for several years, I had a reputation on C and C++ projects for putting far more assert statements than was considered usual by my colleagues. More importantly, it made me start thinking in a different way about how to create working code. Later, as I became aware of automated testing, continuous integration and what is now called agile development, they were all logical extensions of the principles I had already adopted.
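
For those who never touched Eiffel, here is a rough taste of what design by contract feels like, transplanted into plain PHP assertions, much as I once peppered C and C++ with assert statements. This is a sketch only: in Eiffel, the require/ensure clauses are part of the language, inherited along with the implementation and checked for you, not bolted-on library calls.

    class Account
    {
        private $balance = 0;

        public function deposit( $amount )
        {
            // Precondition: deposits must be positive.
            assert( $amount > 0 );

            $old = $this->balance;
            $this->balance += $amount;

            // Postcondition: the balance grew by exactly $amount.
            assert( $this->balance === $old + $amount );

            return $this->balance;
        }
    }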

This all happened over a period of 15 or so years, in a field where anyone with more than 2 or 3 years' experience is considered "senior". But for me, and most other serious practitioners who I knew and worked with, two to three years was really just about as long as it took to answer more questions than we raised. That, in most crafts, is considered one of the signs of becoming a journeyman rather than a wet-behind-the-ears apprentice.

Then, a few hours ago, I was reading a blog entry by one David R. Heffelfinger which mentioned a project at Microsoft DevLabs called "SmallBasic". Another project that the same organization developed is called "Code Contracts"; there's a nice little set of tools (which will be built into the upcoming Visual Studio 2010 product), and a nice introductory video. Watch the video (you'll need Silverlight to view it), and then do some research on Eiffel and design-by-contract and so on, and it's very difficult not to see the similarities.

So, on the one hand, I'm glad that .NET developers will finally be getting support for 20-year-old concepts (by the time significant numbers of developers use VS 2010 and .NET 4.0). Anything that helps improve the developer and user experiences on Windows (or, in fact, any platform) is by definition a Good Thing™.

On the other hand, I see more evidence of Microsoft's historical Not Invented Here mentality; beating the drum for "new and wonderful ideas for Windows development" that developers on other platforms have been using effectively for some time. While the Code Contracts project indirectly credits Eiffel — the FAQ page links to Spec# at Microsoft Research, which lists Eiffel as one of its influences — it would have been nice to see acknowledgement and explanation of precursor techniques be made more explicitly. Failure to do so merely reinforces the wisdom of Santayana as applied to software: "Those who cannot remember the past are condemned to repeat it", as well as "Fanaticism consists in redoubling your efforts when you have forgotten your aim." This last is something that we who wish to improve our craft would do well to remember.

What do you all think?

Wednesday, 6 May 2009

Professionalism, Web development, and giving oxy to morons

Whereas a poor craftsman will blame his tools, poor tools will handicap even the most skilled craftsman.

As I insinuated in my previous post, I'm getting up to speed on the Zend Framework, the "900-kg elephant" of PHP application frameworks.

One major bone I have to pick with the ZF team is with regard to documentation: each time I've checked the site in the last couple of months, there's been an apparently current HTML version (now clocking in at some 300 HTML pages). There is also a PDF version, the promise of which is used as an enticement to register for their content distribution network (and, presumably, marketing info). As of this moment, however, the framework is at version 1.8.0, but the PDF version of the programmer's reference manual only covers version 1.6.0 (from September 2008), some 12 releases earlier. It no longer fully matches the actual code, to the point where it is not difficult for a new developer to get deeply confused.

After spending a half-hour browsing the HTML version of the document, I am unable to find any declaration as to which version of the Framework is documented. However, the README.TXT file included with the source distribution states that it covers the 1.8 release, revision 15226, released on April 30, 2009. Classes which are listed in the README as being new, such as Zend_Filter_Encrypt, are documented in the HTML programmer's guide. Establishing a match between the (HTML) doc and the current code is non-trivial, however. While it may be argued that people unfamiliar with browsing a Subversion repository are not likely to be common within Zend's target audience, I would counter that indirectly: a product release, particularly one with a strong industry following, should be

  • properly documented;
  • easy for a (prospective) user to verify that he has the complete package; and
  • with a definite, intuitive learning curve.
In my view, the Zend Framework fails on at least two of these points. The assertion within large segments of the PHP community that it is the "gold standard" of PHP application frameworks should be a disturbing, cautionary omen: if Web development, particularly PHP development, wishes to be taken seriously by the software industry at large, then some major improvements and attitude shifts need to occur quickly, publicly and effectively. It is still far too easy for potential developers outside the "early-adopter" leading edge to scoff that PHP development (and, by extension, Web development as a whole) is still far too immature and amateurish to be taken seriously. As someone who has developed professionally in PHP for some ten years now, I find that a disturbing state of affairs, and one that I would love to see (and participate in) a free-ranging discussion of.

Tuesday, 23 December 2008

Maybe not eating 'crow', specifically, but..... DUCK!!!

As in, "bend over, here it comes again..."

One of the things I have greatly appreciated about the Mac, especially with OS X, is how simple and straightforward software management is, compared to Linux and especially Windows (where every system change is a death-defying adventure against great odds). Operating system or Apple-supplied apps need an update? Software Update is as painless as it gets: the defaults Just Work in proper Mac fashion, but you can set your own schedule, along with a few other options. There is a well-established convention for third-party apps to check for updates via a Web service "phoning home" at app startup; this has been very easy to deal with. Application and file layout is regular and sensible; libraries and resources are generally grouped in bundles at the system or user level. After a few years of DLL hell in Windows and library mix-and-match in Linux, this was shaping up to be a real pleasure.

Then, as some of you know, I updated Mac OS X on my iMac from 10.5.5 to 10.5.6. As expected, that apparently went as smooth as glass. I even blogged about it. XCode worked; MS Office 2008 for the Mac worked; Komodo Edit worked; all my IM clients worked; all seemed customarily wonderful in the omniverse. I even started up Mail; it opened normally and happily downloaded my regular mail and Google mail, just as it had done every day for months. (I didn't actually open any messages then; that will turn out to be important.) Satisfied that everything Just Worked as always, I went back to working on a project for a few hours before turning in for the night.

Next morning, I went through the usual routine. Awake the Mac from hibernation; log in; start Yahoo, MSN and Skype; start Mail; open Komodo; open Web browsers (Safari, Opera and Camino) and I'm ready to get started. First thing...here's an interesting-sounding email message; let's open that up and... *POOF* — Mail crashes.

WTF? It started up just fine; I even got the "Message for you, Sir" Monty Python WAV I'd set Mail to use as my new-mail-received notification. I start Mail again. Picking a different message, I double-click it in the inbox. A window frame opens with the message title, sits empty for a few hundred milliseconds, then Mail goes away again. Absolutely, totally repeatable. Reboot changes nothing. Safe Boot (the Mac equivalent of Windows' "safe mode") changes nothing. The cold fingers of panic stroke my ribs like Glenn Gould at the piano. On a bad-karma scale of 0 to 10, initial reaction is an "O my God"; we're not dead, but we're hurt bad; the karma has definitely run over the dogma. 

The next couple of days are spent using my ISP's Webmail service, and a set of Python scripts I'd previously written to search mailbox contents — Apple Mail, like any sensible email program, adheres to established standard formats. If I'd been using Microsoft Lookout! in a similar situation, I'd have been up the creek.

Finally, I come across some Web-forum items that indicate that GPGMail needs to be updated; if it's not, Mail will crash under OS X 10.5.6 — which is exactly what was happening. (If you're not using GPGMail, GNU Privacy Guard, or any of the various GPG interfaces for Windows such as Enigmail for Mozilla Thunderbird, you don't know how many people are recording and/or reading your email — but if it transits a server in the US or UK, it's guaranteed that it will be.)

Installing the upgraded GPGMail bundle was the work of less than two minutes (hint: remove or rename the old bundle before copying the new one over. You probably don't need the insurance, but consider how we got here...). Then start up Mail as usual. It should, once again, Just Work — complete with being able to read and reply to messages, with or without GPG signatures.

OK, so what lessons can we take away from this experience, both as users and developers?

Time Machine may well be the single most rave-worthy piece of software I've touched in 30 years, but it can't (obviously, easily) do everything, and in a crisis, even experienced users may well not want to risk bringing too much (or too little) "back from history". There's definitely a market for add-ons to TM to do things like "look in my user and system library directories, the Application directory structure, MacPorts, etc., and bring application Foo back to the state it was in last Tuesday morning, but leave my data files as they are." I almost certainly could have done that with the bare interface -- but, since Mail was "broken" as part of an OS upgrade, and since (with the Windows/Linux experience fresh in mind) I wasn't comfortable exploring hidden dependencies... I was without my main email system for three days. Sure, I had workarounds -- that I wouldn't have had if I'd been in a stock Windows situation -- but that's not really the point, is it?

Also, app developers (Mac or other), add this to your "best practices" list: If your software uses any sort of plug-in/add-on architecture, where modules are developed independently of the main app, then you can have dependency issues. The API you make available to your plugin developers will change over time (or your application will stagnate); if you make it easy for them (and your users) to deal with your latest update, you'll be more successful. There's (at least) two ways to go about doing this:

The traditional "brute force" approach. Have a call you can use to tell plugins what version of the app is running, and allow them to declare whether or not they're compatible with that version. Notify the user about any that don't like your new version. For examples of this, see the way Firefox and friends deal with their plugins. Yes, it works, but it's not very flexible; a new version may come out that doesn't in fact modify any of the APIs you care about — which means that the plugin should work even though it was developed against version 2.4 of your app and you're now on 4.2.

Alternatively, a more fine-grained approach. Group your API into smaller, functional service areas (such as, say, an address-book interface or encryption services for an email program), and have your plug-in API support a conversational approach (a sketch in code follows the checklist):

  1. The app calls into the plugin, asking it which services it needs and what versions of each it supports.
  2. The app parses the list it gets back from the plugin. If the app version is later than the supported range for a specific feature identified by the plugin, add that to a "possibly unsupported" list. (If the app version is earlier than the range supported by the plugin, assume that it's not supported and go on to check the next one.)
  3. If the "possibly unsupported" plugin list is empty, go ahead and continue bringing up the app, loading the plugins normally; you're done with this checklist.
  4. For each item in the "possibly unsupported" list, determine whether the API for each feature required for the plugin has changed since the plugin was explicitly supported. (This is how a plugin for an earlier release, say 2.4, could work just fine with a later version, like 4.2.) If there's no change in the APIs of each feature required by the plugin, remove that plugin from the "possibly unsupported" list.
  5. If any plugins remain in the list, check if there's an updated version of that plugin on the Net. This might be done using a simple web-service-to-database-query on your Web server. If your Web server knows of an update, ask the user for permission to install it. If the user declines, or no upgrade is available, unload the plugin. (You'll check again next time the app is started; maybe there's an update by then.)
  6. Once the status of each plugin has been established, and compatible plugins loaded, finish starting up your app.
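
Here's one way that conversation might look in code. This is a hedged sketch rather than any real app's API; every name in it (FeatureRange, API_LAST_CHANGED, check_for_update and so on) is hypothetical, and the update check of step 5 is stubbed out.

    """Sketch of the fine-grained plugin check above; all names hypothetical."""
    from collections import namedtuple

    APP_VERSION = (4, 2)

    # Ships with each app release: the app version at which each service
    # area's API last changed incompatibly.
    API_LAST_CHANGED = {"address-book": (2, 0), "encryption": (3, 1)}

    # Step 1: a plugin declares each service it needs and the app-version
    # range it supports for that service.
    FeatureRange = namedtuple("FeatureRange", "feature min_ver max_ver")

    def possibly_unsupported(requirements):
        """Step 2: flag any feature whose declared range doesn't cover us."""
        return [r for r in requirements
                if not (r.min_ver <= APP_VERSION <= r.max_ver)]

    def still_compatible(req):
        """Step 4: an out-of-range plugin is fine if the feature's API has
        not changed since the last version the plugin was tested against."""
        return (APP_VERSION > req.max_ver and
                API_LAST_CHANGED[req.feature] <= req.max_ver)

    def check_for_update(plugin_name):
        """Step 5, stubbed: would ask your Web server whether a newer
        version of the plugin exists; here it always says no."""
        return None

    def load_decision(name, requirements):
        flagged = possibly_unsupported(requirements)
        if all(still_compatible(r) for r in flagged):
            return "load"                                      # steps 3 and 4
        return "load update" if check_for_update(name) else "unload"  # step 5

    # A plugin tested only up to 2.4 still loads on 4.2, because the
    # address-book API hasn't changed since 2.0...
    print(load_decision("vcard-import",
                        [FeatureRange("address-book", (2, 0), (2, 4))]))
    # ...but one leaning on the encryption API (last changed in 3.1) is unloaded.
    print(load_decision("sign-everything",
                        [FeatureRange("encryption", (2, 0), (2, 4))]))

The useful property here is that the table of API changes ships with the app, so a well-behaved old plugin never has to be rebuilt (or bother the user) just because the version number rolled over.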

Of course, there are various obvious optimisations and convenience features that can be built into this. Any presentation to the user can, and likely should, be aggregated: "here's the list of plugins I wasn't able to load and couldn't find updates for." Firefox and friends are a good open-source example of this, too. The checks for plugin updates can also be scheduled, so as not to slow down every app startup: daily, weekly, twice a month, whatever. The important thing is to let the user configure that schedule and view a list of plugins that are installed but not active.

As I said at the start of this post, I've been very favourably impressed by Mac apps' ease of use, including installation and maintenance. Mail fell down and couldn't get up again without outside assistance; that is unusual. That the failure was caused by a plugin, and that Mail could neither detect nor work around the conflict, frankly amazes me; I expect more from Apple. I'm not about to cut back my use of the Mac because of this, but I am going to pay more attention to how things work under the hood. Needing that awareness at all is what worries me; not needing it is precisely what has hitherto distinguished the Mac from the grubbier Windows and Linux alternatives.

Again, your comments are welcome.

Tuesday, 16 December 2008

Happy Updating....

If you're a Windows usee with a few years' experience, you've encountered the rare, monumental and monolithic Service Packs that Microsoft release on an intermittent basis (as one writer put it, "once every blue moon that falls on a Patch Tuesday"). They're almost always rollups of a large number of security patches, with more added besides; rarely, with the notable (and at the time very welcome) exception of Windows XP Service Pack 2, is significant user-visible functionality added. Now that SP3 has been out for seven months or so, it's interesting to see how many individuals and businesses (especially SMEs) haven't updated to it yet. While I understand, from direct personal experience, the uncertainty of "do I trust this not to break anything major?" (that is, "anything I use and care about?"), I have always advised installing major updates (and all security updates) as quickly as practical. Given that there will always be more gaping insecurities in Windows, closing every barn door you can just seems the most prudent course of action.

I got to thinking about this a few minutes ago, while working merrily away on my iMac. Software Update, the Mac equivalent of Windows' Microsoft Update, popped up, notifying me that it had downloaded the update for Mac OS X 10.5.6, and did I want to install it now? I agreed, typed my password when requested (to accept that a potentially system-altering event was about to take place, and approve the action), and three minutes later, I was logged in and working again.

Why is this blogworthy? Let's look at the comparison again. In effect, this was Service Pack 6 for Mac OS X 10.5. Bear in mind that 10.5.5 was released precisely three months before this latest update, and 10.5.0 on 26 October 2007, just under 14 months ago. "Switchers" from Windows to Mac quickly become accustomed to a more proactive, yet gentler and more predictable, update schedule than their Windows counterparts know. The vast majority of Mac users I've spoken with share my experience of never having had an update visibly break a previously working system. The same cannot be said for Redmond's consumers; witness the flurry of application and driver updates that directly follows each Windows service pack. XP SP2, as necessary and useful as it was, broke more systems than I or several colleagues can remember any single service pack doing previously, by changing behaviour that those programs had taken advantage of or worked around. Again, the typical Mac customer doesn't have that kind of experience. Things that work just tend to stay working.

Contrast this with Linux systems, where almost every day seems to bring updates to one group of packages or another, and where distributions vary wildly in the amount of attention paid to integrating the disparate packages, or at least ensuring that they don't step on each other. Some recent releases have greatly improved things, but that's another blog entry. Linux has historically assumed reasonably competent management of an installed system, while offering resources sufficient for almost anyone to become such a manager; again, recent releases make this much easier.

Windows, on the other hand, essentially requires a knowledgeable, properly-equipped and -staffed support team to keep the system working with a minimum of trouble; the great marketing triumph of Microsoft has been to convince consumers that "arcane" knowledge is unnecessary while simultaneously encouraging the "I'm too dumb to know anything about computers" mentality, in people who still pony up for the next hit on the crack pipe. Show me another consumer product that disrespects its paying customers to that degree without going belly-up faster than you can say "customer service". It's a regular software Stockholm syndrome.

The truth will set you free, an old saying tells us. Free Software proponents (contrast with open source software) like to talk about "free as in speech" and "free as in beer". Personally, after over ten years of Linux and twenty of Windows, I'm much more attracted by a different freedom: the freedom to use the computer as a tool to do interesting things and/or have interesting experiences, without having to worry overmuch about any runes and incantations needed to keep it that way.

Saturday, 15 September 2007

Crap Is Not a Professional Goal

I recently, very briefly, worked with a Web startup based in Beijing (the "Startup"). The CEO of this company, an apparently very intelligent, focussed individual with great talent motivating sales and marketing people, takes as his guiding principle a quote from Guy Kawasaki, "Don't worry, be crappy".

The problem with that approach for the Startup, as I see it, is twofold. First, Kawasaki makes clear in his commentary that he's referring strictly to products that are truly innovative breakthroughs, exemplars of a whole new way of looking at some part of the world. Very few products or companies meet that standard (and even if yours does, Kawasaki declares, you should eliminate the crappiness with all possible speed). No matter how simultaneously useful and geeky the service offered by the Startup's site is, it is at best a novel and useful twist resting on several existing, innovative-in-their-day technologies. (A friend to whom I explained the company commented that "it sounds as innovative as if Amazon.com only sold romance novels within the city limits of Boston"; hardly a breakthrough concept.) Indeed, Kawasaki makes clear that he's talking about order-of-magnitude innovation; the examples he cites are the jump from daisy-wheel to laser printing, and the Apple Macintosh.

The second, more insidious, problem with the approach, and the trap that many early-1990s Silicon Valley startups fell into, is that you take crappiness as a given, without even trying to deliver the one-two punch of true innovation and a sublimely well-engineered product that immediately raises the bar for would-be "me-too" copycats. (Sony, for instance, has traditionally excelled at this, as has the Apple iPod, which learned from the mistakes Kawasaki cites for the earliest Macintosh.) Deliver crap, and anybody can compete with you once they understand the basics of your product; the wireless mouse is a good example.

If you tell yourself from the get-go that you'll be satisfied shipping a 70%-quality product, then as time goes by that magical 70% becomes 50%, then 30%, then whatever it takes to meet the date you told the investors. And if management doesn't trust engineering to give honest, realistic estimates (as is typical in software and pandemic in startups), you have a recipe for disaster: engineering takes a month to come back with an estimate that development will take 12 to 18 months; management hears "12", automatically cuts it to 6, and pushes to have a "beta" out in 4. The problem is that if you're dealing with an even marginally innovative product, things are not cut-and-dried; the engineers will have misunderstood some aspects of the situation, underestimated certain risks, and been completely blind to others. This was pithily summed up, in another field entirely, by Donald Rumsfeld:

There are known "knowns." There are things we know that we know. There are known unknowns. That is to say there are things that we now know we don't know. But there are also unknown unknowns. There are things we don't know we don't know. So when we do the best we can and we pull all this information together, and we then say well that's basically what we see as the situation, that is really only the known knowns and the known unknowns. And each year, we discover a few more of those unknown unknowns.

Companies that simultaneously forget the ramifications of this while taking too puffed-up a view of themselves are leaving themselves vulnerable to delivering nothing more useful or profitable than the old pets.com (not the current PetSmart) sock puppet — and probably nothing that memorable, either.

And that further assumes that they don't fall prey to preventable disasters like losing the only hard drive running a production server built with non-production, undocumented software. If Web presence defines a "Web 2.0" company in the eyes of its customers and investors, going dark could be very costly indeed.

Sunday, 26 March 2006

On the importance of keeping current

Now that PHP 6 is in the works, there is even less excuse than before for Web sites (hosting providers in particular) not to migrate from PHP 4 to PHP 5. Tool and library developers now face the unpleasant possibility of having to support three major, necessarily incompatible, versions of PHP.

I am not yet up to speed on what PHP 6 will bring to the table, but PHP 5 (which will be two years old on 13 July 2006) makes PHP a much more pleasant, usable language for projects large and small. With a true object model, access control, exception handling, improved database and XML support, proper security design concepts, and so on, it's a far cry from the revised-nearly-to-the-point-of-absurdity PHP 4.

Another great thing about PHP 5, if not strictly part of it, is the PHPUnit unit-testing framework (see also the distribution blog). It's a wonderful tool for unit testing, refactoring, and continuous automated verification of your codebase, and it will strongly encourage you to make your development process more agile, with a test-first/test-everything/test-always mindset that, once you have crossed the chasm, benefits a small one- or two-man shop at least as much as the battalion-strength corporate development teams that have so far been its most enthusiastic audience.
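
For anyone who hasn't met the xUnit style, the shape is identical from language to language; here's a minimal test-first sketch using Python's built-in unittest rather than PHPUnit itself, with a Cart class invented purely for illustration.

    """Minimal test-first sketch; PHPUnit follows the same xUnit shape.
    The Cart class is invented purely for illustration."""
    import unittest

    class Cart(object):
        """Written *after* the tests below, which is the point of the style."""
        def __init__(self):
            self._items = []

        def add(self, name, price):
            if price < 0:
                raise ValueError("price must be non-negative")
            self._items.append((name, price))

        def total(self):
            return sum(price for _name, price in self._items)

    class CartTest(unittest.TestCase):
        def test_total_sums_prices(self):
            cart = Cart()
            cart.add("widget", 3.50)
            cart.add("gadget", 1.25)
            self.assertEqual(cart.total(), 4.75)

        def test_negative_price_rejected(self):
            self.assertRaises(ValueError, Cart().add, "freebie", -1)

    if __name__ == "__main__":
        unittest.main()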

I have so far used this tool and technique on three customer projects: the first was delivered (admittedly barely) on time; the second was actually deliverable less than two-thirds of the way through the scheduled calendar time (allowing for further refactoring to improve performance) and was delivered on time; and the third was delivered 10% ahead of schedule, with no heroic kill-the-last-bug all-night sessions required.

Discussing the technique with other developers, regarding its use in PHP and other languages (such as Python, Ruby, C++ and of course Java, for which the seminal JUnit testing framework was written), gives the impression that this experience is by no means unique or extreme (nor did I expect it to be). Given that two of my three major career interests over the last couple of decades have been the rapid development of high-quality code and the advancement of practices and techniques that help our software-development craft evolve towards a true engineering discipline, this would seem a natural thing for me to get excited and evangelical about. (The third, in case you're wondering, is the pervasive use of open standards and non-proprietary technologies to help focus efforts on true innovation.)

All of this may seem a truly geeky thing to rave about, and to a certain degree I plead guilty. But it should also be important, or at least noteworthy, to anybody whose business or casual interests involve the use of software or of software-controlled artifacts like elevators and TiVo. By understanding a little about how process and quality interact, clients, customers and the general public can help prod the industry towards continuous improvement.

Because, after all, "blue screens" don't "just happen".

Tuesday, 18 October 2005

A Lever, or a toothpick?

Any modern development effort which is complex enough to be commercially and/or technically interesting requires active, continuous collaboration between professionals and craftsfolk of various disciplines and specialisations. For instance, most organisations developing computer software have, in addition to the designers and coders of the software itself, several other interested stakeholders: quality engineers, documentation authors and editors, sales and marketing specialists, and various flavours of managers. Each of these groups has different capabilities and roles with regard to the project being developed, different perspectives, and different needs; but one need all of them share, knowingly or not, is the ability to communicate effectively and efficiently with one another. This involves the creation or acquisition of information, and its refinement, analysis, discussion and use within the context of the project. The end goal, of course, is the completion and delivery of some sort of artifact that meets the needs of the organisation and delights that product's customers, without sending the development organisation on a death march in the process.

"But wait", you might reasonably say, "we already do this. We have meetings, minutes are taken, transcribed and emailed about, lots of other emails get sent back and forth, we have documents like functional specifications and design documents and whatnot to keep ourselves organised — what do we need all this gimcrackery for?" All of which is perfectly true, just as you can put harness and bit on your horse, hitch up a carriage, and travel from Kuala Lumpur to Singapore — or you could catch an airline flight instead. There are countless organisations, and a depressingly high proportion of smaller ones, who continue to solve earl-21st-century problems with early-20th-century tools and practices. We know better. One of the points that many leading authorities, such as Steve Maguire in his excellent book Debugging the Development Process, make is that for each hour of meetings which a knowledge worker attends, it takes at least another hour for him or her to regain the level of productivity in work product creation which would have been effected had the meeting not taken place. So, for a typical large-corporate developer who spends an hour every day in meetings that could have their purpose accomplished through less intrusive means, the company is taking a 25% hit in productivity for that individual. Take 1/4 of the payroll of, say, Maxis, or even my own company Cilix, and that starts to add up.