
Saturday, 14 July 2012

Getting Your Stuff Done, or Stuff Done To You

This is the response I wanted to leave to "MrChimei" on the spot-on YouTube video, "Steve Jobs Vs. Steve Ballmer". Since YouTube has such a tiny (but understandable) limit on comment size, a proper response would not fit. Therefore...


Let me put it this way. It doesn't matter whether you're speaking out of limited experience, or limited cognition, or what; your flippant attitude will not survive first contact with reality (to paraphrase von Moltke).

I'm a Windows developer who's been developing for Windows since Windows 1.0 was in early developer beta, on up to Windows 8. I had nearly ten years professional development experience on five platforms before I ever touched Windows. I had three stints at Microsoft back when that was cool, and sold most of my stock when it was still worth something.

I've also supported users of Windows and various operating systems, from groups of 3-5 small businesspeople on up to being comfortably high in the operational support pecking order in a Fortune 100 company. I've seen what helps and doesn't help intelligent non-geeks get their work done.

Both in that position, and in my own current work, I've observed and experienced order-of-magnitude-or-better differences in productivity, usability, reliability, supportability… all in Apple's favour. I've worked with and for people who became statistics junkies out of an emotional imperative to "prove" Windows better, in any way, than other systems. The next such individual I meet who succeeds, out of a sample of over 20 to date, will be the very first.

In 25 years, I have never experienced a Windows desktop machine that stayed up and fully functional for more than approximately 72 hours, *including* at Redmond, prior to a lightly-loaded Windows 7 system.

In the last 6 years of using Macs and clones half-time or better, I have never had a Mac fail to stay up and working for at least a week at a stretch. In the last five years, my notes show, I've had two occasions where a hard reset to the Mac I'm typing this on was necessary; both turned out to be hardware faults. Prior to Windows 7, any Windows PC that did not need to be hard-rebooted twice in a given fortnight was a rarity. Windows 7 stretched that out to 6 weeks, making it by far the most stable operating system Microsoft have shipped since Windows NT 3.51. (Which I will happily rave about at length to any who remember it.)

For many years, I too was a Windows bigot. The fact that Unix, then OS/2, then Mac OS had numerous benefits not available in Windows was completely beneath my attention threshold. The idea that (on average over a ten-year period) some 30% of my time seated at a Windows PC was devoted to something other than demonstrably useful or interesting activity was something that I, like the millions of others bombarded by Ziff-Davis and other Microsoft propaganda organs, took as the natural order of things.

Then I noticed that Mac users were having more fun. "Fine," I thought, "a toy should bring amusement above all." Then I noticed that they were getting more and better work done. "Well," I said to myself, "they're paying enough extra for it; they should get some return on their investment. I'm doing well enough as is."

And then, within the space of less than a year, all five of my Windows systems were damaged through outside attack. "Why?" I asked. "I've kept my antivirus current. I've installed anti-spyware and a personal firewall in addition to the (consumer-grade) router and firewall connecting me to the Internet. I don't browse pr0n or known-dodgy sites. I apply all security patches as soon as they're released. Why am I going to lose this development contract for lack of usable systems?"

I discovered a nasty little secret: it's technically impossible to fully protect a Windows PC from attacks, using tools that a reasonably-bright eight-year-old can master in a Saturday afternoon. People responsible for keeping Windows PCs running have known this for over a decade; it's why the more clueful ones talk about risk mitigation rather than prevention, with multi-layered recovery plans in place and tested rather than leaving all to chance. For as long as DSL and cable Internet connections have been available, it's taken less time to break into a new, "virgin" Windows PC than to fully patch and protect it against all currently-likely threats.

People used to think that using cocaine or smoking tobacco was healthy for you, too.

What I appreciate most about the Mac is that, no matter what, I can sit down in front of one and in a minute or less, be doing useful, interesting work. I don't have the instability of Windows. I don't have the sense that I'm using something that was designed for a completely different environment, as Windows too closely resembles the pre-network (let alone pre-Internet) use of isolated personal computers. Above all, I appreciate the consistency and usability that let me almost forget about the tools I'm using to work with my data, or with data out in the world somewhere, and concentrate instead on what I'm trying to accomplish.

One system treats its users as customers, whose time, efficiency and comfort are important and who know they have choices if they become dissatisfied. The other platform treats its users as inmates, who aren't going to leave no matter what... and if that's true, then quality sensibly takes a back seat to profitability.

Which would you recommend to your best friend? Or even to a respected enemy?

Tuesday, 4 January 2011

Windows? Check. Skype? Check. Quality? Oops…

Several cultures share variations of a proverb which reminds us that you cannot build a house on sand and have it endure the test of time — or even the next tide.

Skype learned that the hard way a couple of weeks ago. Like all peer-to-peer, or P2P, networks, Skype relies on (a varying subset of) its users' systems to handle routine network-housekeeping chores, including routing. Fine; that's the way many applications have done things for more than a decade. But Skype have been living unusually close to the edge in two respects: they have an unusually large, widely-distributed number of simultaneous users relative to other P2P networks, and they implicitly trust their Windows-based application, and the underlying Windows operating system, to perform reliably at whatever scale is needed. This is what sent the waves crashing in upon the sand-borne house.

Microsoft Windows is, inarguably, a wildly successful software system; along with Microsoft's Office set of applications, it is the basis of Microsoft's billions of dollars a year in revenue and profit. But just because something is successful or because it's made by a major corporation doesn't necessarily mean it's well-designed or -built. (See, for example, the Ford Pinto.)

By relying on the (always-questionable) stability and reliability of each of the Windows PCs acting as a "supernode" in its network architecture, Skype was setting itself on a path to an eventual, inevitable, entirely foreseeable large-scale failure.

Many of the well-known problems with keeping Windows stable and running properly have to do with the ridiculous ease of developing, distributing and using malware; in some countries (such as Singapore), estimated infection rates of Internet-connected Windows PCs routinely hit or exceed 90%. Besides the obvious threat to any "secure" information on those systems (credit card info, login IDs, and so on), those infected Windows systems have a relatively large share of their processing power diverted for the benefit of someone other than the legitimate user or of applications other than those that s/he has authorised.

Another cause of instability and other problems, as with the Skype outage, is that it is very difficult to develop, deploy and maintain even moderately complex applications under Windows with any real means of asserting that the software is reliable and robust. The only way to gain any such real assurance is by testing as many hardware and software combinations with your software as you can, and hoping that you've found all the "most likely" problems. (This is why Microsoft and others employ thousands of software testers, in addition to a massive automated-testing infrastructure.) And, though Microsoft have made some relatively amazing improvements in this area (compare, say, Windows 98 to Windows 7), even they can't catch everything — and their development process and system architecture essentially require them to. Every Windows usee in the last 25 years has seen the "Blue Screen of Death;" that is merely an acknowledgement by Windows that it's completely lost its mind, usually after innumerable random acts of lesser senility.

Skype were bitten by both ends of this issue: a new release of their client application for Windows had defects that were not caught during development or internal testing, and by deploying that defective application on an operating system that provides little-to-no protection against maliciously or inadvertently defective applications, a large "time bomb" started ticking away, unnoticed by all until its spectacular detonation on or around 22 December 2010.

For a Skype client to be a "supernode," it cannot be behind a "firewall" that uses network address translation to isolate its clients from the big, bad Net. (There are ways around that, too, of course; but they're generally very black-hat techniques that don't scale well on a commercial basis.) A client does not have to be running on Windows to be a supernode, but given the demographics of Skype's user base, and the larger PC user base in general, the majority are.

And so, when Skype's defective Windows client ran into mortal difficulties on Windows' defective platform, the ordinary user PCs that were acting as supernodes started dropping like flies. Other Skype clients, as with any proper P2P network, simply rerouted to other supernodes. However, since many (if not most) of the Windows clients were behind NAT firewalls, and only a relatively small percentage of non-Windows clients were (and are) available to serve as supernodes, the number of "leaf node" clients chasing the decreasing number of supernodes put the remaining systems under an unsustainable load, and so those Skype clients (with their supernode capabilities) dropped off the network in turn.
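The dynamics are easy to see in a toy model. The sketch below is illustrative only — the node counts and capacities are numbers I've invented, not Skype's real figures — but it shows why rerouting alone cannot save a P2P overlay once the surviving supernodes are pushed past capacity:

    <?php
    // Toy model of a supernode cascade. All numbers are invented for
    // illustration; this is not Skype's actual architecture or data.
    function cascade($supernodes, $leafClients, $capacityPerNode)
    {
        while ($supernodes > 0) {
            $loadPerNode = $leafClients / $supernodes;
            if ($loadPerNode <= $capacityPerNode) {
                return $supernodes;      // load is sustainable; network stabilises
            }
            // Overloaded supernodes drop out; their leaf clients re-route
            // to the survivors, raising the per-node load still further.
            $supernodes = (int) floor($supernodes * 0.8);
        }
        return 0;                        // total collapse
    }

    $healthy = cascade(100000, 20000000, 220); // 200 leaves per node: fine
    $crashed = cascade( 75000, 20000000, 220); // the client bug kills 25% of
                                               // supernodes: 267 leaves per node
    printf("before the bug: %d supernodes survive\n", $healthy);
    printf("after the bug:  %d supernodes survive\n", $crashed);

Once the initial failures push per-node load past capacity, every dropout makes the survivors' load worse, not better — exactly the spiral described above.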

Had there been a more heterogeneous mix of Skype clients, so that those available to perform supernode duties were not almost exclusively Windows PCs, the failure might well have been mitigated if not eliminated altogether. Or, were it practical to develop a (reasonably) guaranteed-stable application on Windows, the defect that initiated the failure cascade might never have occurred. But by "putting all their eggs in one basket" which was known to have large, gaping holes in it, Skype were guaranteeing that their metaphorical sidewalk would eventually be covered hip-deep in broken eggs.

Monocultures in technology, as in farming, are generally Bad Things; a threat which can attack any member of a monoculture can attack all such members. (Think of the Windows malware-infection rates as an example of this.) Single points of commonality (or of control) are always eventual single points of failure.

The Skype failure could well have been a reminder of these long-known basic truths. The commercial interests involved, however, ensure that such a lesson will go unheeded. It will therefore be repeated; very possibly not with Skype, but with some other aspect of our grossly defective information-technology ecosystem. Think of it as "Unsafe at Any Speed" meeting Groundhog Day until something is done to prevent yet another China Syndrome.

Monday, 17 May 2010

Those were the days, my friend...but they're OVER.

No, I'm not talking about Facebook and the idea that people should have some control over how their own information makes money for people they never imagined. That's another post. This is a shout out to the software folk out there, and the wannabe software folk, who wonder why nobody outside their own self-selecting circles seems to get excited about software anymore.

Back in the early days of personal computers, from 1977 to roughly 2000, hardware capabilities imposed a real upper limit on software performance and capabilities. More and more people bought more and more new computers so they could do more and more stuff that simply wasn't practical on the earlier models. Likewise, whenever any software package managed something revolutionary, or did something in a new way that goosed performance beyond what any competitor could deliver, that got attention...until the next generation of hardware meant that even middle-of-the-road software was "better" (by whatever measure) than the best of what could happen before.

After about 2000, the speed curve that the CPUs had been climbing flattened out quite a lot. CPU vendors like Intel and AMD changed the competition to more efficient designs and multi-core processors. Operating systems – Linux, Mac OS X and even Microsoft Windows – found ways to at least begin to take advantage of the new architectures by farming tasks out to different cores. Development tools went through changes as well; new or "rediscovered" languages like Python and Forth that could handle multiprocessing (usually by multithreading) or parallel processing became popular, and "legacy" languages like C++ and even COBOL now have multithreading libraries and/or extensions.

With a new (or at least revamped) set of tools in the toolbox, we software developers were ready to go forth and deliver great new multi-everything applications. Except, as we found out, there was one stubborn problem.

Most people (and firms) involved in software application development don't have the foggiest idea how to do multithreading efficiently, and even fewer can really wrap their heads around parallel processing. So, to draw an analogy, we have these nice, shiny new Porsche 997 GT3s being driven all over, but very few people know how to get the vehicle out of first gear.

I started thinking about this earlier this evening, as I read an article on Lloyd Chambers' Mac Performance Guide for Digital Photographers & Performance Addicts (pointed to in turn by Jason O'Grady and David Morgenstern's Speedtest article on ZDNet's Apple blog). One bit in Chambers' article particularly caught my eye:

With Photoshop CS4/CS5, there is also increased overhead with 8 cores due to CS5 implementation weakness; it’s just not very smart about knowing how many threads are useful, so it wastes time and memory allocating too many threads for tasks that won’t even benefit from them.

In case you've been stuck in Microsoft Office for Windows for the last 15 years, I'll translate for you: the crown-jewels application from Adobe, one of the oldest and largest non-mainframe software companies in history, has an architecture on its largest-selling target hardware platform that is either too primitive or too defective to make efficient use of that platform. This is in a newly released version of a very mature product, one that probably drives Mac sales in much the same proportion that Microsoft Office justifies Windows PCs (and Office is even better on the Mac, by the way). Adobe are also infamous among Mac users and developers for dragging their feet horribly in supporting current technologies; CS5 is the first version of Adobe Creative Suite that even begins to use OS X's native frameworks – and OS X has been around since 2001.
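The kind of "how many threads are useful" decision Chambers is talking about is not exotic, either. Here's a back-of-the-envelope sketch (in PHP purely for continuity with this blog; the thresholds and names are mine, not Adobe's) of the check CS5 apparently skips:

    <?php
    // Invented heuristic for illustration: the point is that the decision
    // is cheap to make and expensive to skip.
    function usefulWorkerCount($workUnits, $cores, $minUnitsPerWorker)
    {
        if ($workUnits < 2 * $minUnitsPerWorker) {
            return 1;   // job too small to repay even one thread start-up
        }
        // Never more workers than cores, never workers with trivial slices.
        return min($cores, (int) floor($workUnits / $minUnitsPerWorker));
    }

    echo usefulWorkerCount(1000, 8, 50), "\n"; // 8 – big job, use every core
    echo usefulWorkerCount(120, 8, 50), "\n";  // 2 – more threads = pure overhead
    echo usefulWorkerCount(60, 8, 50), "\n";   // 1 – threading would only cost

Allocating workers without some such cap is how you end up "wast[ing] time and memory allocating too many threads for tasks that won't even benefit from them."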

I've known some of the guys developing for Adobe, and they're not stupid...even if they believe that their management could give codinghorror.com enough material to take it "all the way to CS50." But I don't recall that even Dilbert's Pointy-Haired Boss ever actually told his team to code stupid, even if he did resort to bribery (the PHB announces he'll pay $10 for every bug fix; Wally: "Hooray! I'm gonna code me a minivan!"...i.e., several thousand [fixed] bugs).

No, Adobe folk aren't stupid, per se; they're just held back by the same forces that hold back too many in our craft...chiefly inertia and consistently being pushed to cut corners. Even (especially?) if you fire the existing team and ship the work off halfway around the world to a bunch of cheaper guys, that's a non-solution to only part of the problem. Consider, as a counterexample, medicine; in particular, open-heart surgery. Let's say that Dr. X develops a refinement to standard technique that dramatically improves success rates and survival lifetimes for heart patients he operates on. He, or a colleague under his direct instruction, will write up a paper or six that get published in professional journals – that nearly everybody connected with his profession reads. More research gets done that confirms the efficacy of his technique, and within a fairly short period of time, that knowledge becomes widespread among heart surgeons. Not (only) because the new kids coming up learned it in school, but because the experienced professionals got trained up as part of their continuing education, required to keep their certification and so on.

What does software development have in common with that – or in fact, with any profession? To qualify on the level of the law, accounting, medicine, the military, and so on, a profession is generally seen as comprising:

  • Individuals who maintain high standards of ethics and practice;

  • who collectively act as gatekeepers by disciplining or disqualifying individuals who fail to maintain those standards;

  • who have been found through impartial examination to hold sufficient mastery of a defined body of knowledge common to all qualified practitioners, including recent advances;

  • that mastery being retained and renewed by regular continuing education of at least a minimum duration and scope throughout each segment of their careers;

  • the cost of that continuing education being knowingly subsidised, directly or indirectly, by the professional's clients and/or employer;

  • such that these professionals are recognised by the public at large as being exclusively capable of performing their duties with an assuredly high level of competence, diligence and care;

  • and, in exchange for that exclusivity, give solemn, binding assurances that their professional activities will not in any way contravene the public good.

Whether every doctor (or accountant, or military officer, or lawyer) actually functions at that level at all times is less of a point than that that is what is expected and required; what defines them as being "a breed apart" from the casual hobbyist. For instance, in the US and most First World countries, an electrician or gas-fitter is required to be properly educated and licensed. This didn't happen just because some "activists" decided it was a good idea; it was a direct response to events like the New London School explosion. By contrast, in many other countries (such as Singapore, where I live now), there is no such requirement; it seems that anybody can go buy himself the equipment, buy a business license (so that the government is reassured they'll get their taxes), and he can start plastering stickers all over town with his name, cell phone number and "Electrical Repair" or whatever, and wait for customers to call.

Bringing this back to software, my point is this: there is currently no method to ensure that new knowledge is reliably disseminated among practitioners of what is now the craft of software development. And while true professionals have the legal right to refuse to do something in an unsafe or unethical manner (a civil engineer, for example, cannot sign off on a design that he has reason to believe would endanger public users/bystanders, and without properly-executed declarations by such an engineer the project may not legally be built), software developers have no such right. Even as software has a greater role in people's lives from one day to the next, the process by which that software is designed, developed and maintained remains just as ad hoc as the guy walking around Tampines stuffing advertisements for his brother's aircon-repair service under every door. (We get an amazing amount of such litter here.)

Adobe may eventually get Creative Suite fixed. Perhaps CS10, or even CS7, will deliver twice the performance on an 8-core processor that it delivers on a 4-core one. But they won't do it unless they can cost-justify the improvement, and the costs are going to be much higher than they would be in a world where efficient, modern software-development techniques were a professionally-enforced norm.

That world is not the one we live in, at least not today. My recurring fear is that it will take at least one New London fire with loss of life, or a software-induced BP Deepwater Horizon-class environmental catastrophe, for that to start to happen. And, as always, the first (several) iterations won't get it right – they'll be driven by politics and public opinion, two sources of unshakable opinion which have been proven time and again to be hostile to professional realities.

We need a better way. Now, if not sooner.

Tuesday, 2 February 2010

I thought OSS was supposed to break down walls, not build them.

Silly me. I've only been using, evangelizing and otherwise involved in open source software for 15 or 20 years, so what do I know?

In reaction to the latest feature article in DistroWatch Weekly, I'm angry. I'm sad. Most of all, I feel sorry for those losers who would rather keep their pristine little world free of outside involvement, especially when that involvement comes with the express intent of making it easy for non-übergeeks to use open source software — in this case, OpenBSD. OpenBSD, like its cousins FreeBSD and NetBSD, is one of the (current) "major" base versions of BSD Unix. While the latter two have had numerous other projects take their software as a base, fewer have branched from "Open" BSD, as can be seen in this comparison on Wikipedia. Recently, two OpenBSD enthusiasts attempted to address this, only to be flamed mercilessly for their trouble.

The DistroWatch feature previously mentioned concerns GNOBSD, a project created by Stefan Rinkes, whose goal plainly was to make this highly stable and secure operating system, with a lineage that long predates Linux, accessible to a wider audience (one that doesn't already use Mac OS X, which is itself based, indirectly, on both FreeBSD and NetBSD).

For his troubles, Mr. Rinkes was the subject of repeated, extreme, egregiously elitist flaming, this thread being but one example. He was eventually forced to withdraw public access to the GNOBSD disc image, adding a post stating that he did not "want to be an enemy of the OpenBSD Project."

Reading along the thread on the openbsd-misc mailing list brings one to a post by one "FRLinux", linking to another screaming flamewar summarized here, with the most directly relevant thread starting with a message titled "ComixWall terminated". ComixWall's developer, another hard worker with the intent of creating an OpenBSD-based operating system that was easy for "ordinary people" to use, had promptly incurred the wrath of the leader of the OpenBSD effort, Theo de Raadt, with numerous other "worthies" piling on.

WTF?!?

OK, I get it. The OpenBSD folks don't want anyone playing in their sandbox, providing access to the OpenBSD software (itself supposedly completely open source, free software) in any way that might compete with "official" OpenBSD downloads or DVD/CD sales. They especially want to keep the Great Unwashed Masses™ — i.e., anyone other than self-professed, officially-blessed übergeeks — from playing with "their" toys.

OK, fine. It has been a large part of my business and passion to evangelize open source and otherwise free software as an enabler for the use of open data standards by individuals and businesses. I have supported and/or led the migration, in whole or in part, of several dozen small to midsize businesses from a completely closed, proprietary software stack (usually Microsoft Windows and Microsoft Office for Windows) to a mix of other (open and proprietary) operating systems and applications. OpenBSD has historically been high on my list of candidates for clients' server OS needs; if you can manage a Windows Server cluster with acceptable performance and stability, your life will be easier managing almost anything else — provided that you're not locked in to proprietary Windows-only applications and services. In these days of "cloud" computing and "software as a service", the arguments against Ye Olde Standarde are becoming much more compelling.

I just won't be making those arguments using OpenBSD anymore. And to my two clients who've expressed interest in BSD on the desktop for their "power" developers over the next couple of years, I'll be pitching other solutions to them...and explaining precisely why.

Because, out here in the real world, people don't like dealing with self-indulgent assholes. That's a lesson that took this recovering geek (in the vein of "recovering alcoholic") far too many years to learn. And people especially don't like trusting their business or their personal data to such people if they have any choice at all.

And, last I checked, that was the whole point of open standards, open source, and free software: giving people choices that allow them to exercise whatever degree of control they wish over their computing activities. I'd really hate to think that I've spent roughly half my adult life chasing a myth. Too many people have put too much hard work into far too many projects to let things like this get in our way.

Sunday, 17 January 2010

Fixing A Tool That's Supposed To Help Me Fix My Code; or, Shaving a Small Yak

Well, that's a couple of hours I'd really like to have back.

One of the oldest, simplest, most generally effective debugging tools is some form of logging. Writing output to a file, to a database, or to the console gives the developer a window into the workings of code, as it's being executed.

The long-time "gold standard" for such tools has been the Apache Foundation's log4j package, which targets Java. As with many tools (e.g., PHPUnit), a good Java tool was ported to, or reimplemented in, PHP. log4php uses the same configuration files and (as much as possible) the same APIs as log4j. However, as PHP and Java are (significantly) different languages, liberties had to be taken in various places. Add to this the fact that PHP has been undergoing significant change in the last couple of years (moving from version 5.2 to the significantly different 5.3 as we wait for the overdue, even more different, 6.0), and a famous warning comes to mind when attempting to use the tool.

Here be dragons.

The devil, the saying goes, is in the details, and several details of log4php are broken in various ways. Of course, not all the breakage gives immediately useful information on how to repair it.

Take, as an example, the helper class method LoggerPatternConverter::spacePad(), reproduced here:

    /**
     * Fast space padding method.
     *
     * @param string    $sbuf      string buffer
     * @param integer   $length    pad length
     *
     * @todo reimplement using PHP string functions
     */
    public function spacePad($sbuf, $length) {
        while($length >= 32) {
          $sbuf .= $GLOBALS['log4php.LoggerPatternConverter.spaces'][5];
          $length -= 32;
        }
        
        for($i = 4; $i >= 0; $i--) {    
            if(($length & (1<<$i)) != 0) {
                $sbuf .= $GLOBALS['log4php.LoggerPatternConverter.spaces'][$i];
            }
        }

        // $sbuf = str_pad($sbuf, $length);
    }

Several serious issues are obvious here, the most egregious of which is acknowledged in the @todo note: "reimplement using PHP string functions." The $GLOBALS item being referenced is initialized at the top of the source file:

$GLOBALS['log4php.LoggerPatternConverter.spaces'] = array(
    " ", // 1 space
    "  ", // 2 spaces
    "    ", // 4 spaces
    "        ", // 8 spaces
    "                ", // 16 spaces
    "                                " ); // 32 spaces

If you feel yourself wanting to vomit upon and/or punch out some spectacularly clueless coder, you have my sympathies.

The crux of the problem is that the function contains seriously invalid code, at least as of PHP 5.3. Worse, the error messages that are given when the bug is exercised are extremely opaque, and a Google search produces absolutely no useful information.

The key, as usual, is in the stack trace emitted after PHP hits a fatal error.

To make a long story short(er), the global can be completely eliminated (it's no longer legal anyway), and the code can be refactored so:

    public function spacePad(&$sbuf, $length) {
        // $sbuf is now passed by reference, so the padding is actually
        // visible to the caller (the original by-value version wasn't).
        $sbuf .= str_repeat( " ", $length );
    }

Of course, this makes the entire method redundant; the built-in str_repeat() function should be called wherever the method is called.
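One footnote on the commented-out str_pad() line in the original: it was never a drop-in replacement anyway, because str_pad() pads a string out to a given total length, while this method's contract is to append $length spaces. A quick demonstration:

    <?php
    $sbuf = "abc";

    // str_pad() pads to a TOTAL length of 8...
    var_dump(str_pad($sbuf, 8));             // string(8) "abc     "

    // ...whereas spacePad()'s contract appends 8 more spaces:
    var_dump($sbuf . str_repeat(" ", 8));    // string(11) "abc        "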

I'll make that change and submit a patch upstream tomorrow... errr, later today; it's coming up on 1 AM here now.

But at least now I can use log4php to help me trace through the code I was trying to fix when this whole thing blew up in my face.
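For anyone who hasn't used it, the payoff looks roughly like this — a minimal sketch against the log4php 2.x API, with the configuration file name and logger names being placeholders of mine:

    <?php
    require_once 'log4php/Logger.php';

    // As with log4j, configuration lives in an external file, so logging
    // detail can be turned up or down without touching the code under test.
    Logger::configure('my-log4php-config.xml');   // placeholder file name

    $log = Logger::getLogger('myapp.import');     // placeholder logger name
    $log->debug('entering importRecords()');
    $log->warn('row 312: date field empty, using today');
    $log->error('row 313: unparseable, skipping');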

At least it's open source under a free license.

Monday, 21 December 2009

When Standing on the Shoulders of Giants, Don't Trip

I've been doing a lot of software installation lately, on Mac OS X and various BSDs and Linuxes. Doing so reminds me of one of the major banes of my life when developing for Windows. If you're a Windows usee, you're acutely familiar with the concept.

DLL Hell.

On Windows, as you install a dozen apps, each of them will try to install its own versions of many system-wide (or vendor-wide) libraries. The poster child for these is msvcrt.dll, the Microsoft C Runtime Library, upon which virtually everything in or running under Windows depends in some fashion. Many unhappy man-millennia have been spent by admins and PC owners the world over trying to resolve compatibility issues, usually introduced when a newly-installed app overwrites a more recent version of a library with the earlier version that the app shipped with. Other things — including other libraries the app may depend on, or even Windows itself — may break, since features and bug fixes they rely on (present in later versions of the library) are suddenly gone.

Why think about this, when I'm as likely to write (or even use) any Windows software in the next month as I am to win the local lottery without buying a ticket?

Because one of the things I added to my new MacBook Pro was the MacPorts software, which is a Mac analogue to the BSD ports collection. The Linux equivalent to this is (more or less) your package manager, whether it be aptitude or yum or portage or whatever. Windows has no equivalent concept; every app you install is its own universe, or thinks it is (with the "benefits" noted earlier), and there is no central one-stop "update everything" capability.

Modern operating systems may not have DLL hell, but they do tend to have numerous different versions of different libraries and support frameworks installed. Since those can be accessed by apps as either "give me the most recent version" or "give me version x.y.z", no equivalent to DLL hell takes place. But it does tend to eat up disk space. And it makes it harder to mentally keep track of what's installed; installing the MacPorts edition of Bluefish, an HTML editor, installs some eighty other "ports" — direct or indirect dependencies of Bluefish (which itself is primarily a Linux app). Some of these dependencies were updated as recently as last month; a few are several years old. But MacPorts determines which packages are needed by Bluefish (and its dependencies, and...) and installs the latest version (by default) of those dependencies, or the specific version asked for. Thus, several files with the same basic name, but different version numbers encoded into the filename, can coexist peacefully in virtually any modern OS.
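That resolution rule — "latest, unless a specific version is pinned" — is simple enough to sketch. This is a toy model, not MacPorts' actual resolver (though PHP's real version_compare() does the ordering, and the library name and versions are invented):

    <?php
    // Toy "latest unless pinned" lookup; not MacPorts' actual resolver.
    // $installed maps a library name to every version present side by side.
    function resolveVersion(array $installed, $lib, $pinned = null)
    {
        if (empty($installed[$lib])) {
            return null;                          // not installed at all
        }
        $versions = $installed[$lib];
        if ($pinned !== null) {
            // "give me version x.y.z" - exact match or nothing
            return in_array($pinned, $versions, true) ? $pinned : null;
        }
        usort($versions, 'version_compare');      // ascending version order
        return end($versions);                    // "most recent version"
    }

    $installed = array('libpng' => array('1.2.40', '1.4.0', '1.2.38'));
    echo resolveVersion($installed, 'libpng'), "\n";           // 1.4.0
    echo resolveVersion($installed, 'libpng', '1.2.40'), "\n"; // 1.2.40

Because every caller states which of the two questions it is asking, the side-by-side versions never tread on each other — which is precisely what the DLL-hell model got wrong.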

Recent Microsoft system software, particularly the "managed" software using the .NET Framework, avoids most of the cavalier instances of DLL hell, but introduces other quirks and brittleness in compensation.

But I'm still a bit unnerved by a software package — any package — that has 80+ designated dependencies. I'm certainly thankful that I don't have to manage that dependency matrix myself.

If modern software development is "standing on the shoulders of giants", to appropriate a Newton paraphrase, then careless dependency introduction would be the functional equivalent of drunken tap-dancing: almost impossible not to trip.

Tuesday, 24 June 2008

Browser Support: Why "Internet Explorer 6" Really Is A Typo

(Experienced Web developers know that the correct name for the program is Microsoft Internet Exploder — especially for version 6.)

Case in point: I was browsing the daringfireball.net RSS feed and came across an article on the 37signals blog talking about Apple's new MobileMe service dropping support for IE6. The blog is mostly geared towards 37signals' current and potential clients who, if not Web developers themselves, are at least familiar with the major technical issues involved. Not surprisingly, virtually every one of the 65 comments left between 9 and 13 June was enthusiastic in support of the move; not because the commenters necessarily favor Apple (though, clearly, many do), but because anybody who's ever cared about Web standards knows that IE6 is an antediluvian, defiantly defective middle finger thrust violently up the nostril of the Web development community; the technological equivalent of the Chevrolet Corvair: unsafe at any speed.

The degree to which this is true, and to which this truth continues to plague the Web developer and user communities, was brought into sharp focus by three of the comments on the post. The first, from 37signals' Jason Fried, estimates that 38% of their total traffic is IE, of which 31% is IE 6.0 (giving a grand total of 11.8% of total traffic — not huge, but significant). The second is from Josh Nichols, who points out that Microsoft published a patch to solve the problem with IE6 in January 2007; he notes, however, that an unknowable number of users may not have applied that patch. Finally, Michael Geary points out that later versions of Internet Explorer (version 7 and possibly the now-in-beta version 8) also have the problem of not being able to "set cookies for a 2×2 domain, i.e. a two-letter second level domain in a two-letter top level domain with no further subdomain below that" (including his own mg.to blog domain).

The fact that relatively few domains fall into that category can be argued to be part of the problem; users running IE, particularly an out-of-date version of IE, are likely to be less experienced, and less able to recognize and solve the problem correctly than to blame it on "something wrong with the Internet". For those people and companies who've paid for those perfectly legitimate domains, the negligence and/or incompetence of the browser supplier and/or user means that they're not getting their money's worth.

And ICANN, the bureaucracy "managing" the domain-name system, is now "fast-tracking" a proposal to increase the number of top-level domain names (TLDs) in use. (In time-honored ICANN custom, the press release is dated 22 June 2008 and "welcome[s]" "Public Comments" "by 23 June 2008." Nothing like transparency and responsiveness in governance, eh?)
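Coming back to Geary's comment for a moment: the bug pattern he describes smells like an over-eager public-suffix heuristic. Purely as a guess at the shape of the logic — this is emphatically not IE's actual code — something like the following would produce exactly that behaviour:

    <?php
    // A naive cookie-domain guard (hypothetical; NOT IE's real implementation).
    // Intent: refuse cookies scoped to a whole registry suffix like "co.uk".
    // Side effect: also refuses perfectly legitimate 2x2 domains like "mg.to".
    function cookieDomainRejected($domain)
    {
        $labels = explode('.', $domain);
        return count($labels) == 2
            && strlen($labels[0]) == 2
            && strlen($labels[1]) == 2;  // "two letters dot two letters" = suspect
    }

    var_dump(cookieDomainRejected('co.uk'));      // true  – the intended catch
    var_dump(cookieDomainRejected('mg.to'));      // true  – the collateral damage
    var_dump(cookieDomainRejected('blog.mg.to')); // false – subdomains are fine

The intended catch and the collateral damage are indistinguishable to a rule that crude — which is why the fix is a real public-suffix list, not a pattern match.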