Showing posts with label free software. Show all posts

Tuesday, 14 September 2010

Saving Effort and Time is Hard Work

As both my regular readers well know, I've been using a couple of Macs[1] as my main systems for some time now. As many (if not most, these days) Web developers do, I run different systems (Windows and a raft of Linuxes) using VMware Fusion so that I can do various testing activities.

Many Linux distributions come with some form of automation support for installation and updates[2]. Several of these use the (generally excellent) Red Hat Anaconda installer and its automation scheme, Kickstart. Red Hat and the user community offer copious, free documentation to help the new-to-automation system administrator get started.

If you're doing a relatively plain-vanilla installation, this is trivially easy. After installing a new Fedora system (image), for example, there is a file named anaconda-ks.cfg in the root user's home directory, which can be used to either replay the just-completed installation or as a basis for further customisation. To reuse, save the file to a USB key or to a suitable location on your local network, and you can replay the installation at will.
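To give a flavour of what such a file contains, here is a minimal Kickstart sketch. The locale, partitioning and package choices are illustrative assumptions, not a tested profile; your generated anaconda-ks.cfg will differ:

```
# Minimal Fedora Kickstart sketch -- values are examples only.
install
lang en_GB.UTF-8
keyboard us
timezone Asia/Singapore
rootpw --plaintext changeme
authconfig --enableshadow --enablemd5
bootloader --location=mbr
clearpart --all --initlabel
autopart
reboot

%packages
@base
@development-tools
vim-enhanced
%end
```

Point the installer at a file like this (on a USB key or over the network) and the whole installation runs unattended.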

Customising the installation further, naturally, takes significant additional effort — almost as "significant" as the effort required to do the same level of customisation manually during installation. The saving grace, of course, is that this only needs to be done once for a given version of a given distribution. Some relatively minor tinkering will be needed to move from one version to another (say, Fedora 13 to 14), and an unknowable-in-advance amount of effort needed to adapt the Kickstart configuration to a new, different distribution (such as Mandriva), since packages on offer as well as package names themselves can vary between distros[3].

It's almost enough to make me pull both remaining hairs out. For several years, I have had a manual installation procedure for Fedora documented on my internal Wiki. That process, however, leaves something to be desired, mainly because it is an intermixture of manual steps and small shell scripts that install and configure various bits and pieces. Having a fire-and-forget system like Kickstart (one that could then be adapted to other distributions as well) is an extremely seductive idea.

It doesn't help that the Kickstart Configurator on Fedora 13, which provides a (relatively) easy-to-use GUI for configuring and specifying the contents of a Kickstart configuration file, works inconsistently. Using the default GNOME desktop environment, one of my two Fedora VMs fails to display the application menu, which is used for tasks such as loading and saving configuration files. Under the alternate (for Fedora) KDE desktop, the menu appears and functions correctly.

One of the things I might get around to eventually is to write an alternative automated-installation-configuration utility. Being able to install a common set of packages across RPM-based (Red Hat-style) Linuxes such as Fedora, Mandriva and CentOS as well as Debian and its derivatives (like Ubuntu), and maybe one or two others besides, would be a Very Handy Thing to Have™.
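The core of such a utility would be a table mapping generic package names to their distro-specific equivalents, plus knowledge of each family's native install tool. A minimal sketch in Python — the mapping data and function names here are my own illustrative assumptions, not part of any existing tool:

```python
# Sketch of a cross-distro package-name resolver. The mapping data is
# illustrative; a real tool would load it from a maintained data file
# per distro and version.

PACKAGE_MAP = {
    "apache-httpd": {"fedora": "httpd", "debian": "apache2", "mandriva": "apache"},
    "vim": {"fedora": "vim-enhanced", "debian": "vim", "mandriva": "vim-enhanced"},
}

# Native install commands per distro family (era-appropriate tools).
INSTALL_TOOLS = {
    "fedora": "yum install -y",
    "debian": "apt-get install -y",
    "mandriva": "urpmi",
}

def resolve(generic_name, distro):
    """Return the distro-specific package name, or None if unmapped."""
    return PACKAGE_MAP.get(generic_name, {}).get(distro)

def install_command(distro, packages):
    """Build the native install command line for a distro family."""
    names = [resolve(p, distro) for p in packages]
    missing = [p for p, n in zip(packages, names) if n is None]
    if missing:
        raise KeyError("no mapping for %s on %s" % (missing, distro))
    return "%s %s" % (INSTALL_TOOLS[distro], " ".join(names))
```

The hard (and tedious) part, of course, is not this logic but curating the mapping data across distros and releases — exactly the once-per-version tinkering described above.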

That is, once I scavenge enough pet-project time to do it, of course. For now, it's back to the nuances of Kickstart.

Footnotes:

1. An iMac and a MacBook Pro.

2. Microsoft offer this for Windows, of course, but they only support Vista SP1, Windows 7 and Windows Server 2008. No XP. Oops.

3. Jargon: the term distro is commonly used by Linux professionals and enthusiasts as an abbreviation for "distribution", a collection of GNU and other tools and applications built on top of the Linux kernel.

Sunday, 15 August 2010

Where Do We "Go" From Here? D2 Or Not D2, That the Question Is

Unless you've been working in an organization whose mission is to supply C++ language tools (and perhaps particularly if you have, come to think of it), you can't help but have noticed that a lot of people who value productive use of time over esoteric language theory have come to the conclusion that the C++ language is hopelessly broken.

Sure, you can do anything in C++... or at least in C++ as specified. How you do it in any particular C++ implementation varies. That's a large part of the justification for libraries like the (excellent) Boost: to tame the wilderness and present a reasonable set of tools that will (almost certainly) work with any (serious) C++ compiler in the wild.

But, as several language professionals have pointed out, very, very few (if any) people know all there is to know about C++; teams using C++ for projects agree (explicitly or otherwise) on what features will and won't be used. Sometimes, as in the case of many FLOSS projects, this is because compiler support has historically been inconsistent across platforms; more generally, it's to keep the codebase within the reasonable capabilities of whatever teammates poke at it over the life of the project.

I've been using C++ for 20+ years now, and I still learn something — and find I've forgotten something — about C++ on every project I embark on. Talking with other, often very senior, developers indicates that this is by no means an isolated experience. Thus, the quest for other language(s) that can be used in all or most of the numerous areas in which C++ code has been written.

If you're on Windows or the Mac, and happy to stay there, this doesn't much affect you beyond "there but for the grace of $DEITY go I." Microsoft folks have C#, designed by Anders Hejlsberg, who previously designed and led development of the sublime Delphi language. C# essentially takes much of its object model from Delphi and does its best to avoid the various land mines encountered by Java and particularly C++, while remaining quite learnable and conceptually familiar to refugees from those languages.

On the Mac, Objective-C is the language of choice, providing commendably clean, understandable object-oriented wrapping around the nearly-universal C language (which is still fully usable). Objective-C is the reason most Mac-and-other-platform developers cite for their high productivity writing Mac code, along with the excellent Mac OS X system libraries. Objective-C is supported on other platforms, but the community of users on non-Apple or -NeXT systems is relatively small.

Google created the Go language and first made it public in late 2009. The initial design was led by Robert Griesemer, Rob Pike and Ken Thompson. Pike and Thompson are (or should be) very well-known to C and Unix developers; they were among the pioneers. Robert Griesemer, currently with Google (obviously), was apparently previously best known in the Oracle data-warehousing community, but a quick search turns up a few interesting-sounding language-related hits prior to Go.

Go is described as "simple, fast, concurrent, safe, fun" and open source. It doesn't try to be all possible things to all possible users; it has a relatively small, conceptually achievable mission, and leaves everything else to either The Future™ or some to-be-promulgated extension system.

Go is small, young, has a corporate sponsor acting as benevolent dictator with an increasingly active community around the language and, if it doesn't fall into the pits of internal dissension or corporate abandonment (as, say, OpenSolaris has), looks set to keep that community happily coding away for some years to come. Given a choice for a greenfield project between C and Go, I can see many possibilities where I'd choose Go. I can see far fewer similar instances where I'd choose C++ over either Go or D.

D is another, rather different, successor language to C++ (which, as its name states, considers itself an improved or "incremented" C). Where Go is starting small and building out as a community effort, D was designed by Walter Bright, another C-language luminary; he created the first native compiler for C++ (i.e., the first compiler not producing intermediate C code to be compiled "again"). Unlike Go, D's reference implementation is a revenue-generating commercial product. Unlike Go, D has an extremely rich language and set of core libraries; one can argue that these features make it closer to C++ than to the other languages discussed here. Yet it is that richness, combined with an understanding of the two major competing implementations, that makes D attractive to C++ and Java refugees; they can implement the same types of programs as before, in a language that is cleaner, better-defined and more understandable than C++ can ever be.

D2, or the second major specification of the D language, appears to have wider community acceptance than D1 did, and we confidently expect multiple solid implementations (including GCC and hopefully LLVM)... if the larger community gets its house in order. I wrote a comment on Leandro Lucarella's blog where I said that "Go could be the kick in the pants that the D people need to finally get their thumb out and start doing things 'right' (from an outside-Digital-Mars perspective)."

Competition is generally a Good Thing™ for the customers/users of the competing ideas or products. It's arguably had a spotty record in the personal-computing industry (85%+ market share is not a healthy sign), but the successor-to-C++ competition looks to be entering a new and more active phase... from which we can all hope to benefit.

Saturday, 20 February 2010

Companies Lose their Minds, Then their Partners, Then their Customers, Then...

Following is a comment which I posted to Jason Kincaid's article on TechCrunch, "Why Apple's New Ban Against Sexy Apps is Scary". I don't know why Apple seem to be deliberately shooting themselves in so many ways on the iPhone recently; I am sure that they are leaving golden opportunities for Palm, Android and anybody else who isn't Apple or Microsoft.

Even if you're not developing for the iPhone or even for the Mac, this whole drift should concern you — because its most likely effect is going to be that you have fewer choices in what should be a rapidly-expanding marketplace.

Thanks for reading.


Exactly; they're pulling a Singapore here. They're saying "we're better than anybody else" because they've got this App Store with hundreds of thousands of titles, and hundreds of useful apps. Then they turn around and say "we're the only game in town; we're not going to let you sell your apps to customers any way except through us — and oh, yeah, we can be completely arbitrary and capricious before, during and after the fact."

Let me be clear: up until very recently, I've been an unalloyed Apple fan; the only sensible response to 25+ years of Windows development and user support and 10 years hitting similar but different walls in Linux. I'm typing this on one of the two Macs sitting on my desk. I've got logs and statistics that prove I'm far more productive on my worst Mac days than I ever was on my best Windows days. And I've had several Switcher clients over the past few years who say the same thing.

I can write and sell any app I want on the Mac; Apple even give me all the (quite good) tools I need right in the box. I can use any app I want to on my Mac; the average quality level is so far above Windows and Linux apps it's not even funny. In neither of those do I need the permission of Apple or anyone else outside the parties to the transaction involved. Apple do have good support for publicising Mac apps; browse http://www.apple.com/downloads/ to see a (far earlier) way they've done it right. But developers don't have to use their advertising platform.

With the iPhone, and soon the iPad, they're doing things in a very untraditionally-Apple way: they're going far out of their way to alienate and harm developers. You know, those people who create the things that make people want to use the iPhone in the first place. And a lot of us are either leaving the platform or never getting into iPhone development in the first place.

And that can't be healthy for the long-term success of the Apple mobile platform (iPhone/iPad/iWhatComesNext). As a user, as a developer, as a shareholder, that disturbs me, as I believe it should disturb anyone who cares about this industry.

Wednesday, 10 February 2010

NIH v. An Embarrassment of Riches

One thing most good developers learn early on is not to "reinvent" basic technology for each new project they work on. The common, often corporate, antithesis to this is NIH, or "Not Invented Here." But sometimes, it's hard to decide which "giants" one wants to "stand on the shoulders of."

I've recently done a couple of mid-sized Web projects using PHP and the Kohana framework. A framework, as most readers know, is useful (a) by helping you work faster and (b) by including a lot of usually-good code you don't have to write and maintain (but which you should understand!). Good frameworks encourage you to write your own code in a style that lends itself to reuse by other projects built on the same framework.

One task supported by many frameworks is logging. There have also been many "standalone" (i.e., not integrated into larger systems) logging packages. The best-known of these, and the source of many derivatives, is the Apache log4j package for Java. It has been ported to PHP, also as an Apache project, as log4php.

Log4php has saved me countless hours of exploratory debugging. I stand firmly with the growing group of serious developers who assert that if you use a reasonably agile process (with iterative, red, green, refactor unit testing) and make good use of logging, you'll very rarely, if ever, need a traditional debugger.

What does this have to do with Kohana? Well, Kohana includes its own relatively minimalist, straightforward logging facility (implemented as static methods in the core class, grumble, grumble). There's a standard place for such logs to be written to disk, and a nice little 'debug toolbar' add-on module that lets you see logging output while you're viewing the page that generated it.

So I ought to just ditch log4php in favor of the inbuilt logging system when I'm developing Kohana apps, right? Not so fast...

Log4php, as does log4j, has far more flexibility. I can log output from different sources to different places (file, system log, console, database, etc.), have messages written to more than one place (e.g., console and file), and so on. Kohana's logging API is too simple for that.

With log4php, I have total control over the logging output based on configuration information stored in an external file, not in the code itself. That means I can fiddle with the configuration during development, even deploy the application, without having to make any code changes to control logging output. The fewer times I have to touch my code, the less likely I am to inadvertently break something. Kohana? I only have one logging stream that has to be controlled within my code, by making Kohana method calls.
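As an illustration, an external log4php configuration along these lines routes all debug-and-above output to the console while also sending warnings and above to a file. The exact appender class names and property keys are from memory (log4php deliberately mirrors log4j's properties format), so check them against the log4php documentation before use:

```
log4php.rootLogger = DEBUG, console, file

log4php.appender.console = LoggerAppenderConsole
log4php.appender.console.layout = LoggerLayoutSimple

log4php.appender.file = LoggerAppenderFile
log4php.appender.file.file = /var/log/myapp.log
log4php.appender.file.threshold = WARN
log4php.appender.file.layout = LoggerLayoutPattern
log4php.appender.file.layout.conversionPattern = "%d %p %c - %m%n"
```

Tweaking log destinations, thresholds or formats then means editing this file, not the application code.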

Many experienced developers of object-oriented software are uncomfortable with putting more than one logical feature into a class (or closely-related set of classes). Why carry around overhead you don't use, especially when your framework offers a nice extension capability via "modules" and "helpers"? While there may sometimes be arguments for doing so (the PHP interpreter is notoriously slow, especially using dynamic features like reflection), I have always failed to understand how aggregating large chunks of your omniverse into a Grand Unified God Object™ pays dividends over the life of the project.

So, for now, I'll continue using log4php as a standalone tool in my various PHP development projects (including those based on Kohana). One thing that just went onto my "nice to do when I get around to it" list is to implement a module or similar add-on that would more cleanly integrate log4php into the surrounding Kohana framework.

This whole episode has raised my metaphorical eyebrow a bit. There are "best practices" for developing in OO (object-oriented) languages; PHP borrows many of these from Java (along with tools like log4php and PHPUnit, the de facto standard unit-test framework). I did a fairly exhaustive survey of the available PHP frameworks before starting to use Kohana. I chose it because it wasn't an "everything including several kitchen sinks" tool like Zend, it wasn't bending over backwards to support obsolete language misfeatures left over from PHP 4, and it has what looks to be a pretty healthy "community" ecosystem (unlike some once-heavily-flogged "small" frameworks like Ulysses). I'm not likely to stop using Kohana any time soon. I may well have to make time to participate in that community I mentioned earlier, if for no other reason than to better understand why things are the way they are.

But that's the beauty of open source, community-driven development, surely?

Sunday, 17 January 2010

Fixing A Tool That's Supposed To Help Me Fix My Code; or, Shaving a Small Yak

Well, that's a couple of hours I'd really like to have back.

One of the oldest, simplest, most generally effective debugging tools is some form of logging. Writing output to a file, to a database, or to the console gives the developer a window into the workings of code, as it's being executed.

The long-time "gold standard" for such tools has been the Apache Foundation's log4j package, which supports Java. As with many tools (e.g., PHPUnit), a good Java tool was ported to, or reimplemented in, PHP. log4php uses the same configuration files and (as much as possible) the same APIs as log4j. However, as PHP and Java are (significantly) different languages, liberties had to be taken in various places. Add to this the fact that PHP has been undergoing significant change in the last couple of years (moving from version 5.2 to the significantly different 5.3 as we wait for the overdue, even more different, 6.0), and a famous warning comes to mind when attempting to use the tool.

Here be dragons.

The devil, the saying goes, is in the details, and several details of log4php are broken in various ways. Of course, not all the breakage gives immediately useful information on how to repair it.

Take, as an example, the helper class method LoggerPatternConverter::spacePad(), reproduced here:

    /**
     * Fast space padding method.
     *
     * @param string    $sbuf      string buffer
     * @param integer   $length    pad length
     *
     * @todo reimplement using PHP string functions
     */
    public function spacePad($sbuf, $length) {
        while($length >= 32) {
          $sbuf .= $GLOBALS['log4php.LoggerPatternConverter.spaces'][5];
          $length -= 32;
        }
        
        for($i = 4; $i >= 0; $i--) {    
            if(($length & (1<<$i)) != 0) {
                $sbuf .= $GLOBALS['log4php.LoggerPatternConverter.spaces'][$i];
            }
        }

        // $sbuf = str_pad($sbuf, $length);
    }

Several serious issues are obvious here, the most egregious of which is acknowledged in the @todo note: "reimplement using PHP string functions." The $GLOBALS item being referenced is initialized at the top of the source file:

$GLOBALS['log4php.LoggerPatternConverter.spaces'] = array(
    " ", // 1 space
    "  ", // 2 spaces
    "    ", // 4 spaces
    "        ", // 8 spaces
    "                ", // 16 spaces
    "                                " ); // 32 spaces

If you feel yourself wanting to vomit upon and/or punch out some spectacularly clueless coder, you have my sympathies.

The crux of the problem is that the function contains seriously invalid code, at least as of PHP 5.3. Worse, the error messages that are given when the bug is exercised are extremely opaque, and a Google search produces absolutely no useful information.

The key, as usual, is in the stack trace emitted after PHP hits a fatal error.

To make a long story short(er), the global can be completely eliminated (it's no longer legal anyway), and the code can be refactored so:

    public function spacePad(&$sbuf, $length) {
        $sbuf .= str_repeat( " ", $length );
    }

Of course, this makes the entire method redundant; the built-in str_repeat() function could simply be called wherever the method is called. (Note that the commented-out str_pad() call would not have been a correct fix anyway: str_pad() pads a string out to a given total length, whereas this method is expected to append $length spaces.)

I'll make that change and submit a patch upstream tomorrow... errr, later today; it's coming up on 1 AM here now.

But at least now I can use log4php to help me trace through the code I was trying to fix when this whole thing blew up in my face.

At least it's open source under a free license.

Tuesday, 24 February 2009

Mac OS X is BSD Unix. Except when it's Different.

One of the things that a BSD Unix admin learns to rely on is the "ports" collection, a cornucopia of packages that can be installed and managed by the particular BSD system's built-in package manager: pkgsrc for NetBSD, pkg_add for FreeBSD, and so on. In nearly all BSD systems, the port/package manager is part of the basic system (akin to APT under Debian Linux).

Mac OS X is BSD Unix "under the hood," specifically Darwin and, indirectly, FreeBSD.

This provides the Mac user who has significant BSD experience with a nice, comfy security blanket. This blanket has a few stray threads, however. One of these is the package-management system and ports.

Software is customarily installed on Mac OS X systems from disk images, or .dmg files. When opened, these files are mounted into the OS X filesystem and appear as volumes, equivalent to "real" disks. The window that the Finder opens for that volume customarily contains an icon for the application to be installed and a shortcut to the Applications folder. Installation usually consists of merely dragging the application icon onto the shortcut (or into any other desired folder). Under the hood, things are slightly more complex, but two points should be borne in mind.

First, there is no true Grand Unified Software Manager in Mac OS that is comparable to APT under Debian Linux, or even the "Add or Remove Programs" item in Microsoft Windows' Control Panel. Uninstalling a program ordinarily consists of dragging the program icon to the Trash or running a program-specific uninstaller (usually found on the installation disk image).

Second, while there is a "ports" implementation for Mac OS X (MacPorts), it isn't a truly native part of the operating system. More seriously, the versions of software maintained in its ports tree are not always the most recent available from their respective upstream maintainers. Installing an application via MacPorts, installing a newer version through the customary method, and then attempting to use MacPorts to maintain the conflated software can quite easily introduce confusing disparities into the system, with potentially destabilizing effect.

Go back and read that last paragraph again, especially the final sentence. Mac OS X, meet BSD Unix. Touch gloves, return to your corners, and wait for the bell.

Most users will never run into any problems, simply because most Mac users make little or no use of MacPorts (or any other command-line-oriented system management tool). MacPorts users are (almost by definition) likely to be experienced Unix admins who pine for the centralized simplicity of their workhorse software-management system. Informal research suggests that many, if not most, MacPorts users are active developers of Unix and/or Mac software. In other words, we're all expected to be grown-ups capable of managing our own systems, trading away the soft, easy-to-use graphical installation for a wider variety of nuts-and-bolts-level packages.

So why is any of this a problem for me? Why am I consuming your precious time (and mine) blathering on about details which most interested people already know, and most who don't, probably aren't? As a bit of a public mea culpa and a warning to others to pay attention when mixing installation models.

I had previously installed version 8.2 of the PostgreSQL database server on my Mac via MacPorts. Returning to it later, I realized that the installation had not completely succeeded: the server was not automatically starting when I booted the system, and the Postgres tools were not in my PATH. After a bit of Googling, I came across a few links recommending the PostgreSQL 8.3.6 disk images on the KyngChaos Wiki. I installed those, but the server still failed to start as expected.

Remembering that I had previously installed 8.2 the "ports" way, I first uninstalled the newly-installed and -broken 8.3.6. I then ran port uninstall postgresql82 && port clean postgresql82 in an apparently successful attempt to clean up the preexisting mess, after which the KyngChaos disk images installed correctly and (thus far) work properly.

This once again points out the usefulness of keeping a personal system-management Wiki as a catchall for information like this — both to help diagnose future problems, and (especially) to avoid them altogether. These can be dirt simple; I use Dokuwiki, and have for over a year now. Forewarned (especially by yourself) is forearmed, after all.

Just thought I'd get this off my chest.

Tuesday, 16 December 2008

Happy Updating....

If you're a Windows usee with a few years' experience, you've encountered the rare, monumental and monolithic Service Packs that Microsoft release on an intermittent basis (as one writer put it, "once every blue moon that falls on a Patch Tuesday"). They're almost always rollups of a large number of security patches, with more added besides. Rarely, with the notable (and very welcome at the time) exception of Windows XP Service Pack 2, is significant user-visible functionality added.

Now that SP3 has been out for seven months or so, it's interesting to see how many individuals and businesses (especially SMEs) haven't updated to it yet. While I understand, from direct personal experience, the uncertainty of "do I trust this not to break anything major?" (that is, "anything I use and care about?"), I have always advised installing major updates (and all security updates) as quickly as practical. Given the fact that there will always be more gaping insecurities in Windows, closing all the barn doors that you can just seems the most prudent course of action.

I got to thinking about this a few minutes ago, while working merrily away on my iMac. Software Update, the Mac equivalent of Windows' Microsoft Update, popped up, notifying me that it had downloaded the update for Mac OS X 10.5.6, and did I want to install it now? I agreed, typed my password when requested (to accept that a potentially system-altering event was about to take place, and approve the action), and three minutes later, I was logged in and working again.

Why is this blogworthy? Let's go back and look at the comparison again. In effect, this was Service Pack 6 for Mac OS X 10.5. Bear in mind that 10.5.5 was released precisely three months before the latest update, and 10.5.0 was released on 26 October 2007, just under 14 months ago. "Switchers" from Windows to Mac quickly become accustomed to a more pro-active yet gentle and predictable update schedule than their Windows counterparts. The vast majority of Mac users whom I've spoken with share my experience of never having had an update visibly break a previously working system. This cannot be said for Redmond's consumers; witness the flurry of application and driver updates that directly follow Windows service packs. XP SP2, as necessary and useful as it was, broke more systems than I or several colleagues can remember any single service pack doing previously... by changing behavior that those programs had taken advantage of or worked around. Again, the typical Mac customer doesn't have that kind of experience. Things that work just tend to stay working.

Contrast this with Linux systems, where almost every day seems to bring updates to one group of packages or another, and distributions vary wildly in the amount of attention paid to integrating the disparate packages, or at least ensuring that they don't step on each other. Some recent releases have greatly improved things, but that's another blog entry. Linux has historically assumed that there is reasonably competent management of an installed system, and offers resources sufficient for almost anyone to become so. Again, recent releases make this much easier.

Windows, on the other hand, essentially requires a knowledgeable, properly-equipped and -staffed support team to keep the system working with a minimum of trouble; the great marketing triumph of Microsoft has been to both convince consumers that "arcane" knowledge is unnecessary while simultaneously encouraging the "I'm too dumb to know anything about computers" mentality — from people who still pony up for the next hit on the crack pipe. Show me another consumer product that disrespects its paying customers to that degree without going belly-up faster than you can say "customer service". It's a regular software Stockholm syndrome.

The truth will set you free, an old saying tells us. Free Software proponents (contrast with open source software) like to talk about "free as in speech" and "free as in beer". Personally, after over ten years of Linux and twenty of Windows, I'm much more attracted by a different freedom: the freedom to use the computer as a tool to do interesting things and/or have interesting experiences, without having to worry overmuch about any runes and incantations needed to keep it that way.

Wednesday, 23 July 2008

Differences that Make Differences Are Differences

(as opposed to the Scottish proverb, "a difference that makes no difference, is no difference")

This is a very long post. I'll likely come back and revisit it later, breaking it up into two or three smaller ones. But for now, please dip your oar in my stream of consciousness.

I was hanging around on the Freenode IRC network earlier this evening, in some of my usual channels, and witnessed a Windows zealot and an ABMer going at it. Now, ordinarily, this is as interesting as watching paint dry and as full of useful, current information as a 1954 edition of Правда. But there was one bit that caught my eye (nicknames modified for obfuscation):

FriendOfBill: Admit it; Microsoft can outmarket anybody.
MrABM: Sure. But marketing is not great software.
FriendOfBill: So?
MrABM: So... on Windows you pay for a system and apps that aren't worth the price, on Linux you have free apps that are either priceless or worth almost what you pay (but you can fix them if you want to), and on the Mac, you have a lot of inexpensive shareware that's generally at least pretty good, and commercial apps that are much better. THAT's why Microsoft is junk... they ship crap that can't be fixed by anyone else.
FriendOfBill: So you're saying that the Linux crap is good because it can be fixed, and the Mac being locked in is OK because it's great, but Windows is junk because it's neither great nor fixable?
MrABM: Exactly. Couldn't have said it better myself.

Now...that got me to thinking. Both of these guys were absolutely right, in my opinion. Microsoft is, without question, one of the greatest marketing phenomena in the history of software, if not of the world. But it is unoriginal crap. (Quick: Name one successful Microsoft product that wasn't bought or otherwise acquired from outside. Internet Explorer? Nope. PowerPoint? Try again.) Any software system that convinces otherwise ordinary people that they are "stupid" and "unable to get this 'computer' thing figured out" is not a net improvement in the world, in my view. I've been using and developing for Windows as long as there's been a 'Windows'; I think I've earned the opinion.

Linux? Sure, which one? As Grace Hopper famously might have said, "The wonderful thing about standards is that there are so many of them to choose from." (Relevant to The Other Side: "The most dangerous phrase in the language is, 'We've always done it this way.'") As can be easily demonstrated at the DistroWatch.com search page, there are literally hundreds of active "major" distributions; the nature of Free Software is such that nobody can ever know with certainty how many "minor" variants there are (the rabbits in Australia apparently served as inspiration here). Since every distribution has, by definition, some difference with others, it is sometimes difficult to guarantee that programs built on one Linux system will work properly on another. The traditional solution is to compile from source locally with the help of ingenious tools like autoconf. Though this (usually) can be made to work, it disproportionately rewards deep system knowledge to solve problems. The "real" fix has been the coalescence of large ecosystems around a limited number of "base" systems (Debian/Ubuntu, Red Hat, Slackware) with businesses offering testing and certification services. Sure, it passes the "grandma test"... once it's set up and working.

The Macintosh is, and has been for many years, the easiest system for novice users to learn to use quickly. Part of that is due to Apple's legendary Human Interface Guidelines; paired with the tools and frameworks freely available, it is far easier for developers to comply with the Guidelines than to invent their own interface. The current generation of systems, Mac OS X, is based on industry-standard, highly reliable core components (BSD Unix, the Mach microkernel, etc.) which underpin an extremely consistent yet powerful interface. A vast improvement over the famously troubled earlier versions of the system, it has shown itself in the field to be proof against most "grandmas".

A slight digression here: I am active in the Singapore Linux Meetup Group. At our July meeting, there was an animated discussion concerning the upcoming annual Software Freedom Day events. The question before the group was how to organize a local event that would advance the event's purpose: promoting the use of free and open source software for both applications and systems. The consensus, as I understood it, worked out to "let's show people all the cool stuff they can do, and especially let's show them how they can use free software, particularly applications, to do everything they do right now with Windows." The standard example is someone browsing the Web with Firefox instead of Internet Explorer; once he's happy with replacement apps running under Windows, it's easier to move to a non-Windows system (e.g., Linux) running the same apps with the same interface. That strategy has worked well, particularly in the last couple of years (look at Firefox itself, and especially Ubuntu Linux, as examples). The one fly in the ointment is that other parts of the system don't always feel the same. (Try watching a novice user set up a Winprinter or wireless networking on a laptop.) The system is free ("as in speech" and "as in beer"), but it is most definitely not free in terms of the time sometimes needed to get things working... and that time cannot always be predicted reliably.

The Mac, by comparison, is free in neither sense, even though the system software is based on open-source software, and many open-source applications (Firefox, the Apache Web server) run just fine. Apache, for instance, is already installed on every current Mac when you first start it up. But many of the truly "Mac-like" apps (games, the IRC program I use, a nifty note organizer, and so on) are either shareware or full commercial applications (like Adobe Photoshop CS3 or Microsoft Word:mac). You pay money for them, and you (usually) don't get the source code or the same rights that you do under licenses like the GNU GPL.

But you get something else, by and large: a piece of software that is far more likely to "just work" in a predictable, explorable fashion. Useful, interesting features, not just more bloat to put a few more bullet items on the marketing slides. And that gives you a different kind of freedom, one summed up by an IT-support joke at a company I worked for more than ten years ago.

Q: What's the difference between a Windows usee and a Mac user?
A: The Windows usee talks about everything he had to do to get his work done. The Mac user...shows you all the great work she got done.
That freedom may be neither economic nor ideological. But for those who feel that the "Open Source v. Free Software" dispute sounds like a less entertaining Miller Lite "Tastes Great/Less Filling" schtick, and who realize that the hour they spend fixing a problem will never be lived again, this offers a different kind of freedom: the freedom to use the computer as an appliance for interesting, intellectually stimulating activity.

And having the freedom to choose between the other, seemingly competing freedoms... is the greatest of these.