
Tuesday, 28 September 2010

Don't Waste My Time.

What follows is a critique and a complaint, not a rant. What's the difference, you ask? A rant, in my view, is a primal scream of frustration and near-impotent rage that often doesn't let a few (allegedly) non-essential facts get in the way of a good thrashing. It also takes far more energy than I'm able to summon at present, for reasons that may become clear below.


As I've mentioned previously, I've been kicking the tires on the (relatively new) Kohana 3.0 application framework for PHP Web development. I'd previously used (and enthused about) the 2.3.x ("KO2") release; I was itching to get my hands dirty with 3.0.[1] After a couple of dozen hours spent plinking away at it, however, I'm reevaluating my choices.

Two of the things that endeared the earlier system to me were the good-to-excellent, copious documentation and the active, always-willing-to-help user community (which, quite frankly, are the backbone of any open-source project).

KO3 is an entirely different proposition from KO2. As I write this, the eighth bug-fix update to 3.0 is available on the Web site. Since this is a new "the same, but different" project, all the users are starting fresh, so there can't be the same level of everyone-helps-everyone-else camaraderie as with KO2. This puts a foreseeable, far greater burden on the documentation to help people get productive as quickly and effectively as possible. The documentation, however, to put it charitably, remains at beta quality. This is true in comparison both to the earlier release and to other PHP Web frameworks such as Symfony,[2] CakePHP, FLOW3 and (notoriously) Zend. With most of those other frameworks, as with KO2, figuring out how to get from the "hello world"-style demos, to creating a useful-but-simple site, to branching out from there was a quick, straightforward process. It's taken me four times longer to get half as far with KO3 as with KO2.

Judging by comments in the Kohana support forums, I'm not alone in this; the documentation has been roundly panned by virtually all users who've bothered to comment. There's been far too much defensive "no, it isn't bad, you just need to read a bit more, and use the source, Luke" attitude from the project folks. During the KO2 lifecycle, the attitude I understood from their responses was more along the lines of "we know there are a few problems; we're working on them," quickly followed by "take a look at this." I don't know if 3.0 is so much more complex than 2.x that they simply don't have the bandwidth to document things to their previous standards. Frankly, I don't care any more.

I've decided that my existing projects started in Kohana 2.3 will remain there, possibly moving to 2.4 when it becomes the officially-supported release. But I do not plan to invest any more time and effort in Kohana 3.0, at least until the new project has had some time to mature. I fully recognise the potentially self-defeating attitude inherent in that viewpoint. Virtually any open-source project depends on its community to "plug the holes" that the "official" project maintainers don't have time for, or deliberately leave as open questions for the community. Well-run community projects are extremely collaborative, communication-rich environments.

Other projects take a "vendor/product" approach, essentially saying "Here's everything you'll need, soup-to-nuts; we'd really appreciate it if you built around the edges and took things deeper in your direction of interest, but the core that we're responsible for is solid." Those "vendors", the Zends and Sensio Labs of the world, have rich open-source offerings that they use as a platform to build (or offer) what actually pays the bills.

While I have a strong philosophical and experiential preference for community-driven (or at least -oriented) projects, there are times when I just want to Get Things Done rather than embark on an endless voyage of discovery.[3] It is at those times that I'll reach for something that "just works" well enough for me to accomplish whatever it is I'm trying to do at the moment, whether that's writing a book or bringing up a client's Web site. I know and accept that any new tool, or new version of a tool I've used previously, will require a certain amount of time to get up to speed on. I don't know everything (or necessarily anything) before I learn it; the most I can hope for (and what I really do insist on) is that things make sense within the logical and semantic frameworks[4] they're expressed in, and that that expression is accessible, complete and correct enough that I can feel like I'm making progress. This invariably involves some sort of documentation, whether a "conventional" or online manual, a wiki, or a user forum; the format matters less to me than the attributes I just mentioned.

Kohana 3.0, as it currently stands, does not meet that standard for me. And so I'm back in a feverish learn-and-evaluate mode with at least two other frameworks. I have projects to finish, and I have several chapters of a book I'm working on that had initially been written around Kohana 2, and will now need to be substantially redone.[5]

I intend to give each of those new candidate frameworks the same amount of time it took me to get productive in Kohana 2.x (which was significantly less time than the "industry leader" required, as I've previously mentioned). This is going to be an interesting week and a half or so, in the Chinese-curse sense of the word.

Footnotes:

1. Technical reasons for moving to KO3 included the switch from a model-view-controller (MVC) architecture to hierarchical MVC (or HMVC); if you know what these mean, you should know this is a Very Big Deal™. Additionally, I've found it a Very Bad Thing to tie myself to a legacy (obsolescent) project, and the end of KO2 is being made very plain on the Kohana Web site. (Return)

2. If the new, pre-release Symfony 2 code is as good as the documentation, we're in for a real treat. (Return)

3. I am typing this on a Mac rather than a Linux box, after all (though I have three Linux VMs running at the moment). (Return)

4. This implies, of course, that there are "logical and semantic frameworks" sufficiently visible, understandable and relevant to the task at hand. (Return)

5. I certainly don't want to fall into the same trap as a previous series of PHP books, which relied on the obsolete and inadequate Ulysses framework. Ulysses has been justly and productively panned, which has to reflect poorly on the aforementioned book (which I happen to own) and its authors. (Return)

Tuesday, 14 September 2010

Saving Effort and Time is Hard Work

As both my regular readers well know, I've been using a couple of Macs[1] as my main systems for some time now. As many (if not most, these days) Web developers do, I run other systems (Windows and a raft of Linuxes) under VMware Fusion so that I can do various testing activities.

Many Linux distributions come with some form of automation support for installation and updates.[2] Several of these use the (generally excellent) Red Hat Anaconda installer and its automation scheme, Kickstart. Red Hat and the user community offer copious, free documentation to help the new-to-automation system administrator get started.

If you're doing a relatively plain-vanilla installation, this is trivially easy. After installing a new Fedora system (image), for example, there is a file named anaconda-ks.cfg in the root user's home directory, which can be used either to replay the just-completed installation or as the basis for further customisation. To reuse it, save the file to a USB key or to a suitable location on your local network, and you can replay the installation at will.
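
A Kickstart file is just a plain-text script, so it's easy to read and tweak. As a rough illustration (this fragment is mine, not from the Fedora docs; the package groups and password are placeholders):

    # ks.cfg -- illustrative Kickstart fragment, not a complete file
    install                        # fresh installation (as opposed to 'upgrade')
    lang en_US.UTF-8
    keyboard us
    timezone --utc Asia/Singapore
    rootpw changeme                # placeholder; use --iscrypted in real life
    bootloader --location=mbr
    autopart                       # let Anaconda pick a default disk layout
    %packages
    @base
    @development-tools
    %end

At the installer's boot prompt, something like linux ks=http://myserver/ks.cfg (or ks=hd:sdb1:/ks.cfg for a USB key) then replays the whole thing hands-off.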

Customising the installation further, naturally, takes significant additional effort — almost as "significant" as the effort required to do the same level of customisation manually during installation. The saving grace, of course, is that this only needs to be done once for a given version of a given distribution. Some relatively minor tinkering will be needed to move from one version to another (say, Fedora 13 to 14), and an unknowable-in-advance amount of effort will be needed to adapt the Kickstart configuration to a new, different distribution (such as Mandriva), since the packages on offer, as well as the package names themselves, can vary between distros.[3]

It's almost enough to make me pull both remaining hairs out. For several years, I have had a manual installation procedure for Fedora documented on my internal Wiki. That process, however, leaves something to be desired, mainly because it is an intermixture of manual steps and small shell scripts that install and configure various bits and pieces. Having a fire-and-forget system like Kickstart (one that could then be adapted to other distributions as well) is an extremely seductive idea.

It doesn't help that the Kickstart Configurator on Fedora 13, which provides a (relatively) easy-to-use GUI for configuring and specifying the contents of a Kickstart configuration file, works inconsistently. Using the default GNOME desktop environment, one of my two Fedora VMs fails to display the application menu, which is used for tasks such as loading and saving configuration files. Under the alternate (for Fedora) KDE desktop, the menu appears and functions correctly.

One of the things I might get around to eventually is to write an alternative automated-installation-configuration utility. Being able to install a common set of packages across RPM-based (Red Hat-style) Linuxes such as Fedora, Mandriva and CentOS as well as Debian and its derivatives (like Ubuntu), and maybe one or two others besides, would be a Very Handy Thing to Have™.
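
The core idea is simple enough to sketch in a few lines of PHP; everything below (distro names, commands, package lists) is purely illustrative, and the sketch cheerfully ignores the hard parts:

    <?php
    // Illustrative sketch only: map each distro family to its native
    // package-install command, and drive them all from one package list.
    $installers = array(
        'fedora'   => 'yum -y install',
        'centos'   => 'yum -y install',
        'mandriva' => 'urpmi --auto',
        'debian'   => 'apt-get -y install',
        'ubuntu'   => 'apt-get -y install',
    );

    // The hard part this glosses over: package *names* differ, too.
    // Fedora's 'vim-enhanced' and 'httpd' are Debian's 'vim' and 'apache2',
    // so a real tool would also need a name-translation table per distro.
    $packages = array('httpd', 'postgresql-server', 'vim-enhanced');

    $distro = 'fedora'; // in real life, detect this, e.g. via lsb_release -si
    echo $installers[$distro] . ' ' . implode(' ', $packages) . "\n";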

That is, once I scavenge enough pet-project time to do it, of course. For now, it's back to the nuances of Kickstart.

Footnotes:

1. An iMac and a MacBook Pro. (Return)

2. Microsoft offer this for Windows, of course, but they only support Vista SP1, Windows 7 and Windows Server 2008. No XP. Oops. (Return)

3. Jargon. The term distro is commonly used by Linux professionals and enthusiasts as an abbreviation for "distribution": a collection of GNU and other tools and applications built on top of the Linux kernel. (Return)

Wednesday, 14 April 2010

Process: Still 'garbage in, garbage out', but...

...you can protect yourself and your team. Even if we're talking about topics that everybody's rehashed since the Pleistocene (or at least since the UNIVAC I).

Traditional, command-and-control, bureaucratic/structured/waterfall development process managed to get (quite?) a few things right (especially given the circumstances). One of these was code review.

Done right, a formal code review process can help the team improve a software project more quickly and effectively than ad-hoc "exploration and discovery" by individual team members. Many projects, including essentially all continuing open-source projects that I've seen, use review as a tool to (among other things) help new teammates get up to speed with the project. While it can certainly be argued that pair programming provides a more effective means to that particular end, it (and honestly, most agile practices) tends to focus on the immediate, detail-level view of a project. Good reviews (including but not limited to group code reviews) can identify and evaluate issues that are not as visibly obvious "down on the ground." (Cédric Beust, of TestNG and Android fame, has a nice discussion on his blog about why code reviews are good for you.)

Done wrong, and 'wrong' here often means "as a means of control by non-technical managers, either so that they can honour arbitrary standards in the breach or so that they can call out and publicly humiliate selected victims," code reviews are nearly Pure Evil™, good mostly for causing incalculable harm and driving sentient developers in search of more humane tools – which tend (nowadays) to be identified with agile development. Many individuals prominent in developer punditry regularly badmouth reviews altogether, declaring that if you adopt the currently-trendy process, you won't ever have to do those eeeeeeeeevil code reviews ever again. Honest. Well, unless.... (check the fine print carefully, friends!)

Which brings us to the point of why I'm bloviating today:

  1. Code reviews, done right, are quite useful;

  2. Traditional, "camp-out-in-the-conference-room" code reviews are impractical in today's distributed, virtual-team environment (as well as being spectacularly inefficient), and

  3. That latter problem has been sorted, in several different ways.

This topic came up after some tortuous spelunking following an essentially unrelated tweet, eventually leading me to Marc Hedlund's Code Review Redux... post on O'Reilly Radar (and then to his earlier review of Review Board, and to numerous other similar projects).

The thinking goes something like: hey, we've got all these "dashboards" for CRM, ERP, LSMFT and the like; why not build a workflow around one that's actually useful to project teams? And these tools fit the bill – helping teams integrate a managed approach to (any of several different flavours of) code review into their development workflow. The review step generally gets placed either immediately before or immediately after a new, or newly-modified, project artifact is checked into the project's SCM. Many people, including Beust in the link above, prefer to review code after it's been checked in; others, including me, prefer reviews to take place before checkin, so as not to risk breaking any builds that pull directly from the SCM.
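
For the before-checkin camp, even a simple mechanical gate helps. Here's a hypothetical Subversion pre-commit hook, in PHP since that's what's at hand; the "Reviewed-by:" log-message convention is purely an assumption for illustration:

    #!/usr/bin/env php
    <?php
    // Hypothetical pre-commit hook: Subversion calls it with the repository
    // path and a transaction id; a non-zero exit status aborts the commit.
    list(, $repos, $txn) = $argv;

    // Fetch the would-be commit's log message.
    $log = shell_exec('svnlook log -t ' . escapeshellarg($txn) . ' ' .
                      escapeshellarg($repos));

    // Insist on a "Reviewed-by: <someone>" line (our convention, not svn's).
    if (!preg_match('/^Reviewed-by:\s*\S+/m', (string) $log)) {
        fwrite(STDERR, "Commit rejected: log message has no Reviewed-by: line.\n");
        exit(1);
    }
    exit(0);

Crude, but it makes "oops, forgot the review" impossible rather than merely discouraged.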

We've been using collaborative tools like Wikis for enough years now that any self-respecting project has one. They've proven very useful for capturing and organising collective knowledge, but they are not at their best for tracking changes to external resources, like files in an SCM. (Trac mostly finesses this, by blurring the lines between a wiki, an SCM and an issue tracker.) So, a consensus seems to be forming, across several different projects, that argues for

  • a "review dashboard," showing a drillable snapshot of the project's code, including completed, in-process and pending reviews;

  • a discussion system, supporting topics related to individual reviews, groups of reviews based on topics such as features, or the project as a whole; these discussions can be searched and referenced/linked to; and

  • integration support for widely-used SCM and issue-tracking systems like Subversion and Mantis.

Effective use of such a tool, whatever your process, will help you create better software by tying reviews into the collaborative process. The Web-based versions in particular remove physical location as a condition for review. Having such a tool that works together with your existing (you do have these, yes?) source-code management and issue-tracking systems makes it much harder to have code in an unknown, unknowable state in your project. In an agile development group, this will be one of the first places you look for insight into the cause of problems discovered during automated build or testing, along with your SCM history.

And if you're in a shop that doesn't use these processes, why not?


On a personal note, this represents my return to blogging after far, far too long buried under Other Stuff™. The spectacularly imminent crises have now (mostly, hopefully) been put out of our misery; you should see me posting more regularly here for a while. As always, your comments are most welcome; this should be a discussion, not a broadcast!

Wednesday, 10 February 2010

NIH v. An Embarrassment of Riches

One thing most good developers learn early on is not to "reinvent" basic technology for each new project they work on. The common, often corporate, antithesis to this is NIH, or "Not Invented Here." But sometimes, it's hard to decide which "giants" one wants to "stand on the shoulders of."

I've recently done a couple of mid-sized Web projects using PHP and the Kohana framework. A framework, as most readers know, is useful because a) it helps you work faster, and b) it includes a lot of usually-good code you don't have to write and maintain (but which you should understand!). Good frameworks encourage you to write your own code in a style that encourages reuse in other projects built on the same framework.

One task supported by many frameworks is logging. There have also been many "standalone" (i.e., not integrated into larger systems) logging packages. The best-known of these, and the source of many derivatives, is the Apache log4j package for Java. It has been ported to PHP, also as an Apache project, as log4php.

Log4php has saved me countless hours of exploratory debugging. I stand firmly with the growing group of serious developers who assert that if you use a reasonably agile process (with iterative, red-green-refactor unit testing) and make good use of logging, you'll very rarely, if ever, need a traditional debugger.
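
Typical use is only a few lines. Here's a minimal sketch in the style of the current log4php API, written from memory (the incubator releases have shuffled class and file names around, so treat the specifics with suspicion):

    <?php
    require_once 'log4php/Logger.php';

    // Appenders, levels and formats all come from an external file.
    Logger::configure('log4php-config.xml');

    $log = Logger::getLogger('myapp');
    $log->debug('Entering customer lookup');       // fine-grained tracing
    $log->info('Loaded 42 customer records');      // normal progress
    $log->warn('Cache miss; falling back to DB');  // worth keeping an eye on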

What does this have to do with Kohana? Well, Kohana includes its own relatively minimalist, straightforward logging facility (implemented as static methods in the core class, grumble, grumble). There's a standard place for such logs to be written to disk, and a nice little 'debug toolbar' add-on module that lets you see logging output while you're viewing the page that generated it.
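
By contrast, the built-in facility is about as simple as it gets. From memory (so treat the exact signature with suspicion), a Kohana 2.x call looks like:

    // Kohana 2.x built-in logging: one stream, a severity string, a message.
    Kohana::log('debug', 'Entering customer lookup');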

So I ought to just ditch log4php in favor of the inbuilt logging system when I'm developing Kohana apps, right? Not so fast...

Log4php, like log4j, has far more flexibility. I can log output from different sources to different places (file, system log, console, database, etc.), have messages written to more than one place (e.g., console and file), and so on. Kohana's logging API is too simple for that.

With log4php, I have total control over the logging output based on configuration information stored in an external file, not in the code itself. That means I can fiddle with the configuration during development, even deploy the application, without having to make any code changes to control logging output. The fewer times I have to touch my code, the less likely I am to inadvertently break something. Kohana? I only have one logging stream that has to be controlled within my code, by making Kohana method calls.
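
As a sketch of what that buys you: a log4php XML configuration with two destinations looks roughly like the following (again from memory; check the log4php documentation for the exact schema before copying anything):

    <configuration xmlns="http://logging.apache.org/log4php/">
        <!-- Everything goes both to the console... -->
        <appender name="console" class="LoggerAppenderConsole" />
        <!-- ...and to a file on disk. -->
        <appender name="file" class="LoggerAppenderFile">
            <param name="file" value="logs/app.log" />
        </appender>
        <root>
            <level value="DEBUG" />
            <appender_ref ref="console" />
            <appender_ref ref="file" />
        </root>
    </configuration>

Redirecting, silencing or duplicating output is then an edit to this file, never to the code.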

Many experienced developers of object-oriented software are uncomfortable with putting more than one logical feature into a class (or closely-related set of classes). Why carry around overhead you don't use, especially when your framework offers a nice extension capability via "modules" and "helpers"? While there may sometimes be arguments for doing so (the PHP interpreter is notoriously slow, especially when using dynamic features like reflection), I have always failed to understand how aggregating large chunks of your omniverse into a Grand Unified God Object™ pays dividends over the life of the project.

So, for now, I'll continue using log4php as a standalone tool in my various PHP development projects (including those based on Kohana). One thing that just went onto my "nice to do when I get around to it" list is to implement a module or similar add-on that would more cleanly integrate log4php into the surrounding Kohana framework.

This whole episode has raised my metaphorical eyebrow a bit. There are "best practices" for developing in OO (object-oriented) languages; PHP borrows many of these from Java (along with tools like log4php and PHPUnit, the de facto standard unit-test framework). I did a fairly exhaustive survey of the available PHP frameworks before starting to use Kohana. I chose it because it wasn't an "everything including several kitchen sinks" tool like Zend, it wasn't bending over backwards to support obsolete language misfeatures left over from PHP 4, and it has what looks to be a pretty healthy "community" ecosystem (unlike some once-heavily-flogged "small" frameworks like Ulysses). I'm not likely to stop using Kohana any time soon. I may well have to make time to participate in that community I mentioned earlier, if for no other reason than to better understand why things are the way they are.

But that's the beauty of open source, community-driven development, surely?

Saturday, 30 January 2010

Piling On: More Kibitzing about the iPad

What do I think about the new iPad? Glad you asked. What's that? You didn't, really – or not many of you did. But that's OK, really; at least two other writers argue that we're going from an era of "nearly universal literacy" to one of "nearly universal authorship", so here's my two rupiah's worth.

John Gruber, the blogger behind Daring Fireball, is a justly respected voice on a variety of topics, notably all things Apple. Two of his posts on the iPad, The iPad Big Picture and Various and Assorted Thoughts... are a good starting point if you've somehow just managed to crawl out of a bubble that protected you from hearing anything about it for the past, oh, six months or so, or more crucially the last three days. I also enjoyed reading Stephen Fry's thoughts on the device.

But one piece I would recommend above all: anybody who is thinking about "what does this iPad thing/phenomenon really mean?", or who just cares about free expression, open societies, and all the other progress that humanity has made these past few centuries, should seriously ponder Alex Payne's thoughts On the iPad. Al3x makes some very good, if deeply disturbing, points.

As many people have pointed out, the iPad differs in one critical respect from every personal computing device that Apple have ever built, from the Apple I on up to the iMac that I am typing this on. The iPad is a device, first, foremost and specifically, for the consumption of digital "content". As James Kendrick says, Apple just want us to push the 'Buy' button, in an endless, mindless Pavlovian dystopia.

"Hang on," you say, "it can't really be as bad as all that. You're just pushing histrionics!" I truly hope so. But consider: Apple has sold every Mac, as well as all their earlier models, on the basis of a small group of generally positive, empowering ideas:

  • "It just works better."
  • "The power to be your best."
  • And, of course, "Think different".

One of the (many) things that make the Mac special is the fact that every single Mac sold comes with a copy of the full development toolset for Mac OS X, right in the box. Anybody who wants to invest the time and effort can become a Mac developer. (The money involved, of course, is in buying a Mac in the first place, but you were going to do that anyway, right?) You. The smart-aleck kid down the street. Your Aunt Tillie. Anyone. That has been one of the core strengths of the Apple Mac platform and product: the ability for anyone, without genuflecting before any sort of gatekeeper, to write anything they can conceive of, using some pretty great tools. And thousands, tens of thousands, have. You don't need to be a big corporation. You don't even need to ask Apple's permission, or use Apple's site to market your creation. All you need is an idea, some persistence, and the willingness to learn.

Apple made a radically different statement with the iPhone. If you want to write a "real" app for the iPhone, you need to submit your bits to the App Store, a process that is the seeming antithesis of open, collaborative or fair. As a practical matter, you need to join the iPhone Developer Program, and pay a fee. Doing so will subject you to various license agreements, limitations of what can be developed, and so on.

Apple justified this by saying "hey, the iPhone is a phone, an appliance. We have to keep some control over things, to make sure that our (non-technical) customers have the best possible experience — and incidentally to ensure that we continue to honor agreements we've made with carriers like AT&T." And the developer community grumbled and moaned, but large numbers went along. And the App Store, by almost any measure, is a raging success in the aggregate — even though most individual developers aren't making much at all.

Now comes the iPad, which has been widely, often dismissively, described as a "super-sized iPod Touch." Which, in several senses, it is. But whereas the "iTouch" is an accessory in one's life, being a combination PDA, music player, and (ostensibly) simple application platform, the iPad promises to be all that and more. Specifically, it's being pitched as this "magical" piece of technology which you'll "always" have at hand. Why? Well, you can listen to music on it, or watch videos, or run the same apps you ran on your iTouch or iPhone along with a new generation of "super-sized" apps. Oh, and Apple did announce that their office-software packages would be available on the iPad — so you can use it for "work stuff."

What about the kind of creative outlets that have been highlighted at each Mac introduction since the 1980s? Well, um, good luck with that. Because, even though this is a "real" computer, it's "really not"; it's an up-sized iPod Touch, which is (officially, mostly) a passive device.

The iPad is being pitched as a "magical", but inherently passive, device that consumers will use to buy (or, usually, rent) "content". There'll be "social media" like Facebook, Twitter and so on; the iPhone already supports those. Anything that can be done entirely over the Web, without the use of plugins like Java or Flash (like blogger.com), is kosher, too. But anything truly creative, "revolutionary", "game-changing", is going to have to survive the App Store gauntlet — which means that it's going to have to be consistent with Apple's view of how the iPad "should" be used: as a passive device which consumers use to buy access to content.

Think "57 Channels and Nothing On," a million times over. Perfect for the top-down, don't-make-me-think society. (I'm sure it will be very popular here in Singapore, for precisely that reason.)

Apple, you can do better. We know, because you've mostly done much better before, and encouraged us, developers and users, to do great things with what you've built for us. The iPad isn't so much a step in the wrong direction as a leap of faith worthy of Wile E. Coyote — but we know how far off the edge of the cliff we've gone. And the only way to go from here is down.

Sunday, 17 January 2010

Fixing A Tool That's Supposed To Help Me Fix My Code; or, Shaving a Small Yak

Well, that's a couple of hours I'd really like to have back.

One of the oldest, simplest, most generally effective debugging tools is some form of logging. Writing output to a file, to a database, or to the console gives the developer a window into the workings of code, as it's being executed.

The long-time "gold standard" for such tools has been the Apache Foundation's log4j package, for Java. As with many tools (e.g., PHPUnit), a good Java tool was ported to, or reimplemented in, PHP: log4php uses the same configuration files and (as much as possible) the same APIs as log4j. However, as PHP and Java are (significantly) different languages, liberties had to be taken in various places. Add to this the fact that PHP has been undergoing significant change in the last couple of years (moving from version 5.2 to the significantly different 5.3 as we wait for the overdue, even more different, 6.0), and a famous warning comes to mind when attempting to use the tool.

Here be dragons.

The devil, the saying goes, is in the details, and several details of log4php are broken in various ways. Of course, not all the breakage gives immediately useful information on how to repair it.

Take, as an example, the helper class method LoggerPatternConverter::spacePad(), reproduced here:

    /**
     * Fast space padding method.
     *
     * @param string    $sbuf      string buffer
     * @param integer   $length    pad length
     *
     * @todo reimplement using PHP string functions
     */
    public function spacePad($sbuf, $length) {
        // Append 32-space chunks until fewer than 32 columns remain.
        while($length >= 32) {
            $sbuf .= $GLOBALS['log4php.LoggerPatternConverter.spaces'][5];
            $length -= 32;
        }

        // Decompose the remainder into powers of two (16, 8, 4, 2, 1),
        // appending the matching pre-built string for each bit that is set.
        for($i = 4; $i >= 0; $i--) {
            if(($length & (1<<$i)) != 0) {
                $sbuf .= $GLOBALS['log4php.LoggerPatternConverter.spaces'][$i];
            }
        }

        // Note, too, that $sbuf is passed by value: all the padding so
        // laboriously accumulated above never even reaches the caller.
        // $sbuf = str_pad($sbuf, $length);
    }

Several serious issues are obvious here, the most egregious of which is acknowledged in the @todo note: "reimplement using PHP string functions." The $GLOBALS item being referenced is initialized at the top of the source file:

$GLOBALS['log4php.LoggerPatternConverter.spaces'] = array(
    " ", // 1 space
    "  ", // 2 spaces
    "    ", // 4 spaces
    "        ", // 8 spaces
    "                ", // 16 spaces
    "                                " ); // 32 spaces

If you feel yourself wanting to vomit upon and/or punch out some spectacularly clueless coder, you have my sympathies.

The crux of the problem is that the function contains seriously invalid code, at least as of PHP 5.3. Worse, the error messages that are given when the bug is exercised are extremely opaque, and a Google search produces absolutely no useful information.

The key, as usual, is in the stack trace emitted after PHP hits a fatal error.

To make a long story short(er), the global can be completely eliminated (it's no longer legal anyway), and the code can be refactored thus:

    public function spacePad(&$sbuf, $length) {
        // Take $sbuf by reference so the caller actually sees the padding,
        // and let the built-in str_repeat() do all the work.
        $sbuf .= str_repeat( " ", $length );
    }

Of course, this makes the entire method redundant; the built-in str_repeat() function should be called wherever the method is called.
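
In other words, each call site collapses to a one-liner (the variable names here are hypothetical):

    // Before: $this->spacePad($sbuf, $padding);
    // After:
    $sbuf .= str_repeat(' ', $padding);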

I'll make that change and submit a patch upstream tomorrow... errr, later today; it's coming up on 1 AM here now.

But at least now I can use log4php to help me trace through the code I was trying to fix when this whole thing blew up in my face.

At least it's open source under a free license.

Thursday, 16 April 2009

OMFG, or Holy Deforestation, Batman!

As some of you know, I'm working on a book on Web development, using off-the-shelf tools (frameworks, template engines, JavaScript libraries, etc.) to build semantic, standards-compliant, accessible, search-friendly Websites. (That's more a matter of adjusting your development philosophy and workflow than anything else, but I digress.) As part of that, I've been doing a (reasonably) comprehensive review of PHP 5 application frameworks. You might have heard of ezComponents or CakePHP, but the 900-kg elephant in the room is definitely the Zend Framework. It ships with the Dojo JavaScript toolkit, but doesn't make it excessively difficult to mix and match others (Scriptaculous, Prototype, jQuery, etc.) if desired.

And here's another reason to call ZF the '900-kg elephant' — the programmer's reference guide (for version 1.7) weighs in at a <sarcasm>svelte</sarcasm> 1170 pages. Don't print this at home, folks. Better yet, just don't print it... either browse it online or download the PDF. Save a forest or six. For you old-timers, this will remind you quite a bit of US DOD Standard 2167A, "fondly" remembered as "documentation by the boxcar load".

Wednesday, 23 July 2008

Differences that Make Differences Are Differences

(as opposed to the Scottish proverb, "a difference that makes no difference, is no difference")

This is a very long post. I'll likely come back and revisit it later, breaking it up into two or three smaller ones. But for now, please dip your oar in my stream of consciousness.

I was hanging around on the Freenode IRC network earlier this evening, in some of my usual channels, and witnessed a Windows zealot and an ABMer going at it. Now, ordinarily, this is as interesting as watching paint dry and as full of useful, current information as a 1954 edition of Правда (Pravda). But there was one bit that caught my eye (nicknames modified for obfuscation):

FriendOfBill: Admit it; Microsoft can outmarket anybody.
MrABM: Sure. But marketing is not great software.
FriendOfBill: So?
MrABM: So... on Windows you pay for a system and apps that aren't worth the price, on Linux you have free apps that are either priceless or worth almost what you pay (but you can fix them if you want to), and on the Mac, you have a lot of inexpensive shareware that's generally at least pretty good, and commercial apps that are much better. THAT's why Microsoft is junk... they ship crap that can't be fixed by anyone else.
FriendOfBill: So you're saying that the Linux crap is good because it can be fixed, and the Mac being locked in is OK because it's great, but Windows is junk because it's neither great nor fixable?
MrABM: Exactly. Couldn't have said it better myself.

Now...that got me to thinking. Both of these guys were absolutely right, in my opinion. Microsoft is, without question, one of the greatest marketing phenomena in the history of software, if not of the world. But it is unoriginal crap. (Quick: Name one successful Microsoft product that wasn't bought or otherwise acquired from outside. Internet Explorer? Nope. PowerPoint? Try again.) Any software system that convinces otherwise ordinary people that they are "stupid" and "unable to get this 'computer' thing figured out" is not a net improvement in the world, in my view. I've been using and developing for Windows as long as there's been a 'Windows'; I think I've earned the opinion.

Linux? Sure, which one? As Grace Hopper famously might have said, "The wonderful thing about standards is that there are so many of them to choose from." (Relevant to The Other Side: "The most dangerous phrase in the language is, 'We've always done it this way.'") As can be easily demonstrated at the DistroWatch.com search page, there are literally hundreds of active "major" distributions; the nature of Free Software is such that nobody can ever know with certainty how many "minor" variants there are (the rabbits in Australia apparently served as inspiration here). Since every distribution has, by definition, some difference from the others, it is sometimes difficult to guarantee that programs built on one Linux system will work properly on another. The traditional solution is to compile from source locally, with the help of ingenious tools like autoconf. Though this (usually) can be made to work, it disproportionately rewards deep system knowledge when problems need solving. The "real" fix has been the coalescence of large ecosystems around a limited number of "base" systems (Debian/Ubuntu, Red Hat, Slackware), with businesses offering testing and certification services. Sure, it passes the "grandma test"... once it's set up and working.

The Macintosh is, and has been for many years, the easiest system for novice users to learn to use quickly. Part of that is due to Apple's legendary Human Interface Guidelines; paired with the tools and frameworks freely available, it is far easier for developers to comply with the Guidelines than to invent their own interface. The current generation of systems, Mac OS X, is based on industry-standard, highly-reliable core components (BSD Unix, the Mach microkernel, etc.) which underpin an extremely consistent yet powerful interface. A vast improvement over famously troubled earlier versions of the system, this has been proven in the field to be proof against most "grandmas".

A slight fugue here: I am active in the Singapore Linux Meetup Group. At our July meeting, there was an animated discussion concerning the upcoming annual Software Freedom Day events. The question before the group was how to organise a local event that would advance the event's purpose: promoting the use of free and open source software for both applications and systems. The consensus, as I understood it, worked out to "let's show people all the cool stuff they can do, and especially let's show them how they can use free software, especially applications, to do all the stuff they do right now with Windows." The standard example is someone browsing the Web with Firefox instead of Internet Explorer; once he's happy with replacement apps running under Windows, it's easier to move to a non-Windows system (e.g., Linux) with the same apps and interface. That strategy has worked well, particularly in the last couple of years (look at Firefox itself, and especially Ubuntu Linux, as examples). The one fly in the ointment is that other parts of the system don't always feel the same. (Try watching a novice user set up a Winprinter or wireless networking on a laptop.) The system is free ("as in speech" and "as in beer"), but it is most definitely not free in terms of the time sometimes needed to get things working... and that cannot always be predicted reliably.

The Mac, by comparison, is free in neither sense, even though the system software is based on open-source software, and many open-source applications (Firefox, the Apache Web server) run just fine. Apache, for instance, is already installed on every current Mac when you first start it up. But many of the truly "Mac-like" apps (games, the IRC program I use, a nifty note organiser, and so on) are either shareware or full commercial applications (like Adobe Photoshop CS3 or Microsoft Word:mac). You pay money for them, and you (usually) don't get the source code or the same rights that you do under licenses like the GNU GPL.

But you get something else, by and large: a piece of software that is far more likely to "just work" in an expectable, explorable fashion. Useful, interesting features, not always just more bloat to put a few more bullet items on the marketing slides. And that gives you a different kind of freedom, one summed up by an IT-support joke at a company I used to work for, more than ten years ago.

Q: What's the difference between a Windows usee and a Mac user?
A: The Windows usee talks about everything he had to do to get his work done. The Mac user... shows you all the great work she got done.

That freedom may be neither economic nor ideological. But, especially for those who feel that the "Open Source v. Free Software" dispute sounds like a less entertaining Miller Lite "Tastes Great/Less Filling" schtick, for those who realise that the hour they spend fixing a problem will never be lived again, this offers a different kind of freedom: the freedom to use the computer as an appliance for interesting, intellectually stimulating activity.

And having the freedom to choose between the other, seemingly competing freedoms... is the greatest of these.