Tuesday, 28 September 2010

IE9 and Standards? Nothing to see here, folks; move along...

This is a re-post of a comment I left on the post "HTML5 Support in Internet Explorer 9" on Louis Lazaris' Impressive Webs blog (which I highly recommend to anyone working in, or interested in, Web development). I've lightly edited a few places for clarity. The comments on that post were almost universally complaints about Microsoft in general and about IE's continuing history of imperfect-at-best compliance with either-you-do-or-you-might-as-well-not-try standards. Having worked in and with Microsoft on several occasions, and being active in a number of open-source projects besides, I have a slightly different view, as I state below. Thanks for reading.


Actually, I'm amazed that it's progressed as much as it has.

Let me be clear: I'm not in any way praising IE; the industry as a whole needs to bury it — decisively, irretrievably and imminently. We as development professionals need to have the professional self-respect to tell current and potential clients, "We develop to current Web standards that are supported across various browsers and platforms. To the degree that any particular release of Microsoft Internet Exploder supports the markup, styling and behaviour of any particular site, that's well and good. However, without monumental additional time and budget resources, no attempt will be made to provide IE support when those resources could instead be used to improve the experience for users of modern browsers."

I firmly believe that the reason IE hasn't progressed as far as, say, Chromium can be laid squarely at the feet of Microsoft's existing code base. Microsoft's developers and managers know full well that complete green-field rewrites of existing projects almost never succeed. They've got a code base where the list of major Windows modules and subsystems that do not have dependency relationships with IE could be read aloud comfortably in a single breath. That was done initially by choice; now, no matter how well-meaning the intentions or how competent the team, they have to live with the code base they have. It's all legacy code. The developers at Microsoft are (with rare exceptions) not morons, but living in the sewer while trying to build something that can stand next to the shiny new browsers next door has to be a psychologically toxic exercise. Their baby is blowing up the Web left and right; they know it, and they know they can't do a damned thing about it without staging a coup d'état, replacing dozens of levels of management and senior executives, and fundamentally changing the culture of the organisation. Don't hold your breath waiting for that to happen.

That isn't sour grapes or a diss against the IE developers; it's simple reality. Microsoft do some amazing things — just not so much for (or on) Windows. Unfortunately for all of us, Windows and Office for Windows are the herds of cash cows for Microsoft, and anything that could be seen as even potentially disrupting that would get shot down faster than you can say S-E-C; the investors would never stand for it. And, with the system and the rules the way they are, they'd be perfectly right. Innovation isn't as important to the bean-counters (or the regulators) as "maintaining and enhancing shareholder value," and MSFT have had enough problems with that lately. (Just compare their share values to, say, AAPL over the last ten years.) Doing anything "risky" is simply politically (and legally, really) impossible.

So, no matter how many times the IE team jump up and down and say "look at all our neat new features," without mentioning the standard features left unimplemented because they pulled off numerous miracles just making what they have work, the response has to be "nothing to see here, folks; move along."

And move along we shall.

Don't Waste My Time.

What follows is a critique and a complaint, not a rant. What's the difference, you ask? A rant, in my view, is a primal scream of frustration and near-impotent rage that often doesn't let a few (allegedly) non-essential facts get in the way of a good thrashing. It also takes far more energy than I'm able to summon at present, for reasons that may become clear below.


As I've mentioned previously, I've been kicking the tires on the (relatively new) Kohana 3.0 application framework for PHP Web development. I'd previously used (and enthused about) the 2.3.x ("KO2") release, and I was itching to get my hands dirty with 3.0.¹ After a couple of dozen hours spent plinking away at it, however, I'm re-evaluating my choices.

Two of the things that endeared the earlier system to me were the good-to-excellent, copious documentation and the active, always-willing-to-help user community (which, quite frankly, are the backbones of any open-source project).

KO3 is an entirely different proposition from KO2. As I write this, the eighth bug-fix update to 3.0 is available on the Web site. Since this is a new "the same, but different" project, all the users are starting fresh, so there can't yet be the same level of everyone-helps-everyone-else camaraderie as with KO2. This puts a foreseeable, far greater burden on the documentation to help people get productive as quickly and effectively as possible. The documentation, however, remains (to put it charitably) in a beta-quality state. This is true in comparison both to the earlier release and to other PHP Web frameworks such as Symfony², CakePHP, FLOW3 and (notoriously) Zend. With most of these other frameworks, as with KO2, figuring out how to get from the "hello-world" style demos, to creating a useful-but-simple site, to branching out from there was a quick, straightforward process. It's taken four times longer to get half as far with KO3 as with KO2.

Judging by comments in the Kohana support forums, I'm not alone in this; the documentation has been roundly panned by virtually all users who've bothered to comment. There's been far too much defensive "no, it isn't bad, you just need to read a bit more, and use the source, Luke" attitude from the project folks. During the KO2 lifecycle, the attitude I understood from their responses was more along the lines of "we know there are a few problems; we're working on them," quickly followed by "take a look at this." I don't know if 3.0 is so much more complex than 2.x that they simply don't have the bandwidth to document things to their previous standards. Frankly, I don't care any more.

I've decided that my existing projects that were started in Kohana 2.3 will remain there, possibly moving to 2.4 when it becomes the officially supported release. But I do not plan to invest any more time and effort in Kohana 3.0, at least until the new project has had some time to mature. I fully recognise the potentially self-defeating attitude inherent in that viewpoint. Virtually any open-source project depends on its community to "plug the holes" that the "official" project maintainers don't have time for or deliberately leave as open questions for the community. Well-run community projects are extremely collaborative, communication-rich environments.

Other projects take a "vendor/product" approach, essentially saying "Here's everything you'll need, soup-to-nuts; we'd really appreciate it if you built around the edges and took things deeper in your direction of interest, but the core that we're responsible for is solid." Those "vendors", the Zends and Sensio Labs of the world, have rich open-source offerings that they use as a platform to build (or offer) what actually pays the bills.

While I have a strong philosophical and experiential preference for community-driven (or at least community-oriented) projects, there are times when I just want to Get Things Done rather than embark on an endless voyage of discovery.³ At those times I'll reach for something that "just works" well enough for me to accomplish whatever I'm trying to do at the moment, whether that's writing a book or bringing up a client's Web site. I know and accept that any new tool, or new version of a tool I've used previously, will require a certain amount of time to get up to speed on. I don't know everything (or necessarily anything) before I learn it; the most I can hope for (and what I really do insist on) is that things make sense within the logical and semantic frameworks⁴ in which they're expressed, and that that expression is accessible, complete and correct enough that I can feel I'm making progress. This invariably involves some sort of documentation, whether a "conventional" or online manual, a wiki, or a user forum; the format matters less to me than the attributes I mentioned earlier.

Kohana 3.0, as it currently stands, does not meet that standard for me. And so I'm back in a feverish learn-and-evaluate mode with at least two other frameworks. I have projects to finish, and several chapters of a book I'm working on that were initially written around Kohana 2 will now need to be substantially redone.⁵

I intend to give each of those new candidate frameworks the same amount of time that it took me to get productive in Kohana 2.x (which was significantly less than the "industry leader," as I've previously mentioned). This is going to be an interesting week and a half or so, in the Chinese-curse sense of the word.

Footnotes:

1. Technical reasons for moving to KO3 included the switch from a model-view-controller (MVC) architecture to hierarchical MVC (HMVC); if you know what these mean, you know this is a Very Big Deal™. Additionally, I've found it a Very Bad Thing to tie myself to a legacy (obsolescent) project, and the end of KO2 is being made very plain on the Kohana Web site. (Return)

2. If the new, pre-release Symfony 2 code is as good as the documentation, we're in for a real treat. (Return)

3. I am typing this on a Mac rather than a Linux box, after all (though I have three Linux VMs running at the moment). (Return)

4. This implies, of course, that there are "logical and semantic frameworks" sufficiently visible, understandable and relevant to the task at hand. (Return)

5. I certainly don't want to fall into the same trap as a previous series of PHP books, which relied on the obsolete and inadequate Ulysses framework. Ulysses has been justly and productively panned, which has to reflect poorly on the aforementioned book (which I happen to own) and its authors. (Return)

Tuesday, 14 September 2010

Saving Effort and Time is Hard Work

As both my regular readers well know, I've been using a couple of Macs¹ as my main systems for some time now. As many (if not most, these days) Web developers do, I run different operating systems (Windows and a raft of Linuxes) under VMware Fusion so that I can do various testing activities.

Many Linux distributions come with some form of automation support for installation and updates². Several of these use the (generally excellent) Red Hat Anaconda installer and its automation scheme, Kickstart. Red Hat and the user community offer copious free documentation to help the new-to-automation system administrator get started.

If you're doing a relatively plain-vanilla installation, this is trivially easy. After installing a new Fedora system (or image), for example, there is a file named anaconda-ks.cfg in the root user's home directory, which can be used either to replay the just-completed installation or as a basis for further customisation. To reuse it, save the file to a USB key or to a suitable location on your local network, and you can replay the installation at will.
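For the flavour of the thing, here's a stripped-down sketch of what such a file looks like. This is illustrative only, not copied from a real anaconda-ks.cfg; the password, partitioning and package choices are placeholders you'd replace from your own generated file, and directive syntax should be checked against the Kickstart documentation for your Fedora release.

```
# minimal-ks.cfg -- illustrative sketch, not a production config
install
lang en_GB.UTF-8
keyboard uk
timezone Europe/London
rootpw changeme
bootloader --location=mbr
clearpart --all --initlabel
autopart
reboot

%packages
@core
vim-enhanced
%end
```

At boot time, you point the installer at the file with a kernel argument such as `ks=hd:sdb1:/minimal-ks.cfg` (USB key) or `ks=http://192.0.2.10/minimal-ks.cfg` (local network), with the device and URL obviously being whatever matches your own setup.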

Customising the installation further, naturally, takes significant additional effort — almost as "significant" as the effort required to do the same level of customisation manually during installation. The saving grace, of course, is that this only needs to be done once for a given version of a given distribution. Some relatively minor tinkering will be needed to move from one version to another (say, Fedora 13 to 14), and an unknowable-in-advance amount of effort needed to adapt the Kickstart configuration to a new, different distribution (such as Mandriva), since the packages on offer as well as the package names themselves can vary between distros³.

It's almost enough to make me pull both remaining hairs out. For several years, I have had a manual installation procedure for Fedora documented on my internal Wiki. That process, however, leaves something to be desired, mainly because it is an intermixture of manual steps and small shell scripts that install and configure various bits and pieces. Having a fire-and-forget system like Kickstart (that could then be adapted to other distributions as well), is an extremely seductive idea.

It doesn't help that the Kickstart Configurator on Fedora 13, which provides a (relatively) easy-to-use GUI for configuring and specifying the contents of a Kickstart configuration file, works inconsistently. Using the default GNOME desktop environment, one of my two Fedora VMs fails to display the application menu, which is used for tasks such as loading and saving configuration files. Under the alternate (for Fedora) KDE desktop, the menu appears and functions correctly.

One of the things I might get around to eventually is to write an alternative automated-installation-configuration utility. Being able to install a common set of packages across RPM-based (Red Hat-style) Linuxes such as Fedora, Mandriva and CentOS as well as Debian and its derivatives (like Ubuntu), and maybe one or two others besides, would be a Very Handy Thing to Have™.
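To show the shape of the idea (and nothing more), here's a toy sketch of the two lookup tables such a utility would be built around: one mapping a distro family to its install command, the other translating a canonical package name per family. Everything here is invented for illustration; a real tool would need a far larger name map and proper error handling.

```shell
#!/bin/sh
# Toy sketch: map a distro family to its native install command.
install_cmd() {
  case "$1" in
    fedora|centos) echo "yum install -y" ;;
    mandriva)      echo "urpmi --auto" ;;
    debian|ubuntu) echo "apt-get install -y" ;;
    *)             echo "echo unknown-distro" ;;
  esac
}

# Translate a canonical package name for a distro family.
# (Tiny sample map; the Apache httpd package really is named
# differently on Debian-family and Red Hat-family systems.)
pkg_name() {
  case "$1:$2" in
    debian:apache|ubuntu:apache) echo "apache2" ;;
    *:apache)                    echo "httpd" ;;
    *)                           echo "$2" ;;
  esac
}

# Print, rather than run, the commands a cross-distro install would issue.
for d in fedora debian; do
  echo "$d: $(install_cmd "$d") $(pkg_name "$d" apache)"
done
```

Even this toy makes the core annoyance visible: the hard part isn't invoking the package manager, it's curating the name map.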

That is, once I scavenge enough pet-project time to do it, of course. For now, it's back to the nuances of Kickstart.

Footnotes:

1. An iMac and a MacBook Pro. (Return)

2. Microsoft offer this for Windows, of course, but they only support Vista SP1, Windows 7 and Windows Server 2008. No XP. Oops. (Return)

3. Jargon. The term distro is commonly used by Linux professionals and enthusiasts as an abbreviation for "distribution": a collection of GNU and other tools and applications built on top of the Linux kernel. (Return)

Thursday, 2 September 2010

Patterns and Anti-Patterns: Like Matter and Anti-Matter

Well, that's a few hours I'd like to have over again.

As both my regular readers know, I've long been a proponent of agile software development, particularly with respect to my current focus on Web development using PHP.

One tool that I, and frankly any PHP developer worth their salt, use is PHPUnit for unit testing, a central practice in what's called test-driven development or TDD. Essentially, TDD is just a bit of discipline that requires you to determine how you can prove that each new bit of code works properly before you write that code. By running the test first, you prove that it fails; then, when you add only the new code you intend, a passing test indicates that you've (quite likely) got it right.
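The rhythm looks something like the following. This is a hypothetical example, not code from my project; `Table` here is an invented stand-in, and plain `assert()` is used in place of PHPUnit's assertion methods so the sketch stays self-contained.

```php
<?php
// Hypothetical illustration of the TDD rhythm.

// Step 2: the minimal code that makes the test below pass.
class Table
{
    protected $columns = array();

    public function addColumn($name)
    {
        $this->columns[] = $name;
    }

    public function columnCount()
    {
        return count($this->columns);
    }
}

// Step 1 (written first): the test. In PHPUnit this would live in a
// TestCase method; plain assert() keeps the sketch self-contained.
// Before columnCount() existed, running this failed -- proof that the
// test actually exercises the new code.
$table = new Table();
assert($table->columnCount() === 0);
$table->addColumn('id');
assert($table->columnCount() === 1);
echo "all assertions passed\n";
```

The point of the "failing first" step is cheap insurance: a test that has never failed proves nothing about the code it supposedly covers.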

At one point, I had a class I was writing (called Table) and its associated test class (TableTest). Once I got started, I could see that I would be writing a rather large series of tests in TableTest. If they all remained in a single class, it would quickly grow long and repetitive, as several tests would verify small but crucial variations on common themes. So, I decided to do the "proper" thing and decompose the test class into smaller, more focused pieces, with a common "parent" class managing everything that was shared between them. As anyone who's developed software knows, this has been a standard practice for several decades; it's ordinarily a matter of a few minutes' thought about how to go about it, followed by a relatively uneventful series of test/code/revise iterations to make it happen.

What happened this afternoon was not "ordinary." I made an initial rewrite of the existing test class, making a base (or "parent") class which did most of the housekeeping detail and left the new subclass (or "child" class) with just the tests that had already been written and proven to work. (That's the key point here; I knew the tests passed when I'd been using a single test class, and no changes whatever were made to the code being tested. It couldn't have found new ways to fail.)

Every single test produced an error. "OK," I thought, "let's make the simplest possible two-class test code and poke around with that." Ten minutes later, a simplified parent class and a child class with a single test were producing the same error.

The simplified parent class can be seen on this page, and the simplified child class here. Anybody who knows PHP will likely look at the code and ask, "what's so hard about that?" The answer is, nothing — as far as the code itself goes.

What's happening, as the updated comments on pastebin make clear, is that there is a name collision between the $data property declared as part of my TableTest class and a property of the same name declared in the parent of that class, PHPUnit's PHPUnit_Framework_TestCase.

In many programming languages, conflicts like this are detected and at least warned about by the interpreter or compiler (the program responsible for turning your source code into something the computer can understand). PHP doesn't do this, at least not as of the current version. There are occasions when being able to "clobber" existing data is a desirable thing; the PHPUnit manual even documents instances where that behaviour is necessary to test certain types of code. (I'd seen that in the manual before; but the significance didn't immediately strike me today.)
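To make the failure mode concrete, here is a minimal sketch of the collision. `FrameworkTestCase` is a stand-in for PHPUnit's real base class, and its method names are invented for illustration; the only thing borrowed from reality is the property name $data.

```php
<?php
// Stand-in for PHPUnit_Framework_TestCase, which declares and manages
// a property named $data for its own internal purposes.
class FrameworkTestCase
{
    protected $data = array();

    // Stand-in for the framework's per-test housekeeping, which
    // resets $this->data before running each test.
    public function frameworkReset()
    {
        $this->data = array();
    }
}

class TableTest extends FrameworkTestCase
{
    // Same name as the parent's property: PHP silently treats this as
    // one shared slot instead of warning about the redeclaration.
    protected $data;

    public function setUp()
    {
        $this->data = array('rows' => 3);  // fixture the tests rely on
    }

    public function testRows()
    {
        return isset($this->data['rows']); // true only if the fixture survived
    }
}

$test = new TableTest();
$test->setUp();
$test->frameworkReset();   // the framework silently clobbers the fixture
echo $test->testRows() ? "pass\n" : "fail: \$data was clobbered\n";
// prints "fail: $data was clobbered"
```

The fix in my case was equally undramatic: rename my property to something the framework doesn't own.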

This has inspired me to write up a standard issue-resolution procedure to add to my own personal Wiki documenting such things. It will probably make it into the book I'm writing, too. Basically, whenever I run into a problem like this with PHPUnit or any other similar interpreted-PHP tool, I'll write tests which do nothing more than define, write to and read from any data items that I define in the code that has problems. Had I done that in the beginning today, I would have saved myself quite a lot of time.

Namely, the three hours it did take me to solve the problem, and the hour I've spent here venting about it.

Thanks for your patience. I'll have another, more intelligent, post along shortly. (That "more intelligent" part shouldn't be too difficult now, should it?)