Wednesday, 28 November 2012

My Singapore Life, Part DXLVI

In what's become a near-weekly occurrence, I got hit by a car while crossing a crosswalk this afternoon. I was halfway through the crosswalk next to the McDonald's at Tampines Mart here in Singapore, walking towards my home in one of the adjacent blocks of flats, when a red cab hit my hip as I was jumping clear of him. He was doing at least 10-15 kph going up to the exit gate, which is typical for the cars here. A raised crosswalk with zebra-stripes, especially with posted "yield to pedestrians" signs, is supposed to mean that pedestrians, of whatever ethnicity, who are already in the crosswalk before a vehicle crosses the adjacent limit line have the right of way. What's left unwritten, in this most racist of officially non-racial "countries", is that the rules only apply to Chinese drivers if the pedestrian is Chinese.

As I indicated, this happens to me at least once a week, and there's a certain…predictability…to the particulars of the offending driver. I saw a Malay woman crossing the same crosswalk about three weeks ago get hit and knocked down, and the driver just kept going.

Singapore is a very popular tourist destination in Southeast Asia, but if you're coming here, be aware of certain sensible ground rules. Stay in the tourist/expat "bubble" areas, generally downtown. Be prepared to spend lots and lots of money. And, whatever you do, don't take any of the tourist propaganda about "multi-cultural, non-racial Singapore" as anything other than the sick joke it has become in recent years. If and when we get a post-PAP Government, that may change. But I'd be very surprised if the racism were markedly less in any time-frame short of a generation or three.

Tuesday, 20 November 2012

Tap-Dancing on the Shoulders of Giants, Cont'd

(after Bernard of Chartres)

Some days you wake up feeling old. Some things that happen during the day make you feel much, much older.

In times technologically ancient but historically recent, people wrote computer programs by writing code for virtually every instruction to be executed by that program. Later, as capabilities grew (and we learned better), we used underlying systems and libraries to reduce the amount of new code we had to write for increasingly complex programs. Later still, networks and the Internet augmented the resources we could use in our creations. As anyone who's written desktop applications for a recent major operating system can attest, the complexity of these applications can meet and often exceed what one person can keep in his or her head. Consequently, a high priority became "doing more with less", while still delivering an efficient, usable, correct-enough system.

Recent development, especially Web development, has taken that priority and raised it to an imperative. To be a good developer, you don't just have to be able to code well, you must be able to efficiently find, evaluate, learn, use, and collaborate on further development of these components. Write code once, that's fine. Write code twice, write it again in a more easily reusable form, and rewrite the first two to use the new one. What used to take a team months and thousands of lines of code can now be accomplished by a single developer in days or weeks, with dozens to scores of lines of more readily understandable code — that uses ever-more-complete and -complex underlying libraries.

I was reminded of this yet again by a recent blog post by David Guttman (with accompanying Heroku demo and source code). There's a moderate amount of framework-type boilerplate around it, but the source file I linked does all the heavy lifting.

It's less than fifty source lines of CoffeeScript. Fifty. Lines.

Not since the heyday of APL has a fifty-line program done so much. However, fifty lines of CoffeeScript is far more readily understandable by mere mortals than fifty characters of APL. (Browse this to get an idea of what I'm talking about.)

When I was a wee lad in uni, I wrote code, in ALGOL 68 on a Burroughs B6700, to drive a Tektronix graphics display to produce graphics similar to Guttman's. My program, when complete, nearly filled two 1600-character-per-inch 9-track tapes. (Yes, those erratically whirring reels you see in old newsreels and oddball posts on YouTube.) Each tape stored about 40 megabytes. (Less space than the music file I'm listening to at the moment; about 1/8 of a CD altogether.)

That project took the better part of two school quarters, about four months.

You can still spend months researching if you have to learn some Brave (totally) New Domain and a Strange New Language at the same time, but to actually write a fifty-line program? Days? Hours?

And that brings me to the point of the title: too many of us, not just younger developers, now take these new tools for granted. The problem with doing so is that not understanding how things were done before blinds us to important details of the tools we use today. Without a clear understanding of the problems our tools solve, or of the tradeoffs and alternatives involved in developing them, it's practically impossible for us to make the most effective use of those tools. And that can have serious, project-endangering consequences, obvious in hindsight to anyone who has hit them at least once or twice.

We all stand on the shoulders of giants. Don't get too cocky with your tap-dancing, lest you follow in the shuddering footfalls of one Wile E. Coyote.

Friday, 26 October 2012

Take Only Pictures; Leave Only Footprints

If you've ever visited a National Park or many state parks in the United States, or been involved with a nature-oriented community group, you've likely heard that saying many times. For those who haven't, the meaning should be obvious: leave the shared place that you're moving through in at least as good condition as you found it, so that the next people to travel that way can enjoy it as much as you did.

That applies to environments other than the great outdoors, of course. The phrase popped into my mind earlier today as I was poking at a jQuery plugin to add a context menu to a Web app I'm developing.

It's great that there are so many free software tools out there; I've been using them for more than 25 years, and I've developed more than a few. Sometimes, however, one is reminded of the wisdom attributed to Oliver Wendell Holmes: "Learn from the mistakes of others… you can't live long enough to make them all yourself." But in the grand software tradition, we often give it our best effort.

This is a reasonably well-known plugin, one of several that offer to solve a common problem in more-or-less similar fashion. And if you use it precisely in the way that the author expected, it does the job. But even then, if you're using it in a Web app or site that customers are paying money for, you might hope that they never hit the "View Source" menu item in their browser. And, to be fair, this is not intended as a slam against Matt Kruse or his code; most of the other plugins I've looked at in the last couple of days have at least as many "quirks" and assumptions.

"It's just a context menu," I hear you say. "What could possibly go wrong?…go wrong?…go wrong?…(sound of tape snapping)" (with apologies to Westworld, a movie with many lessons on the art and craft of software development.)

If you're adding the menu to your page when it's first loaded, and you want a static menu, with the options the same for all uses of the context menu, fine. If you want a more dynamic menu, and don't mind attaching functions to the menu that know enough of the detailed state of your page/app that they can generate the properly-adjusted menu items each time the menu is displayed, that works, too. But the main assumptions are that the plugin's main method will be called once and only once on a page, and that you really don't care about the markup being added to the end of your page. The first assumption, particularly in a rich front end, is likely to be naïve; the second is, bluntly, an assault on your professionalism.

What's so bad? Here's the plugin's output for a very simple menu, reformatted for clarity:

<table cellspacing="0" cellpadding="0" style="display: none;">
  <tbody>
    <tr>
      <td>
        <div class="context-menu context-menu-theme-vista">
          <div class="context-menu-item " title="">
            <div class="context-menu-item-inner" style="">One</div>
          </div>
          <div class="context-menu-item " title="">
            <div class="context-menu-item-inner" style="">Two</div>
          </div>
          <div class="context-menu-item " title="">
            <div class="context-menu-item-inner" style="">Three</div>
          </div>
        </div>
      </td>
    </tr>
  </tbody>
</table>

<div class="context-menu-shadow" style="display: none; position: absolute; z-index: 9998; opacity: 0.5; background-color: black;"></div>

If you read HTML and CSS, you're probably nodding your head, thinking "that looks simple enough; hey, hang on a minute…" That "hang on" is where you start seeing any of several issues:

  1. We have a table, containing a (rarely-used) tbody, containing a single tr, containing a single td, containing a series of nested divs which contain the menu items. If he'd implemented the entire menu as a table, he might have been able to claim that he was trying to support 1997-era browsers. (Would jQuery 1.4 work in Netscape Navigator 4?) Failing that, it's a mess. Nesting divs (or more semantically appropriate elements) has been a Solved Problem for a decade or so.

  2. Even though this is implicitly intended to be fire-and-forget markup added to the DOM at page-creation time, it's disturbing that there are no identifiers for any of the table elements. If you wanted to, say, remove the table later, you'd have to use code like

    $('.context-menu').parent().parent().parent().parent().remove()
    Ick. And then you'd have to go clean up the shadow div separately.

  3. Speaking of the shadow div being separate from the menu-enclosing table, why not wrap both in an (identified) div, so that you (and your clients' code) can treat the menu as a single entity?

  4. Doing that would also let you dynamically delete and replace the menu, based on changes in the state of your app, without needing to define menu-item handler functions that violate the Principle of Least Knowledge.

  5. Another benefit of that would be if your context menu was generated more than once (possibly because the content needed changing or items needed disabling), you'd never have more than one copy of the menu in the DOM. As it is now, you can have an arbitrarily large number, with only the most recently-added being the "active" one. Ugh. This would be more likely to happen on a dev box running a test suite, but still. Ugh.

  6. Particularly in a behaviour-driven or test-driven development (BDD or TDD) environment, being able to test/validate the markup and the item-handler logic separately as well as together is important. Doing so with this plugin (and again, to be fair, most of the others) eliminates the normal-use workflow from consideration.

  7. One feature of this particular plugin is that it supports having you define your own HTML for the menu and pass it in. But the example given is too simple to be useful as a menu. Browsing the plugin source code seems to indicate that there is no event-handler support for menus defined in this manner; you'd have to iterate through your context menu items, assigning event handlers for at least the click event. If you're going to do all that work, why bother with this plugin?

  8. A table and divs. Enclosed and in parallel. (Careful wiping that green sludge off your monitor; that's your brain that just exploded.)

Menus are "important" enough that HTML 5 has its own set of elements dedicated just to marking up menus. However, sensible, reasonably semantic standard patterns for use with HTML 4 and XHTML 1.x have evolved that address the issues I've mentioned (among many others). The markup for the example menu above could have been written as:

  <div class="context-menu-container" id="someId">
    <ul class="context-menu context-menu-theme-vista">
      <li class="context-menu-item">
        <span class="context-menu-item-inner">One</span>
      </li>
      <li class="context-menu-item">
        <span class="context-menu-item-inner">Two</span>
      </li>
      <li class="context-menu-item">
        <span class="context-menu-item-inner">Three</span>
      </li>
    </ul>
    <div class="context-menu-shadow"></div>
  </div>

One outermost div, with an id so your plugin doesn't get confused if you have multiple menus on a page, but one div so you can work with it as a single unit. An unordered list containing list items corresponding to the menu items, since that's the closest HTML4/XHTML 1 comes to HTML5's menu semantics. The Script that builds the thing can adapt the styles and attributes at the class+id CSS level, eliminating the need for hard-coded monsterpieces such as that style attribute for the table.
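None of this needs anything exotic to produce. As a sketch (the builder function and its name are my invention, not part of any plugin's API), the cleaner structure above could be generated from a plain list of labels:

```javascript
// Sketch: build the cleaner context-menu markup from an array of item labels.
// The id and class names mirror the example above; the function itself is hypothetical.
function buildContextMenu(id, items) {
  const lis = items
    .map(label =>
      `    <li class="context-menu-item">\n` +
      `      <span class="context-menu-item-inner">${label}</span>\n` +
      `    </li>`)
    .join('\n');
  return `<div class="context-menu-container" id="${id}">\n` +
         `  <ul class="context-menu context-menu-theme-vista">\n` +
         `${lis}\n` +
         `  </ul>\n` +
         `  <div class="context-menu-shadow"></div>\n` +
         `</div>`;
}

const html = buildContextMenu('someId', ['One', 'Two', 'Three']);
```

Since everything hangs off one identified container, removing or replacing the whole menu (shadow and all) becomes a single selector operation instead of a chain of parent() calls.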

If you're going to write a jQuery plugin, live in jQuery. You can dynamically modify styles, attributes, content, even whole sections of how your page is rendered in the browser, without touching the basic markup. You can put the style bits that you intend to remain constant into a CSS file and link those styles to individual DOM fragments via classes and ids. That also keeps your HTML and CSS clean and cacheable.

If we don't write clean code, then having a lightning-fast browser on a petabit-Ethernet connection won't matter; users and reviewers will still complain that "your app is slow". Think before, as, and after you code, and remember:

First, do no harm.

Sunday, 14 October 2012

It's Not Just "JavaScript" Anymore

Late to the party again, as usual, but…

I've been reading up on ECMAScript 5, the standardised language historically and popularly known as "JavaScript". Even though I'm doing most of my work in CoffeeScript now, ECMAScript is relevant because that's what CoffeeScript compiles down to (at least, to a 'safe', 'good bits' subset thereof). So here I've gone and mentioned three different names for "very similar" languages in the same paragraph. When I mean one in particular from here on out, I'll name it; otherwise Script refers to all three to the best of my knowledge. Anyway...

One of the things I find myself doing, and reading in just about everybody else's code in most languages, is a prudent sanity-check validation before doing something "dangerous" that profoundly changes the state of the system based on the state of various objects within it. A decidedly non-trivial part of that is usually simple validation of individual property values: is the someFutureDate value actually greater than the value for Now? Should we check that every time we're about to make an important decision based on the value of someFutureDate? What a pain… especially since the one time you forget to do that check often turns out to be the root cause of a critical bug.

ECMAScript 5 has completely rethought how properties on objects are implemented. They can be made read-only, non-enumerable, non-modifiable once set. Perhaps more interestingly, they can now be defined either using a simple value as before, or by defining getters and setters, as is done in several other languages. What this buys you, at the cost of a bit of relocated complexity, is the ability to validate assigned values without the assigning code doing anything other than assigning a value to the property. The domain logic relevant to a specific property of an object can now be coupled more tightly to the property and the details less visible or relevant to outside code, in effect creating a conceptual "mini-class" around a property and its getter/setter logic that is transparent to outside code.
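Here's a minimal sketch of that idea in plain ECMAScript 5. The property name someFutureDate comes from the earlier example; the enclosing object and the specific validation rule are invented for illustration:

```javascript
// Sketch: an ES5 accessor property that validates on every assignment,
// so the domain rule lives with the property instead of at every call site.
function makeAppointment() {
  var appointment = {};
  var futureDate = null;
  Object.defineProperty(appointment, 'someFutureDate', {
    enumerable: true,
    get: function () { return futureDate; },
    set: function (value) {
      if (!(value instanceof Date) || value <= new Date()) {
        throw new RangeError('someFutureDate must be a Date later than now');
      }
      futureDate = value;
    }
  });
  return appointment;
}

var appt = makeAppointment();
appt.someFutureDate = new Date(Date.now() + 86400000); // fine: tomorrow
// appt.someFutureDate = new Date(0);                  // would throw RangeError
```

The assigning code just assigns; the "did you remember the check?" problem disappears because forgetting is no longer possible.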

This and several other features of ECMAScript 5 now make it easier to write nice, fine-grained, SOLID code in ECMAScript 5 than was previously possible in any Script dialect. Huge win all around.

Which brings up questions. How could support for property descriptors and the like be added to CoffeeScript? Should they be added? Would support for these underlying ECMAScript features require changes to the CoffeeScript compiler itself, or could it be achieved less intrusively?

Sunday, 30 September 2012

Damn You, WHICH Auto-Correct?

If you've had a smartphone for any length of time, you're no doubt intimately familiar with the concept behind the site Damn You, Auto Correct! even if you've not yet visited it. (But you should.) :-)

Perhaps more subtle than the humorous mismatches between intent and attempted repair is the philosophy behind various implementations of auto-correct. I've recently been using a Samsung Galaxy Note exclusively after having had various iPhones for several years, and for all the similarities between the two, one thing is glaringly different: the way auto-correct works.

At first, still drying out from the familiarity Flavr-Aid, I assumed the iOS spell-checker was simply better. And then I started to notice that the Android spell-checker followed a predictable pattern. The iOS spell-checker, as on iPhones and my current iPad 2, follows another. If you aren't aware of that, you'll be very frustrated when moving from one to the other.

The iOS spell-checker, when it can't match your registered key-taps to a complete word in its dictionary, assumes that you've fat-fingered a misspelling, and so the (one and only by default) suggestion it offers is based on that assumption. There are linguistic principles that determine how it makes that decision. While those have been understood for some time, it's been only relatively recently that a real-time-capable implementation has been both (relatively) affordable and easily portable. Auto-correct on the iPhone has been steadily improving, likely due both to improved algorithms and more powerful processing capabilities. OK, fine; that's what people who've only used iOS lately expect; it works reasonably well as expected, what could possibly be different?

Android hasn't always had the top-tier processing capability or memory of an iPhone 4 or 5 to throw at anything, let alone spell-checking. (Recent high-end phones are extremely competitive, but that's a whole other post.) If they can't or shouldn't throw enough resources at spell-check to actually correct in real time, then what can be done?

Take the other fork in the road, of course. Don't assume auto-correct to be spell-checking; instead, use it to reduce the number of key-taps needed to type longer words, à la TextExpander. Of necessity, this will involve a fair amount of "real" spell-checking. Android's spell-checker seems to assume that any typos are typos in the first few letters of a longer word. Further, it seems to assume that the first letter typed in a new word is always correct, and rarely if ever shows alternate words with a different first letter. (Android, unlike iOS, shows a list of candidate words, allowing the user to select among them with a single tap — further speeding rapid [if initially correct] typing.)

Which is "better" depends on your preferences and the accuracy of your typing on the device you are using. With my Lincoln Log-like fingers, I could maintain an effective 2-3 wpm rate on an iPhone 4; nearly double that on an iPad. After my iPhone 4 was stolen (apparently for parts as it was never again turned on), I bought the cheapest 3G hotspot-capable phone I could find at Lucky Plaza, which turned out to be a grey-market HTC Explorer. That was easily one of the two most painful experiences I have ever had in over 15 years of using mobile phones. A very, very kind and loving soul loaned me a Samsung Galaxy Note running Android 4.0 ("Ice Cream Sandwich"). Comparing the Explorer to the Note was slightly more comical (and less fair) than putting a 1976 Chevy Chevette onto a track next to a Bugatti Veyron and seeing who can finish 20 laps before the other. (If the Bugatti gives the Chevette a 15-lap head start, my money's still on the Bugatti. I've owned a Chevette.)

I now find that I type comparably fast on the (~5.3-inch) Galaxy Note as on the 10-inch iPad. I'll be returning the borrowed Note soon (geologically speaking, at least); I'm just waiting to be able to have a good demo of an iPhone 5 to compare, and see which I really prefer. The new iPhone is going to have to be at least as good as the fans and reviews say it is; this Galaxy Note is sweet…

Saturday, 8 September 2012

Stubs Aren't Mocks; BDD Isn't TDD; Which Side(s) Are You On?

I just finished re-reading Martin Fowler's Mocks Aren't Stubs from 2007. I wasn't as experienced then in the various forms of agile development as I am now, so I couldn't quite appreciate his perspective until somebody (and I'm sorry I can't find whom) brought up the paper again in a tweet a month or two ago. (Yes, that's how far behind I am; how well would you keep up while working 15- to 18-hour days, 6 or 7 days a week, for 6 months?)

In particular, the distinctions he draws between "classical" and "mockist" test-driven development (TDD), and then between mockist TDD and behaviour-driven development (BDD), are especially useful given the successes and challenges of the last dozen or so projects I've been involved with. I wouldn't quite say that many teams are doing it wrong. They/we have been, however, operating on intuition, local folklore and nebulously-understood principles gained through trial-and-error experience. Having a systematic, non-evangelistic, nuts-and-bolts differentiation and exploration of various techniques and processes is (and should be) a basic building block in any practitioner's understanding of his craft.

Put (perhaps too simply), the major distinction between classic and mockist TDD is that one verifies resulting state while the other verifies the interactions between collaborating objects; projects that mix the two too freely often come to grief. I believe that projects, especially midsize, greenfield development projects by small or inexperienced teams, should pick one approach (classic or mockist TDD, or BDD) and stick with it throughout a single major-release cycle. You may credibly say "we made the wrong choice for this product" after getting an initial, complete version out the door, and you should be able to switch to a different approach for the next full release cycle. But if you don't know why you're doing what you're doing, and what the coarse- and fine-grained alternatives are to your current approach, you can't benefit from having made a conscious, rational decision, and your project thus can't benefit from that choice.
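A toy, framework-free illustration of that distinction (all of the names here are invented): the same function, verified once by its resulting state and once by its interactions.

```javascript
// A function with both an observable result (state) and a collaborator (interaction).
function checkout(cart, mailer) {
  const total = cart.items.reduce((sum, item) => sum + item.price, 0);
  mailer.send('Your total is ' + total);
  return total;
}

// Classic style: use a real-enough collaborator, then assert on the resulting state.
const outbox = [];
const realEnoughMailer = { send: msg => outbox.push(msg) };
const total = checkout({ items: [{ price: 3 }, { price: 4 }] }, realEnoughMailer);
// State checks: the total is right, and a message ended up in the outbox.

// Mockist style: use a hand-rolled spy, then assert on the interaction itself.
const calls = [];
const spyMailer = { send: msg => calls.push(msg) };
checkout({ items: [{ price: 3 }, { price: 4 }] }, spyMailer);
// Interaction check: send() was called exactly once, with the expected text.
```

Neither is wrong; they answer different questions ("did we end up in the right place?" versus "did we talk to our collaborators correctly?"), which is exactly why mixing them casually muddies a test suite.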

Anything that gives your team better understanding of what you're doing, why and how will enhance the likelihood of successfully delivering your project and delighting, or at least satisfying, your customers. Even on a hobby project where your customer is…you yourself. Because, after all, your time is worth something to you, isn't it?

Tuesday, 28 August 2012

Even Typos Can Teach You Something: BE CAREFUL!

CoffeeScript is an interesting language, in both senses of that word. Whereas it tries to (and largely succeeds in) insulating the hapless coder from what Douglas Crockford calls "the bad parts" of JavaScript, it does not filter out all of the mind-blowing bits of JavaScript. Arrays are a good example.

Think back to your CS 101 class (or your K&R for you self-taught folks). What is an array?

In most languages, an array is a numbered sequence of elements, using (usually) sequential non-negative integers for identifying a specific element. Many statically-typed languages (such as C and Java) require that array elements each be of the same type; dynamic languages such as Python relax that to one degree or another.

But CoffeeScript and its underlying JavaScript (which I collectively call just Script) suck that up and then do some anatomically improbable things with/to it. Consider this bit of CoffeeScript interaction from a terminal (in Mac OS X 10.8, CoffeeScript version 1.2.0):

Jeffs-iMac:tmp jeffdickey$ coffee
coffee> foo = []                      # declare an empty array
[]
coffee> foo[4] = 'a'                  # assign offset 4; 0-3 have undefined values
'a'
coffee> foo[2.718281828459045] = 'e'  # Floating-point array index? Why not?
'e'
coffee> foo[-2] = 'b'                 # Negative numbers are just fine, too
'b'
coffee> foo['c'] = 27.4               # A string can be an index. It's still an array.
27.4
coffee> foo                           # What do we have now?
[ ,
  ,
  ,
  ,
  'a',
  '2.718281828459045': 'e',           # Non-sequential offsets look a lot like object fields
  '-2': 'b',
  c: 27.4 ]
coffee> foo.length
5
coffee>                               # Control-D to get out of the REPL
Jeffs-iMac:tmp jeffdickey$

So? What's the upshot?

Well, one thing many languages do to/for you is to throw an exception when your program gets too fast and loose with its indexing into an array. In Script, that doesn't happen; any scalar is a usable index, and if you pass something in that isn't a scalar (like an object), it'll have its toString() method called to generate an index value. And did I mention that this is all case-sensitive? Your typo opportunities are limited only by your imagination and by the accuracy of your fingers…
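That behaviour is easy to demonstrate in the underlying JavaScript (the same semantics CoffeeScript inherits); this sketch just replays the REPL session above in plain JS:

```javascript
// Any scalar index "works"; non-integer indices silently become string properties.
const foo = [];
foo[4] = 'a';                 // offsets 0-3 are holes; length becomes 5
foo[2.5] = 'e';               // stored under the string key "2.5"; length unchanged
foo[-2] = 'b';                // stored under the string key "-2"

// A non-scalar index is coerced via its toString() method.
const oddKey = { toString: function () { return 'surprise'; } };
foo[oddKey] = 42;             // equivalent to foo['surprise'] = 42

console.log(foo.length);      // 5: only the integer index affected length
```

No exception at any step; every "index" quietly became a property name, which is precisely why a typo'd index fails silently instead of loudly.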

In case it wasn't obvious from the preceding incredulous rant, I really wonder why this "feature" is in the languages; would it really have been that hard to say "I'm sorry, Dave; I'm afraid I can't do that. Perhaps you'd rather use an object hash instead?"

Thursday, 19 July 2012

An Immodest Proposal: Show Me the Code

I've been doing a lot of CoffeeScript work lately, along with Ruby with Rails and Sinatra. Especially with regard to my CoffeeScript, I've moved away from my recent Ruby-coloured "your code should be all the documentation you need" philosophy.

I've found all kinds of uses for Markdown, particularly including docco.coffee, written by the developer of CoffeeScript itself. Docco "is a quick-and-dirty, hundred-line-long, literate-programming-style documentation generator. It produces HTML that displays your comments alongside your code. Comments are passed through Markdown, and code is passed through Pygments syntax highlighting." It may not be the greatest thing since sliced bread, but the output does make complex yet deliberately-written code much clearer.

The downside to well-commented code, of course, is that it gets bulky. One of my larger CoffeeScript files has ~140 lines of actual code, engulfed in a source file that currently tops 330 lines. Ouch.

I was working on it just now, and an idea popped into my head: the "proposal" of the title.

Comments should be selectively elided or folded, in a fashion similar to the code folding feature offered by most modern editors.

I meander around between four different editors on my Mac (TextMate, Sublime Text 2, Komodo IDE and MacVim), and none of them appear to support such a feature out-of-the-box.

Does anybody know of a plugin for any of the above that does this?

Saturday, 14 July 2012

Getting Your Stuff Done, or Stuff Done To You

This is the response I wanted to leave to "MrChimei" on the spot-on YouTube video, "Steve Jobs Vs. Steve Ballmer". Since YouTube has such a tiny (but understandable) limit on comment size, a proper response would not fit. Therefore...


Let me put it this way. It doesn't matter whether you're speaking out of limited experience, or limited cognition, or what; your flippant attitude will not survive first contact with reality (to paraphrase von Moltke).

I'm a Windows developer who's been developing for Windows since Windows 1.0 was in early developer beta, on up to Windows 8. I had nearly ten years professional development experience on five platforms before I ever touched Windows. I had three stints at Microsoft back when that was cool, and sold most of my stock when it was still worth something.

I've also supported users of Windows and various operating systems, from groups of 3-5 small businesspeople on up to being comfortably high in the operational support pecking order in a Fortune 100 company. I've seen what helps and doesn't help intelligent non-geeks get their work done.

Both in that position, and in my own current work, I've observed and experienced order-of-magnitude-or-better differences in productivity, usability, reliability, supportability… all in Apple's favour. I've worked with and for people who became statistics junkies out of an emotional imperative to "prove" Windows better, in any way, than other systems. The next such individual I meet who succeeds, out of a sample of over 20 to date, will be the very first.

In 25 years, I have never experienced a Windows desktop machine that stayed up and fully functional for more than approximately 72 hours, *including* at Redmond, prior to a lightly-loaded Windows 7 system.

In the last 6 years of using Macs and clones half-time or better, I have never had a Mac fail to stay up and working for at least a week at a stretch. In the last five years, my notes show, I've had two occasions where a hard reset to the Mac I'm typing this on was necessary; both turned out to be hardware faults. Prior to Windows 7, any Windows PC that did not need to be hard-rebooted twice in a given fortnight was a rarity. Windows 7 stretched that out to 6 weeks, making it by far the most stable operating system Microsoft have shipped since Windows NT 3.51. (Which I will happily rave about at length to any who remember it.)

For many years, I too was a Windows bigot. The fact that Unix, then OS/2, then Mac OS had numerous benefits not available in Windows was completely beneath my attention threshold. The idea that (on average over a ten-year period) some 30% of my time seated at a Windows PC was devoted to something other than demonstrably useful or interesting activity was something that I, like the millions of others bombarded by Ziff-Davis and other Microsoft propaganda organs, took as the natural order of things.

Then I noticed that Mac users were having more fun. "Fine," I thought, "a toy should bring amusement above all." Then I noticed that they were getting more and better work done. "Well," I said to myself, "they're paying enough extra for it; they should get some return on their investment. I'm doing well enough as is."

And then, within the space of less than a year, all five of my Windows systems were damaged through outside attack. "Why," I asked. "I've kept my antivirus current. I've installed anti-spyware and a personal firewall in addition to the (consumer-grade) router and firewall connecting me to the Internet. I don't browse pr0n or known-dodgy sites. I apply all security patches as soon as they're released. Why am I going to lose this development contract for lack of usable systems?"

I discovered a nasty little secret: it's technically impossible to fully protect a Windows PC from attacks using tools that a reasonably-bright eight-year-old can master in a Saturday afternoon. People responsible for keeping Windows PCs running have known this for over a decade; it's why the more clueful ones talk about risk mitigation rather than prevention, with multi-layered recovery plans in place and tested rather than leaving all to chance. For as long as DSL and cable Internet connections have been available, it's taken less time to break into a new, "virgin" Windows PC than to fully patch and protect it against all currently-likely threats.

People used to think that using cocaine or smoking tobacco was healthy for you, too.

What I appreciate most about the Mac is that, no matter what, I can sit down in front of one and, in a minute or less, be doing useful, interesting work. I don't have the instability of Windows. I don't have the sense that I'm using something designed for a completely different environment, as Windows too closely resembles the pre-network (let alone pre-Internet) use of isolated personal computers. Above all, I appreciate the consistency and usability that let me almost forget about the tools I'm using and focus instead on what I'm trying to accomplish, whether with my own data or with data out in the world somewhere.

One system treats its users as customers, whose time, efficiency and comfort are important and who know they have choices if they become dissatisfied. The other platform treats its users as inmates, who aren't going to leave no matter what... and if that's true, then quality sensibly takes a back seat to profitability.

Which would you recommend to your best friend? Or even to a respected enemy?

Sunday, 6 May 2012

Rules can be broken, but there are Consequences. Beware.

If you're in an agile shop (of whatever persuasion), you're quite familiar with a basic, easily justifiable rule:

No line of production code shall be created, modified or deleted in the absence of a failing test case, for which the change is required to make the test case pass.

Various people add extra conditionals to the dictum (my personal favourite is "…to make the test case pass with the minimal, most domain-consistent code changes currently practicable") but, if your shop is (striving towards being) agile, it's very hard to imagine that you're not honouring that basic dictum. Usually in the breach at first, but everybody starts in perfect ignorance, yes?
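
As a minimal illustration of the dictum (the names here are hypothetical, and Minitest stands in for whatever framework your shop prefers): the test is written first and fails; the production code exists only because making that test pass required it.

```ruby
require 'minitest/autorun'

# Written second, and only because the test below failed without it;
# Invoice#total is the minimal change that makes the test pass.
class Invoice
  def initialize(*amounts)
    @amounts = amounts
  end

  def total
    @amounts.sum
  end
end

# Written first; it fails (with a NameError) until Invoice exists,
# and again until #total returns the right sum.
class InvoiceTest < Minitest::Test
  def test_total_sums_line_amounts
    assert_equal 60, Invoice.new(10, 20, 30).total
  end
end
```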

I (very) recently was working on a bit of code that, for nearly its entire existence over several major iterations, had 100% test coverage (technically, C0 or line coverage) and no known defects in implemented code. It then underwent a short (less than one man-week) burst of "rush" coding aimed at demonstrating a new feature, without the supporting tests having been done beforehand. It then was to be used as the basis for implementing a related set of new features, that would affect and be affected by the state of several software components.

That induced some serious second-guessing. Do we continue mad-hatter hacking, trusting experience and heroic effort to somehow save the day? Do we go back and backfill test coverage, to prove that we understand exactly what we're dealing with and that it works as intended before jumping off into the new features (with or without proper specs/tests up front)? Or do we try to take a middle route, marking the missing test coverage as tech debt that will have to be paid off sometime in The Glorious Future To Come™? The most masochistic of cultists (or perhaps the most serenely confident of infinite schedule and other resources) would pick the first; the "agile" cargo-cultist with an irate manager breathing fire down his neck the second; but the third is the only pragmatic hope for a way forward… as long as development has and trusts in assurances that the debt will be paid in full immediately after the current project-delivery arc (at which time increased revenue should be coming in to pay for the developer time).

The moral of the story is well-known, and has been summed up as "Murphy" (of Murphy's Law fame) "always gets his cut", "payback's a b*tch" and other, more "colourful" metaphors. I prefer the dictum that started this post, perhaps alternately summed up as

Don't point firearms at bits of anatomy that you (or their owner) would mind losing. And, especially, never, ever do it more than once.

Because while even the most dilettante of managers is likely to have heard Fred Brooks' famous "adding people to a late project only makes it later", or his rephrasing as "nine women cannot have a baby in one month", too few know of Steve McConnell's observation (from 1995!) that aggressive schedules are at least equally delay-inducing. If your technical people say that, with the scope currently defined, something will take N man-weeks, pushing for N/4 is an absolutely fantastic way to turn delivering in N*4 into a major challenge.

Remember, "close" only counts in horseshoes and hand grenades and, even then, only if it's close enough to have the intended effect. Software, like the computers it runs on, is a binary affair; either it works or it doesn't.

Wednesday, 18 April 2012

It's Not Amtrak: Enjoy Riding the Rails

Historical note: Most of this post was written in the first week of October, 2011.

I've spent much of the last three weeks wrapping my head around Rails and Ruby after several years of mostly PHP with a couple of Groovy/Java side trips. It's been interesting, and a lot of fun so far.

I'm new to Ruby on Rails, but not to Ruby itself — though I took a long breather when Rails first became trendy. It was not at all clear to me at the time whether the (then-)popular trend among both the media and technology-involved people who Should Know Better to treat Ruby as essentially a synonym for "Ruby on Rails" was going to kill the language for other uses. It was quite clear to me that a huge chunk of early Rails culture was built around a cult-of-personality dedicated to Rails' primary creator, David Heinemeier Hansson. I try very hard to avoid cults; they almost invariably either spectacularly self-destruct (see Jonestown, Guyana) or get co-opted to serve unrelated or even opposing purposes (see State religions).

In about the last year or two, Rails has visibly grown up, both in capability and in outlook. There are other people who've stepped up and delivered great, widely-used code (does anybody know what happened to _why?), and Rails is now used in lots of systems both interesting and serious. I'd been thinking that maybe I ought to start getting myself up to scratch, particularly as I've been increasingly disenchanted with both the technical and sociopolitical aspects of PHP. The feelings of excitement, progress and everybody-pitch-in community that sustained PHP through its phenomenal growth in the last decade are, at best, flickering.

The final kick in the shorts was an interview cycle for new work with a great group that does, among other things, quite a bit of Rails work. Thus motivated, I grabbed copies of Agile Web Development with Rails, Fourth Edition and The RSpec Book and started going through them.

And promptly went out in search of better books. As of April, 2012, the ones I refer to most often are:

  • Russ Olsen, Eloquent Ruby. Addison-Wesley Professional, 2011. Nearly 400 excellent pages that will help your Ruby code be more effective, expressive, pleasant to work with and easier to maintain; pick any five.
  • Obie Fernandez, The Rails 3 Way, 2nd Edition. Addison-Wesley Professional, 2011. An encyclopedic yet guided introduction and continually-usable reference to Rails 3. Does not cover everything in 3.1/3.2, but will leave you comfortably able to adapt to the new changes.

One of the things I appreciate about Rails is the size and openness of the communities that have grown up around it. Everywhere I look, I find new code and tools to do useful things. I'm still on the upward slope of the "each Neat New Thing points to n other Neat New Things which point to…" curve, and I expect to be for quite some time. There are useful, usable if not always perfectly ideal tools to do things, and those tend to get adopted by (large chunks of) the community as de facto working standards. The whole "gem" ecosystem is chalk-and-Friday different (and dramatically better) than its PHP analogue; it's almost as though the "gems" folks looked at nearly everything PEAR gets wrong and made the conscious, deliberate decision to do the opposite. From the perspective of a newbie who's had painful experience with other systems, it Just (Seems Like It) Works.

Now, I'm sure that the rose-coloured spectacles will eventually pop out of their frames and get ground into dust. I'll find things I'm doing or using that just seem so bone-headed that I'll think, "I'm obviously well outside the ideal target audience for this". I seriously doubt that I'd think "This part of the system is so badly broken that it can't possibly be fixed"; again, this is a relatively new stack of tools with large, creative communities built around them. There will be alternatives to the particular whatever-it-is that's causing me pain, and there is a virtual certainty that I will be able to pull the offending piece out of my stack, put the new thing in, and continue working forward more effectively. I'm also convinced that, in the unlikely event that I'm such a black-swan corner case that I get motivated to write my own replacement, that it'll get visibility and feedback from others (if only to say "Hey, you just duplicated quteobscurename; go have a look at www.github.com/JoePalooka/quteobscurename — but hey, nicely done"). That would be more useful — and far more encouraging — than the usual PEAR mishmash of components that don't even all really work with the current language, written by people who believe that terse, sloppy, untested code is self-documenting.


Addition: April, 2012

And so it has come to pass; over the course of developing what will very soon be a commercial Web-based service, at least three initially-chosen components have been replaced by other, better and/or more suitable components. In all cases, the time involved in flailing around and deciding "this really has to go now" was several times greater than the hour or two it took to actually replace the old code with the new. That's been one of the big wins promised by object-oriented software development for roughly half a century. (Depending on who you ask, the first object-oriented language was either ALGOL X, in 1965, or Simula 67, in 1967. C++? Well over a decade later, and that's still older than most of its coders today.)

'Least Knowledge' May Be More than You Think

I was chatting on IRC the other day and got into a discussion that stuck in the back of my mind until now. This guy was a fresh convert to the ideas of SOLID and to the Law of Demeter (or Principle of Least Knowledge). Now, these are principles I hold at least as strongly as the average experienced developer, and we eventually agreed to disagree over the propriety of code such as a line I just wrote (which in fact prompted this post).

Consider the Ruby code:

    data[:original_content] = article.content.lines.first.rstrip

This says:

  1. "take whatever your article is;
  2. get whatever its content attribute is or method returns;
  3. (assuming that is a String or something that can be treated as one,) get its lines enumeration;
  4. get the first entry in that enumeration;
  5. (again, assuming it's a String or workalike,) strip any trailing whitespace from that string; and
  6. stuff it into the collection data, indexed by the symbol :original_content."
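
The whole chain can be sketched as a self-contained snippet (Article here is a hypothetical stand-in for the real model, which this post doesn't show):

```ruby
# Hypothetical stand-in for the real Article model; all that matters
# is that #content returns a String (or workalike).
Article = Struct.new(:content)

article = Article.new("First line of the piece.   \nSecond line.\n")

data = {}
data[:original_content] = article.content.lines.first.rstrip
# data[:original_content] is now "First line of the piece."
```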

It's not hard to argue that this is a bad code smell, being a long list of assumed return types and so on, except for one thing: in my view, the LoD does not cover sequencing a series of standard library calls.

Look at the code again. The bit that references my code is the fragment

  data[:original_content] = article.content

That fragment is clean; article is an object and content is an attribute of or method on that object. This code assumes that the value of article.content is a standard String. As a standard class, it's virtually guaranteed not to change in backwards-incompatible ways over the useful life of the code. What then? Standard library calls!

  1. String#lines returns a standard Enumerator instance;
  2. Enumerable#first (via the earlier Enumerator) returns a standard String instance again; and finally
  3. String#rstrip cleans off any trailing whitespace.

Nowhere does this chain use any internal APIs that could change. Breaking that one line apart into five would produce repetitive, non-semantic, anti-Ruby-best-practices code of the style pandemic on large Java projects. We're in Ruby (and Rails) for a reason: part of that reason is that the language idioms encourage an almost literary expressiveness that makes more explicit the reality that a program is a conversation between the humans developing and maintaining it, which just happens to be executable by a computer. Literate, semantic programming is like behaviour-driven development; once you've wrapped your head around it and see how much better a programmer it makes you, you really don't want to go back to BDUF jousting in Java or PHP.

On further review...

Now, if you argue that the LoD is an aid in reducing coupling between classes, which reduces per-statement complexity, I can see that. But, in the example above, we're back to the fact that there is only one dot before you start jamming together standard library calls. Had I instead written:

    content = article.content
    lines = content.lines
    first_line = lines.first
    data[:original_content] = first_line.rstrip

then I'd argue that I've introduced three unnecessary local variables when only one (the second) could foreseeably change. If I later modified the return value from article.content to something other than a Stringlike object, then probably the assignment to lines would have to change. But only that one line — the same level of change that would be required in my current code.

Comments?

Tuesday, 7 February 2012

If At First You Don't Succeed, Try, Try Again

In a very real sense, that's a big part of what we do in test-driven or behaviour-driven development, isn't it?

It doesn't only apply to classes and projects. Teams work that way, too. You can have someone who, on paper, seems just great to help you out with the cat-herding problem that's a large part of any significant project. The interview is fantastic; he's 110% buzzword-compliant with what (you think) you need The New Hand to be able to pick up and run with. There might be a few minor quirks or niggles; a voice in the back of your mind stands up and clears its throat obnoxiously and, when that doesn't get your attention, starts screaming loudly enough that surrounding folk reach for ear protection. When that doesn't get your attention either — because, after all, you're talking to such a fine candidate — the voice might well be forgiven for giving up in disgust and seeking the farthest corner of your brain to sulk in. And give a good kick to, every now and again.

And then Something happens, usually a string of increasingly serious, increasingly rapidly-occurring Somethings, and you realise that this really isn't working out so well after all. The candidate-turned-colleague may well be a very nice person, but you just can't communicate effectively. Or the "incremental" test that she writes requires changes to a dozen different classes that, in turn, break 15 existing tests. If that doesn't make you go 'hmmm...', then maybe your other team members are concerned about his professionalism: consistently showing up late to meetings; giving a series of increasingly outlandish reasons for not coming into the office and working from home despite having previously promised not to, and so on. Seemingly before you know it, the project is at a virtual standstill, team dysfunction has reached soap-operatic heights, and at least one vital member of the team starts running around and shouting "off with her head!" All of which does absolutely wonderful things to your velocity and project schedule. Or maybe not so wonderful.

And the funny thing is that, at about that time, you remember that voice in the back of your mind that was trying to point out that something was rotten in this particular virtual state of Denmark from the beginning; you were just too optimistic, and likely too desperately stubborn, to recognise it at the time. If you're going to go about hiring someone, have a good rapport with your instincts. Know when to walk away; know when to run.

Long story short, we are looking for an awesome new team member. As you might have gathered from the last few blog posts, we're a Ruby on Rails shop, so you'd imagine that heavy Rails experience would be tops on our list, right? Not so fast, friend. I/we subscribe to the philosophy that source code (and the accompanying artefacts) is part of the rich communication that must continuously thrive between team members, past, present and future. These artefacts are unusual in that they are in fact executable (directly or indirectly) by a computer, but they're primarily a means of communication. As such, ability to communicate effectively in English trumps wizard-level Ruby skills. If we can't communicate effectively, it doesn't matter how good you are, does it?

What we're looking for at the moment can be summarised thusly:

1) Demonstrated professionalism. This does not mean "a long string of successful professional engagements", though obviously that doesn't hurt, especially if they individually lasted more than the sensible minimum. Do you generally keep your word? (Shtuff happens to everybody; nobody's perfect, but overall trends do matter.) Are colleagues secure in the knowledge that you routinely exceed expectations, or are they continuously making contingency plans?

2) Fluent, literate standard English at a level capable of understanding and conveying nuance and detail effectively and efficiently. This would be first on the list, were it not for the series of unfortunate events that led to this post being written at all. It is an absolutely necessary skill for each member of a truly successful team, and a skill that is becoming depressingly rare.

3) Customer-facing development experience, ideally in a high-availability, high-reliability environment. We're building a service that, if we're successful, people are going to "just assume" Works whenever they want it to, like flicking on the lights when you walk into a room at night. Those of us who've been around for a while remember the growing pains that many popular services have gone through; we have no plans whatever to implement a "fail whale".

4) Ruby exposure, with Rails 3 a strong preference. You can fill in along the edges as we go along, but if we can't have an informed discussion of best practices and how to apply them to our project, you're (and we're) hurting. If you're unclear on the concepts of RSpec or REST or Haml or _why, you're going to have to work really, really hard to convince us that we're not all wasting time we can't afford to. And, finally,

5) Deep OO experience in at least two different languages. If you can explain and contrast how you've used different systems' approaches to design by contract, encapsulation, metaprogramming, and so on, and have good ideas on how we can apply that to making our code even more spectacular, rock on. "Why isn't this #1 or #2 on your list," you might ask. If you can convince me why it should be, and why your skills are so obviously better than anybody else I'm likely to have walk in the virtual door... then you've already taken care of the others, anyway.

We're a startup, so we're not made of money. But since we're a startup, money isn't all we have to offer the right person. Know who that might be? Drop me a line at jeff.dickey at theprolog dot NOSPAM com. Those who leave LinkedIn-style "please hire me" comments to this post will be flamed, ridiculed, and disqualified.

Tuesday, 27 December 2011

Hallowe'en II: Boxing Day

Here's a scary/stupid trick to pull if you, like me, are a finalist for Biggest Email Pack Rat On The Internet.

Open Apple Mail's Preferences, select the "General" tab, and change Dock unread count from Inbox Only to All Mailboxes. I double-dare you.

Or, it might just be too depressing. I went from a mere(!) 355 unread messages to…

56,232

Let's see, at an average of just under 2 minutes each (clocked over a recent week), cleaning that out would take me some 78 days, 2 hours and 24 minutes, during which some 31,000 new emails (not including auto-filtered spam at ~85% of total incoming email) would arrive. Assuming I could stay awake that long. Caffeine is The Elixir of Life™, but… "filling a baby's eyedropper from a raging waterfall" does not even begin to do the image justice. Maybe the old Ragú ad slogan, "It's In There".
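
(For anyone checking my work, the back-of-the-envelope arithmetic behind that figure:)

```ruby
minutes = 56_232 * 2                  # ~2 minutes per message
days, remainder = minutes.divmod(24 * 60)
hours, mins = remainder.divmod(60)
puts "#{days} days, #{hours} hours, #{mins} minutes"
# 78 days, 2 hours, 24 minutes
```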

There has to be a better way.

Wednesday, 21 December 2011

Cover Yourself: Toolchains Are Agile, Too

As people who know me professionally and/or read my blog know well, I have been a (raucously) loud evangelist for test-first development (TDD, BDD, Scrum, whatever your flavour) for years now. If I write even an exploratory bit of code and don't have tests in place first, I get very uncomfortable. As complexity increases, without tests (preferably automated, repeatable tests), I argue that I simply can't know what's really going on, because I can't prove it.

A major corollary to this is test coverage reporting. If I can't see what's been tested and what hasn't, then in a very real sense nothing has been, since I can't document/prove what has been and what hasn't. And the better (more productive) teams I've worked in have established, and regularly hit, coverage targets better than 95%, with 100% being a common (and commonly attained) goal. (Edit: Note that this is for C0 and C1 test coverage; tools that cover C3 and C4 are rare to nonexistent in most languages, such as Ruby.)

As you may also know, I've been getting (back) into Ruby development, using Rails 3 on Ruby 1.9. Ruby's de facto standard coverage tool for many years was rcov, which generally worked well. However, Rob Sanheim has stated that "RCov does not, and will not support C based Ruby 1.9.x implementations due to significant changes between 1.8 and 1.9". He recommends either SimpleCov or CoverMe. Ripping out RCov and replacing it with SimpleCov on a test project took all of five minutes and left me with attractive, functional, (so far apparently) quite accurate reports.
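
For anyone making the same switch: the setup really is that small. Something along these lines at the very top of spec_helper.rb is all SimpleCov needs (the exact filters are a matter of taste; these are just the ones I'd start with):

```ruby
# spec/spec_helper.rb — SimpleCov must start before any application
# code is required, or already-loaded files won't be instrumented.
require 'simplecov'
SimpleCov.start 'rails' do
  add_filter '/vendor/'
  add_filter '/spec/'
end
```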

One of the basic principles of agile development is that the team must actively embrace constructive change as their project evolves. It's often easy for harried, hurried people to forget that that applies to their tools as much as it does to what they produce using those tools.

Just a thought as my evening winds down.

Tuesday, 20 December 2011

Well, that was fast.

Like many of you, I expect, I take a good look around at tools like editors when my needs change dramatically. A new system, or a language or app type I haven't worked with in a while, and I'll go out and see what the community is using (or at least buzzing about), narrow that down to a list of 2 or 3 to try, and start trying them out. Usually, I'll try one for a few days and then switch to the next one for a few days, until I've got lists of things I like and don't like, and make a decision. The last time I took a serious look at this was a couple of years ago, when I shelled out for Komodo IDE (which I still enthusiastically recommend to PHP developers in particular, by the way).

Recently, I've started working again intensely with Ruby, on a Mac instead of Linux as years earlier, and some quick research looking around mailing lists, group archives, and such, strongly suggested that TextMate was The Gold Standard.

But I'd never really done a head-to-head comparison, and I knew that BBEdit and RubyMine had passionate evangelists in the community.

So I downloaded the 30-day eval of RubyMine and fired it up.

I tried to open a Git project that I've worked with extensively in TextMate and from the Git command line. One immediate issue: copy-and-paste between the Mac pasteboard and the edit fields in the RubyMine dialog did not work. I then opened the project directory using RubyMine's "open a directory" feature; it found Git all right, but gave half a dozen red flags and refused to work further.

"What could cause such a reputed package to crap out so completely," I asked myself as I started browsing around inside the app directory. The answer became quickly obvious.

"Oh, it's a Java app." No native UI at all — though to be honest, you have to look closely at the UI widgets to tell.

It took less time to install the app, have it flame out, and uninstall it than it did to write this post.

The sooner we either a) rid the desktop world of Java or b) get the Java community to adopt usable desktop interoperability, the better. I'm years past the point where I care which option is selected, but one had better be.

Monday, 5 December 2011

YAHA! (Yet Another 'Hey, Apple…') Hey, SingTel, you, too.

Hey, Apple…

I like what works, and what helps me work better/faster/more enjoyably; the more of these boxes that get ticked, the better. After all, that's why I'm sitting in front of two iMacs with a MacBook Pro and iPad close to hand.

However…

The new iMacs are a wonder. A 27" display, 2560 x 1440 resolution; absolutely gorgeous. I can have two full-page views plus a Terminal open on the same screen. Two steps forward for usability.

Now for one step back. The mouse cursor appears to be the same size as on my 15" MacBook Pro, even though I'm now looking at nearly three times as many pixels and, more importantly, screen area. How about taking a page from somebody else's playbook (for a change) and have some key sequence that would do a "radar-style" visual locator for the mouse cursor?

The Middle, or The Muddle, or Both

Seriously, I'm in love all over again with this new system. And if I weren't in one of the last Soviet-class customer-last economies, I'd be able to make even better use of the new tool.

With new hardware and software come new opportunities for learning. Anyone who's ever learned or discovered or done something new, or even new to him- or herself, has experimented; has pushed boundaries. Sometimes, they push back the first few times. "You'll know the pioneers because they're the ones with all the arrows in their backs" might be an Americanism, but its meaning is perfectly clear to anyone who's ever challenged Donald Rumsfeld's "unknown unknowns"; it's the things you don't know that you don't know that offer the greatest opportunities for learning — if you survive them.

Apple have always had a symbiotic relationship with leading-edge technical norms. The latest example, in OS X Lion, is that for the very first time since the Mac shipped back in 1984, no operating-system media (floppies, CDs, DVDs) come with the system. If you want to nuke from orbit and start over fresh, you use something called Lion Recovery. It's a sweet, eminently sensible idea: most Mac customers have access to fast enough Internet connections that downloading most of what you need to reinstall is less of a hassle than rooting around trying to find some discs that you just know you put in a drawer. Somewhere. Maybe it was in your home office. Maybe in the away office. Maybe they're in one of those bulging boxes marked "Computer Stuff" up in the attic. It could easily take you longer to find physical media than to download four gig or so of bits on a reasonably-modern, properly-managed and -provisioned connection.

Hey, SingTel…

And then there's Singapore.

Advertising here is (in)famous for always having the language "Terms and Conditions Apply", which in practice seems to mean "We don't have to do a single thing we said we would once you give us the money if we can think of a reason not to, or if we do deign to provide the promised product or service, we expressly reserve the right to make the experience as unpleasant as our most innovative Government Scholars™ can conjure up".

Case in point: said Lion Recovery. The way it appears to work is that when your Mac phones home to Cupertino, it then fires up a for-purpose Web server that listens on the usual ports for authenticated incoming connections (from Apple). Cupertino or its CDN upload the data to your Mac, which uses it to reinitialise your installed system. This only works, of course, if your Mac is connected to a network that allows you to do such things, without being blocked by either policy or incompetence. Ah, those terms and conditions.

The Singaporean Interwebs are full of woe documenting Apple customers' fate as victims of SingTel policy and/or incompetence, with Lion Recovery being the latest poster child for the meme. SingTel and StarHub, two of the three major telecoms companies here (and the two popularly thought of as more closely tied to the PAP Government) have intermittent-or-worse problems, while the third (M1) is apparently less hassle. People speak of tethering their Macs to their M1 phones and spending over five hours (in at least one case) successfully performing Lion Recovery. This would be greatly preferable to not being able to recover at all. Unfortunately, the M1 signal in the HDB tenement-with-pretension that I'm living in is too weak and unreliable to support that.

So I'm back to waiting, impatiently, for SingTel to get their cranial appendage out of whatever orifice it's presently stuck in, and work with Apple on fixing the problem. Apparently Apple have run up against the proverbial brick wall on multiple occasions in trying to help solve a problem over which they have zero control.

Hey, SingTel…

Thursday, 24 November 2011

ANFSD: Fascism Bites Everyone In The Pocket (Among Other Places)

Fascism should more properly be called corporatism, for it represents the fusion of State and corporate power.

— B. Mussolini

If you've lived in the USSR, former Soviet republics, or much of south Asia (including Singapore), you're quite familiar with the concept of "exclusive distributors", which the Free World seems to have thrown on the ash-heap of history at roughly the same time as leaded petrol. For those who've never had the displeasure, it works just like it sounds: one appointed business entity is the only way for subjects of a particular country to lawfully access the products or services of a particular (foreign) company. In places like Singapore which follow a State-capitalism model, that exclusive agent customarily has strong ties to either the Government or to entities acting as agents of the Government (sovereign-wealth funds, public officials who also run nominally-private-sector companies, and so on). This rarely, if ever, is a boon for the consumer.

Case in point: today, I wanted to buy a good fan, a Vornado Compact 530, comparable to the Vornado fans I owned when in the States.

Naturally, I can't buy a fan from the Vornado Website, though it gives me plenty of information about the thing. A little search engine-fu tells me that Home-Fix are the exclusive distributor for Vornado in Singapore.

Naturally, I can't order anything, or even get any product information, from Home-Fix's site. I could, however, get the phone number(!) for their nearest store, and called to enquire about availability and pricing.

The conversation went something like this:

Me: Can you tell me if you have Vornado fans?
Clerk: Yes, we do. Which one are you looking for?
Me: The "Compact 530".
Clerk: Yes, we have that one, in white and black.
Me: How much is it?
Clerk: S$170.
Me: $170?!? So your rate on the US dollar is about three Singapore dollars, then? The list price on the Vornado Website is US $49.99!
Clerk: (as to a very small, very slow child) Oh, that's the online price. It's bound to be cheaper.
Me: Well, I've done some checking on various US, European and Australian stores; the walk-in retail price is right about the same.
Clerk: Well, our price is S$170.
Me: Well, thank you for your time.

Not to be terribly unfair to either Home-Fix or the clerk; that's the way the system operates here, and any company that didn't jack their prices up to whatever the marks will pay isn't doing things The Singapore Way. It's not as though it's actually people's own money; they're just holding it until those who pull the levers of State decide they want it back².

So, ragging at Home-Fix or any of the many, many other businesses whose prices have no apparent correlation to corresponding prices elsewhere won't accomplish anything. If, as was let slip during the Singapore Presidential "election" this year, the Government's sovereign-wealth funds and people connected to High Places really do control 2/3 or more of the domestic Singapore economy, then complaining about any individual companies is rather like the man who's been attacked by a chainsaw-wielding madman who then worries about all the blood on his shirt. Fix the real problems, and the small ones will right themselves.

Incidentally, this also illustrates why I support the Occupy movement. If Americans want to see what another decade or two of economic and political polarisation between the top 400 families and the rest of the country will look like, Singapore is a good first-order approximation. And some sixty percent of Singaporeans apparently either couldn't be bothered to think differently, or were too afraid to.

Sigh. I still need to find a reliable, efficient fan.

Footnotes:

1. According to the current XE.com conversion, 170 Singapore dollars (S$170) is approximately US$130.19, or some 260 percent of the list price. In a free society, that would be called "gouging"; here in Singapore it's called "buying a foreign product". Remember, no foreign companies really operate here. When you walk into a McDonald's or Starbucks or Citibank in Singapore, you're walking into a local, almost invariably Government-linked, franchise or local representative. The quality and experience difference can be, charitably, stark. (Return)

2. Known in the US as the "thug defense", after a Los Angeles mugger who used that line of "reasoning" as the basis for an attempted pro se defence in court. He was, of course, unsuccessful. (Return)

Know when to walk away; know when to run

This started out as a reply to a comment on the LinkedPHPers group on LinkedIn; once I started writing, of course, it quickly grew beyond what was appropriate as a conversationally-inline comment. So I brought it over here. It's something I've been thinking about for a couple of weeks now, so let me get this anvil off my chest.


To R Matthew Songer[1]: I'd advise adding Ruby on Rails to that list of alternatives. I've been writing PHP for fun and profit since PHP 4 was still a future hope wrapped in hyperbole. Now that we've finally got a (mostly-)decent language in 5.3 for some serious software development, I'm burning out. Part of that burnout is due to what I see in the PHP community, part to geography, and part to other factors that approach "fit for use".

I've recently taken a post as Chief Engineer at a little startup nobody's heard of yet. Our prototype was done in PHP; it got the idea across well enough for our initial investors and customers to bang on our door. So the CEO and I thought, great, we'll find another senior2 PHP guy, write a real app (the prototype doesn't even have testable seams), and we're off.

If I were looking to hire a battalion of deputy junior assistant coders whose main qualification as PHP devs was being able to spell it whilst taking endless trips down the waterfall, I could do that. I was aiming higher: I wanted someone who knew his way around current best practices; who realises that being experienced is no excuse to stop learning aggressively; who understands how to build web apps that can evolve and scale; who looks for the best feasible way to do something instead of just the first one that pops into mind. I especially needed to find someone who was as BDD/TDD-infected as I am, and recognised the value of building tools and automation early (but incrementally) so that we'd be a lean, agile development machine when the rubber really needed to hit the road, a (very) few short weeks from now.

In the States, or in much of Europe, or basically anyplace outside Singapore, I'd probably have a decent chance of finding such a person. Here, not so much. I was especially concerned by the way PHP usage seems to be devolving in these parts. There are lots of folks who think that it's a good idea to reinvent Every. Single. Wheel. themselves, without really knowing all that much about what has gone before. Frameworks are a good example; if you aren't intensely aware of how good software, sites and Web apps get built; if you don't even bother with encapsulation or MVC or SOLID or on and on, how can you expect anybody who does know his Craft from a hole in the ground to take you or your "tool" seriously? It might wow the management rubes who proudly admit they don't know squat — but in the world as it is, or at least as it's going to exist in the very near future, those will (continue to) be a vanishing breed even here. Even in a Second World Potemkin village of a city-state where what you are and who you know is far more important in most situations than what you've done and what you know, you're still running a risk that somebody is going to come along who actually gets up in the morning and applies herself or himself to writing better stuff than they wrote yesterday. And, eventually, they're going to wipe the floor with you — either you as an individual or you as a society that remains stubbornly top-down-at-any-cost.[3]

In contrast, my experience talking with Ruby and Rails people here, even people with absolute minimal experience in Rails, is chalk-and-Friday different. For one illustrative example: one book that more than half the yes-I've-done-a-bit-of-Rails folk I've talked to here have read is Russ Olsen's Eloquent Ruby (ISBN 978-0-321-58410-6). Your average PHP dev — or C++ dev or Java dev — is fighting the dragons (schedule, complexity, communication, etc.) far too much to get out of fire-drill-coding mode and into writing. If you believe, as I do, that all truly competent software is written as a basis for and means of conversation between humans, and incidentally to be executed by a computer, then you know what I'm driving at here. Knuth's dictum that programming is a creative, literary act is too easily lost when you're just throwing yourself against the wall every day trying to see which of you will break first. (It's very rarely the wall.)

If you write software that can be read and understood, and intelligently expanded on or borrowed from, your whole view of the omniverse changes. Your stress goes down, your written works' quality goes up, and you enjoy what you do a lot more. Automating away the repeated, detailed work and having a sensible process so you don't give yourself enough rope to shoot yourself in the foot regularly (with apologies to Alan Holub) is the most reliable way yet found to get your project schedule under your control instead of vice versa.

All this does, however, have one pre-supposition, about which I have received numerous complaints here locally: you and your team must be fully literate and fluent in a shared human language[4]. The vast majority of software to date seems to be associated with four such languages: Business Standard English, Russian, Japanese and Chinese. If your team shares a different language at a (near-)native level, with one or more of you having similar skills in the earlier four, you should be able to make do for yourself rather nicely. Having others build on your work, if desired, is going to be a bit more problematic. Poor communication has been at least a strong contributing cause, if not the root cause, of every failed project I have seen in my career. If you can't speak the customer's language effectively enough, with nuance, subtlety and clarity as needed, then your chances of project success are somewhat worse than my chances of winning the next six consecutive Toto jackpots — and I don't plan on buying any tickets.

Footnotes:

1. "I am sitting here tossing the options around for a business application, browser based, and trying to decide...PHP, Java, Python, .NET?" (Return)

2. Why insist on a senior guy when they're so rare here (at least the PHP variety)? Because I figure that with a senior dev coming on a month from now, we'd spend roughly half the time between now and our first drop-deadline writing magnificent code, and the other half building infrastructure and process to make building, testing and deploying the magnificent code we write straightforward enough not to distract the team from the creative acts of development. There's not enough time to wipe anybody's backside. (Return)

3. The phrase "at any cost" always reminds me of a company I contracted to back in the late '80s. My boss there had a (large) sign above his desk, "We will pay any price to cut costs." The company eventually paid the ultimate price — bankruptcy. (Return)

4. I'm describing what used to be called "college- or university-level language skills", but as any university teacher will readily tell you, the skills exhibited by students have dropped precipitously and measurably in the last three decades or so. (Return)

Thursday, 10 November 2011

ANFSD: GnuPG Semi-Pro Tip

After you use GNU Privacy Guard, or really any public-key encryption system, for a while, you'll probably have it set up for more than one of your email addresses. It's tempting to have the same pass-phrase for all your IDs.

Don't.

For instance, I use long-ish pass-phrases that are similar enough to remember easily but different enough that a dictionary attack is highly unlikely to be successful. That also protects me from doing something silly/confusing/potentially dangerous like thinking I'm sending from one email address when my email package actually defaults to another. Since the pass-phrases differ, you can't sign a message sent with Account B with the phrase from Account A (that you thought you were using but you were in too much of a hurry to pay attention to the 'From' line).

DRY May Not Be Wet, But It Sure Is Cool

As any developer who values his time, sanity, or amicable relations with teammates who have to maintain his code knows, one of the cardinal rules of modern programming is "Don't Repeat Yourself", or DRY. The point, obviously, is to make reading code easier. It should therefore be applied except where the lengths you go to in order to avoid repeating yourself make your code harder for someone else to read. The idioms of some languages help with this more than those of others.

I was recently browsing through a (rather poor) Ruby programming book when this hit me between the eyes. Consider this ERB example from the book.
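The book's example is the stock Rails scaffold _form partial; reconstructed roughly from memory and from the Haml below (so don't hold the book to every character), it looks like this:

```erb
<%= form_for @product do |f| %>
  <% if @product.errors.any? %>
    <div class="error_explanation">
      <h2><%= pluralize(@product.errors.count, "error") %> prohibited this product from being saved:</h2>
      <ul>
        <% @product.errors.full_messages.each do |msg| %>
          <li><%= msg %></li>
        <% end %>
      </ul>
    </div>
  <% end %>

  <div class="field">
    <%= f.label :title %><br />
    <%= f.text_field :title %>
  </div>
  <div class="field">
    <%= f.label :description %><br />
    <%= f.text_area :description %>
  </div>
  <div class="field">
    <%= f.label :image_url %><br />
    <%= f.text_field :image_url %>
  </div>
  <div class="field">
    <%= f.label :price %><br />
    <%= f.text_field :price %>
  </div>
  <div class="actions">
    <%= f.submit %>
  </div>
<% end %>
```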

Pretty ugly, yes? Part of that ugliness is intrinsic to ERB, which is why people are leaving it in droves.

Compare this exact translation into Haml:

= form_for @product do |f|
  - if @product.errors.any?
    .error_explanation
      %h2= "#{pluralize(@product.errors.count, "error")} prohibited this product from being saved:"

      %ul
        - @product.errors.full_messages.each do |msg|
          %li= msg

  .field
    = f.label :title
    %br/
    = f.text_field :title

  .field
    = f.label :description
    %br/
    = f.text_area :description

  .field
    = f.label :image_url
    %br/
    = f.text_field :image_url

  .field
    = f.label :price
    %br/
    = f.text_field :price

  .actions
    = f.submit

A bit easier to understand, now that we've gotten most of the clutter out of the way, yes? Now, the repeating field definitions stick out like a sore thumb; they're almost the same, but not exactly. We've got three text_fields and one text_area, so a simple loop that just plugs in values won't quite cut it, will it? How about this:

= form_for @product do |f|
  - if @product.errors.any?
    .error_explanation
      %h2= "#{pluralize(@product.errors.count, "error")} prohibited this product from being saved:"

      %ul
        - @product.errors.full_messages.each do |msg|
          %li= msg

  - [:title, :description, :image_url, :price].each do |name|
    .field
      = eval("f.label :#{name}")
      %br/
      = eval("f.text_field :#{name}") unless name == :description
      = eval("f.text_area  :#{name}") if     name == :description

  .actions
    = f.submit

If you're coming to Ruby from a PHP background, you've been conditioned not to use eval; there's all sorts of nastiness that could lurk there, especially when code is sloppy. (And PHP code is notorious for its sloppiness.) But in Ruby and Haml, this lets us solve a problem more eloquently than the two or three other possibilities that come to mind. Further, the idiom of the repeated-but-not-repeated text_field/text_area fragment

      = eval("f.text_field :#{name}") unless name == :description
      = eval("f.text_area  :#{name}") if     name == :description

should make it clear to even the most hurried reader that we're dealing with a special case. This is one of those situations where a helper method would be nuclear overkill. (If we were scattering dozens of these all-but-one-field-is-the-same forms through an app, I'd revisit that stance.)
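Incidentally, if eval still makes you itch, Ruby's send will do the same dispatch without evaluating strings; a sketch of just the loop, assuming the same form builder:

```haml
  - [:title, :description, :image_url, :price].each do |name|
    .field
      = f.label name
      %br/
      = f.send(name == :description ? :text_area : :text_field, name)
```

Whether that reads better than the explicit unless/if pair is a matter of taste; it does keep the method names as symbols the whole way.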


OK, but why blog about this in the first place? Well, the company I'm working with now is looking for a senior Ruby/Rails dev. Most of the CVs we're getting are from mid-level guys (if you're a Rails doyenne (i.e., female), please email!) who have done some Ruby and one or two other languages, generally PHP and Java. What I've learned, on both sides of the table, is that if you've got some experience in several languages that aren't all closely-related to each other, it's a lot easier for you to ramp up on any language you need. Any language will eventually go out of favour. Being able to code idiomatically in anything you need to, won't.

Tuesday, 8 November 2011

Eloquence Is "Obsolete". We're Hurting. That's Redundant.

Code is meant to be read, understood, maintained and reused by humans, and incidentally to be executed by a computer. Doing the second correctly is far, far less difficult than doing the first well. Innovation is utterly meaningless without effective communication, and that is at least as true within a team as between it and a larger organisation, or between a company and its (current and potential) customers.

The degree to which a class framework, or any other tool, helps make communication more effective with less effort and error should be the main determinant of its success. It isn't, for at least two reasons. One, of course, is marketing; we as a society have been conditioned not to contest the assertion that the better-marketed product is in fact superior. In so doing, we abdicate a large degree of our affirmative participation in the evolution of, and the control over society at the small (team/company), mid-level (industry) and wider levels. We, as individuals or organisations, devolve from customers (participants in a conversation, known as a 'market', in which we have choices) into consumers (gullets whose purpose is to gulp endless products and crap cash).

More worrying is that effective, literate communication has gone completely out of fashion. Whether or not you blame that on the systematic laying waste of the educational system over the last thirty years, it's increasingly nonsensical to argue with the effect. People are less able to build understanding and consensus because they do not have the language skills to communicate effectively, and have been conditioned not to view that as a critical problem urgently requiring remediation. Oh, you'll hear politicians bloviating about how "the workforce needs to improve" or "education must be 'reformed' for the new era", but that's too often a device used to mollify public opinion, make it appear as though the politicians are Doing Something effective, and especially to preempt any truly effective public discussion leading to consensus that might effect real socioeconomic improvement rather than the "Hope and Change"™ genuine imitation snake oil that's been peddled for far too long.

Back on the subject of developers and tools, I would thus argue that what tools you use are a secondary concern; if you don't understand code that's been written, by others or (especially) by you, then a) that code can't be trusted to do anything in particular because b) someone didn't do their job.

Your job, as a developer, is to communicate your intent and understanding of the solution to a specifically-defined problem in such a way that the solution, and the problem, can be readily understood, used, and built upon by any competent, literate individual or team following you. (Again, explicitly including yourself; how many times have you picked up code from a year or a decade before, that you have some degree of pride in, only to be horrified at how opaque, convoluted or incomprehensible it is?) Some computer languages make that effective communication easier and more reliable than others; some choose to limit their broad generality to focus on addressing a narrower range of applications more effectively and expressively.

That has, of course, now progressed to the logical extreme of domain-specific languages. General-purpose languages such as C, Ruby or Python try to be everything but the kitchen sink, and too often succeed; this makes accomplishing any specific task effectively and eloquently incrementally more difficult. DSLs are definitions of how a particular type of problem (or even an individual problem, singular) can be conceptualised and implemented; a program written in a proper DSL should concisely, eloquently and provably solve the problem for which it was written. This has sparked something of a continuing revolution in the craft and industry of software development; general-purpose languages arose, in part, because writing languages that are both precise enough to be understood by computers and expressive enough to be used by humans is hard; it still is. DSLs take advantage of the greatly-evolved capabilities of tools which exist to create other tools, compared with their predecessors of just a few years ago.
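To make "DSL" slightly more concrete, here's a minimal internal DSL in Ruby; every name in it (RouteTable, route, dispatch) is invented purely for illustration:

```ruby
# A toy internal DSL: declare a route table, then query it.
class RouteTable
  def initialize(&block)
    @routes = {}
    instance_eval(&block)   # run the block in this object's context,
  end                       # so bare `route` calls land on our method

  # The DSL's one "keyword": map a path onto a handler name.
  def route(path, opts = {})
    @routes[path] = opts[:to]
  end

  def dispatch(path)
    @routes.fetch(path, "404")
  end
end

# The "program" written in the DSL reads as a declaration, not as code:
table = RouteTable.new do
  route "/",      to: "home#index"
  route "/about", to: "pages#about"
end

puts table.dispatch("/about")   # prints pages#about
```

The whole trick is instance_eval: it lets the block's bare method calls resolve against the table object, which is what makes the declaration read so cleanly.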

But the language you use to develop a program is completely irrelevant if you can't communicate with other people about your program and the ecosystem surrounding and supporting it. If half the industry reads and writes on a fifth-grade level, then we're literally unable to improve as we should.

To paraphrase the television-show title, it doesn't matter if we're smarter than a fifth-grader if that's the level at which we communicate. Improving that requires urgent, sustained and concerted attention — not only to make us better software developers, but to make the larger world in which we live a better place. Let's start by at least being able to communicate and discuss what "better" means. That in itself would be an epochal improvement, saving entire societies from becoming obsolete.

Sunday, 30 October 2011

Once more into the breach, dear colleagues, once more…

…or How I Learned to Stop Worrying and Love Serving Aboard Kobayashi Maru; a history lesson.

Once again, I've had an interesting couple of months. Between modern Singapore's regular effect on my health, some insane work and the opportunity to get even more insane work if my two best references weren't indefinitely unavailable (but hey, Thailand is usually lovely this time of year…or anytime, actually).

Ahem.

I am rediscovering a love for developing in Ruby after losing touch with it some ten years ago. In the Ruby 1.5 days, things like class variables and the hook system were either new and shiny, or had finally been thrashed into something both usable and beautiful. If programming was what you did to earn a living, then programming in Ruby was something that you thanked $DEITY for every day because, after all, how many people in this world get to use beautiful tools that make you measurably better at your craft while blowing your mind on a regular basis, and get paid for the privilege?

The Fall

Then Ruby on Rails came along, with the clueless-business-media hype and cult of personality built around G-d Himself, or at least the three-initial version of same as anointed by said media, and Things Changed:

New acquaintance: So what do you do for a living?

Me: I write computer software, mostly using Ruby, or when I have to, Delphi or C++.

N.A.: Oh, you're a Ruby on Rails programmer.

So I left Ruby behind. Delphi lasted for a while, until its competitor's lock on the default system combined with some spectacularly bad corporate timing to relegate it to an "oh yeah, I heard of that once" niche. And then came this gawky new kid called PHP.

PHP wasn't exactly "new" when I first got into it; version 3.x had been out for some time, and the in-beta version 4 had class-based object-oriented programming. To someone coming from Delphi, it felt as primitive as an 1898 Duryea automobile would to a Porsche driver, but you could see that the basic principles were at least in sight. And so, as the dot-com bubble was starting to really inflate, I jumped into PHP with a vengeance. It wasn't a product of one of the existing tech-industry corporate titans and it wasn't tied to a single operating system or Web server (though it did seem to work best with the then-early Linux OS and Apache server).

The great thing about PHP is that just about anybody can poke at it for a while, and at the end have something that (seems to) work. The pathologically execrable thing about PHP is that the barrier to entry is microscopically low, lower even than for Visual Basic 6 back in the day. And so, the inevitable result was (and is) that you have literally hundreds of thousands, if not millions, who get jobs by saying "Yeah, I know PHP; I've done (this little site) and (that steaming mess with a lot of Flashy bling on it)." In reality, there are thousands, at most, of good PHP developers out there, with a few more thousands actively working at improving their craft.

Purgatory

So PHP 4 had a crappy object model that anybody could poke at or ignore at will. PHP 5, from mid-2004, started to get its head on straight with respect to both OOP and what it really took to do PHP well, but lots of damage had already been done. Innumerable client projects had either failed or become incredible maintenance/performance nightmares due to the shoddy code that PHP, and the community that grew up around it, encouraged mediocre/inexperienced/inattentive developers to write.

And then, in mid-2009, PHP 5.3 came into the world, and it was a glorious golden statue with legs of brown, wet mud. Many of us who wanted to see PHP evolve into a "properly" object-oriented language, along the path that it had been following, found much to rejoice in. Support for closures. Better internationalisation support. Far better garbage collection. A rearrangement and winnowing of the extension and application repository system that is the closest thing PHP has to Python's eggs or Ruby's gems.

PHP 5.3 also introduced what had to have been the most-requested new feature for years: namespaces. Namespaces are one solution to the problem of allowing the development team to organise collections and hierarchies of classes, both to make them easier to work with conceptually and to mitigate possible naming conflicts (class Foo in namespace Bar is "obviously" distinct from class Foo in namespace Barney).

However, this is also where the legs of the "golden statue" were transformed into wet mud: the way in which the PHP namespace features work is so semantically and visually jarring, with so many inconsistencies visible to both the experienced pre-5.3 PHP developer and the experienced developer of object-oriented software in other languages, that it quickly became a laughingstock and a millstone. A necessary millstone, but one of which many writers of both code and prose waxed eloquent in their righteous, intricately-justified derision.
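For the uninitiated, a sketch of the syntax in question (PHP 5.3; the bracketed form is what you need to put more than one namespace in a single file):

```php
<?php
namespace Bar {
    class Foo {}
}

namespace Barney {
    class Foo {}
}

namespace {
    // Global code. Note the separator: a backslash, which collides
    // visually with escape sequences in double-quoted strings.
    $a = new \Bar\Foo();      // fully qualified, leading backslash
    $b = new \Barney\Foo();   // same short name, different class
    echo get_class($a);       // prints Bar\Foo
}
```

Whether you need that leading backslash depends on the context in which the name is being resolved, which is a large part of what people found so jarring.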

A New Hope

And this eventually served as a wake-up call to a number of those who view PHP as just another tool in their toolkit, as opposed to a cash cow to be milked, a semi-religious icon to be polished and cared for in the precise fashion that the High Priests of Zend decree, or something they simply never learned enough about to care. Seven to ten years is a long time to spend in any one language for an experienced developer, and quite a number of highly-visible PHP community stalwarts have been publicly participating in and contributing to other communities: various JVM languages like Groovy, Scala and Clojure; Objective-C; C#; and Ruby.

A few short months ago, I had urgent need to re-immerse myself in Ruby, learn modern Rails, and make myself ready in all respects to participate in Ruby on Rails projects at a senior or leading level. This gave rise to a series of fortunate events, to paraphrase Lemony Snicket. I discovered that Ruby 1.9 is now a very advanced, mature language with solid experience-based best practices that are sensible and self-consistent. I discovered that Rails 3.1 is an incredibly productive way to get a Web site or application up and running, and that the ways in which it encourages you to code and think do not have the same propensity to inflict mortal wounds that PHP has on, say, the overly-trusting journeyman developer. Rails itself has grown into a much larger community that no longer revolves around a single individual, or even a small group of individuals, as it did seven years ago or as too many other languages do now. And, importantly, many of the things that make Rails great would not be practical, or even possible, were it built on and in any other language than Ruby.

Above all, Ruby gives hope to those few who, mindful of Knuth's saying that "programming is a literary act", understand that computer programs are written to be read and modified by humans, with computer execution almost a secondary concern over the life of a project. If you care about thinking creatively, if you enjoy having your mind regularly blown in ways that challenge you to actively and continuously improve your mastery of the craft of software development, you are going to love Ruby, and Rails.

An Evangelist Repurposed

And so, to any who are pondering a new Web project, who see how overwhelming the mindshare of PHP is and how obviously negligent (even to non-technical eyes) is far too much PHP code and, by extension, those who made such code available, I would ask you to strongly consider Ruby, and Rails, even if your team has little or no experience in them.

After all, quality and intelligence should, in any world worth living in, be major competitive advantages — or, rather, by rights, their lack must needs be mortal.