Call to the Lazyweb: Backup

I have a problem I’ve been beating my head against for a while now, and I’ve finally given up and decided to put this out there to the hive-mind of the Internet.

I have a laptop I want to keep regularly backed up. I have external hard drives that I use to do this, one that I carry with me and one that stays in my office in Portland. I use cloning software to duplicate the contents of the laptop onto them.

But I also want to do incremental backups, Dropbox-style, to a server I own.

I do have a paid Dropbox account and I do use it. (I also have a paid Microsoft OneDrive account.) But I’d really prefer to keep my files on my own server. What I want is very simple: the file and directory structure on the laptop, mirrored automatically on my server.

This should not be difficult. There is software that should be able to do this.

What I have tried:

Owncloud. They no longer support Mac OS X. Apparently they ran into problems supporting Unicode filenames and never solved them, so their solution was to drop OS X support.

BitTorrent Sync. This program is laughably bad. It works fine, if you’re only syncing a handful of files. I want to protect about 216,000 files, totaling a bit over 23 GB in size. BT Sync is strictly amateur-hour; it chokes at about 100,000 files and sits there indexing forever. I’ve looked at the BT Sync forums; they’re filled with people who have the same complaint. It’s not ready for prime time.

CrashPlan. CrashPlan encrypts all files and stores them in a proprietary format; it does not replicate the file and folder structure of the client on the server. I’m using it now but I don’t like that.

rsync. It’s slow, and it has a lot of problems with hundreds of thousands of files. The server is also on a dynamic IP address, and rsync by itself has no way to keep track of the server’s address when it changes (though, as I sketch out below, a dynamic-DNS name might paper over that part).

Time Machine Server. Like CrashPlan, it keeps data in a proprietary format; it doesn’t simply replicate the existing file/folder structure, which is all I want. Like rsync, it has no way to cope with changes to the server’s IP address.
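For what it’s worth, here’s the kind of thing I keep imagining: a dumb little wrapper that re-resolves the server’s name before each mirror pass. This is only a rough sketch, assuming a hypothetical dynamic-DNS name (backup.example.com) and SSH keys already set up; it does nothing about rsync’s sluggishness with huge file counts.

    #!/usr/bin/env python3
    # Rough sketch (not a finished tool): re-resolve a hypothetical dynamic-DNS
    # name for the backup server, then mirror the laptop onto it with rsync.
    # Assumes rsync, ssh, and key-based login are already set up.
    import socket
    import subprocess
    import sys

    HOSTNAME = "backup.example.com"   # hypothetical dynamic-DNS name for my server
    SOURCE = "/Users/me/"             # trailing slash: copy contents, not the folder
    DEST = "/backups/laptop/"

    def current_address(name):
        """Look up the server's address right now, in case it has changed."""
        return socket.gethostbyname(name)

    def mirror(addr):
        """One rsync pass that makes the server an exact mirror of SOURCE."""
        cmd = [
            "rsync", "-az", "--delete",   # archive mode, compression, true mirror
            "--exclude", ".Trash/",
            SOURCE,
            f"me@{addr}:{DEST}",
        ]
        return subprocess.call(cmd)

    if __name__ == "__main__":
        try:
            addr = current_address(HOSTNAME)
        except socket.gaierror as err:
            sys.exit(f"Could not resolve {HOSTNAME}: {err}")
        sys.exit(mirror(addr))

Cron could fire that off every hour; the point is just that the address lookup happens on every pass instead of being baked into a config file somewhere. It still wouldn’t fix rsync grinding through a quarter-million files, which is the part I’d love an answer to.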

So you tell me, O Internets. What am I missing? What exists out there that will do what I want?

Some thoughts on social issues in video games

Unless you’ve spent the last year living entirely under a rock, far from the hustle and bustle of normal life, and entirely without any sort of Internet connection, you’re probably aware to some extent of a rather lengthy fuss about the heart and soul of computer gaming. This fuss, spearheaded by a diverse group of people loosely gathered under a name whose initials are similar to GargleGoose, is concerned about the future of comic book and video game entertainment. They believe in a sinister, shadowy cabal of “social justice warriors”–folks who are on a mission to, you know, right wrongs and uplift the oppressed, kind of the way Batman or Superman do, only without the fabulous threads. This cabal, they fear, is coming for their video games. The social justice warriors, if we are to believe GameteGoose, are so obsessed with political correctness that they wish to make every game in the world a sanitized, sterile sandbox where not the slightest whisper of sex or violence may be seen.

Okay, so granted that’s not likely the characterization GrizzleGoose would put to their aims, though I think the general gist is there.

And they’re not entirely wrong, though they’re pretty far from right. There is a battle going on for the heart and soul of entertainment. For decades, comic books and video games have been made for straight white middle-class guys–the demographic that overwhelmingly bought the games and read the comics, and to whom writers, artists, and developers catered with laser focus.

But times have changed, comics and games have gone mainstream, and they’re attracting more and more people who aren’t straight white dudes any more. And as other folks have come into the scene, they have started pointing out that some of the tropes that’ve long been taken for granted in these media are, well, a little problematic.

And merely by pointing that out, the folks talking about these problematic things have provoked pushback. When you live in a world where everyone caters to your exact tastes, the idea that some people might start making some things that aren’t to your liking feels like a betrayal. And the suggestion that there might be something about your taste that isn’t quite right? Well, that can quickly turn into an existential threat.

GooeyGoose has effectively capitalized on that existential threat, rallying straight white dudes into believing they’re the Rebel Alliance under attack from the forces of social justice while adroitly handwaving away the reality that when it comes to popular taste in entertainment media, straight white middle-class dudes are and have always been the hegemonizing Empire.

But here’s the thing. You can point out that popular entertainment media is problematic without saying the people who like it are bad people.


I play Skyrim.

Skyrim is an open-world role-playing game where the player takes on the persona of a mythic hero trying to save a world plagued by dragons, a civil war, and the restless undead. It’s almost entirely unstructured, with players having the ability to choose to do just about anything. Non-player characters the player interacts with offer advice and provide quests, which the player can choose whether or not to do.

It’s a lot of fun to play. I’ve lost quite a number of hours of my life to it, fighting dragons, deciding which side of the civil war to support, participating in political intrigue, exploring creepy dungeons, and exploring a lush and richly detailed world.

It also has some problematic issues.

This is Haelga, one of the characters in the game. The player can be given a minor side quest in the game by her niece, who works for Haelga but doesn’t like her very much. Haelga’s niece, Svana Far-Shield, tells the player that Haelga is having sex with several different men, and wants the player to get proof in order to shame and humiliate Haelga.

The way the quest is written, it’s sex-negative as hell. It plays to just about every derogatory trope out there: open female sexuality is shameful, women who are perceived as sexual are “sluts,” and pouncing on a woman with evidence of her sexual attitude is a sure way to humiliate (and therefore control) her.

You might argue that Skyrim is set in a time that is not as enlightened as the modern-day West, but that ignores a very important reality: Skyrim is set in a time and place that never existed. There’s no compelling reason to write sex-negativity into the script. The game works well without it. It’s there not because the distant faux-medieval past was sex-negative, but because modern-day America is.

But that, too, misses a point, and it misses the same point the GiggleGoose folks miss:

It is possible to recognize problematic elements of a game and still enjoy the game.

I recognize that this quest in Skyrim is sex-negative, and that’s a problem. I still like the game.

The people who play these games and read these comic books are not bad people for doing so. The content of the games and comics is troubling to anyone who cares about people other than straight white middle-class men, sure, and it’s certainly reasonable to point these things out when they occur (though they happen so damn often that one could easily make a full-time career of pointing them out). That doesn’t make the people who like them Bad And Wrong simply because they enjoy them.

GiddyGoose believes that saying video games are a problem is the same thing as saying people who enjoy video games are a problem. And if you identify with comic books and video games so strongly that you cannot separate your entertainment media from your sense of self, they might be on to something.

But most folks, I think, are able to take a deep breath, step back a half pace, and recognize that the writers and developers have done some really cool, fun stuff, but they can still do better. It would not kill anyone if the quest in Skyrim were rewritten (how about having Haelga’s character played by a man? There’s a thought…), or even dropped entirely. Nobody suffers from recognizing that it’s not cool to make fun of people who aren’t like you.

Nobody’s saying that Skyrim shouldn’t exist, or that people who play it are terrible people. I would like to think, on my optimistic days, that that’s an idea anyone smart enough to work a computer can recognize.

eBook Design Illuminated

A short time ago, I was hired by Talk Science To Me to do the eBook version of Tantra Illuminated, a very lengthy academic work on the history of Tantric religious traditions in India.

The book was large and beautifully designed, with a great deal of content from original Sanskrit sources. The design used a number of different, complex elements, including copious margin notes.

I’m in the process of blogging about the complexities of eBook design with non-English alphabets and complex layouts. Part 1 of the series is up on the Talk Science To Me blog. Here’s a teaser:

The project turned out to be far more daunting than I’d imagined, even knowing from the outset that it would likely be more complex than it first appeared. I could easily write a book on the various technical, layout and rendering challenges I encountered creating this e-book (in fact, that might be a good future project!), but we’ll just look at a few of the interesting potholes we encountered on the road to creating the e-book.

A tale of two diacritics

The text in Tantra Illuminated contains significant lengths of transliterated Sanskrit. Sanskrit uses a non-Latin alphabet for which a standard transliteration system called the International Alphabet of Sanskrit Transliteration (IAST) exists. This is the system employed by the transliterations in Tantra Illuminated.
The IAST relies heavily on Latin characters with diacritic marks. Most of these marks are supported by the majority of e-book readers, so I didn’t anticipate difficulty with the transliterations.

I was wrong.
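To give a flavor of the kind of gotcha involved (this is my guess at one common culprit, not a spoiler for the rest of the series): the same IAST letter can be encoded either as a single precomposed code point or as a base letter plus a combining mark, and not every e-book renderer treats the two the same way.

    # Minimal sketch: the IAST letter ṛ (r with a dot below) in two encodings.
    # Some readers treat the precomposed and decomposed forms differently,
    # which is one plausible source of diacritic weirdness in e-books.
    import unicodedata

    precomposed = "\u1E5B"    # ṛ as a single code point
    decomposed = "r\u0323"    # r followed by COMBINING DOT BELOW

    print(precomposed == decomposed)                                # False
    print(unicodedata.normalize("NFC", decomposed) == precomposed)  # True

    # Normalizing every transliterated passage to NFC before building the
    # EPUB at least guarantees each reader sees the same byte sequence.
    sample = "prakriya\u0304"   # "prakriyā" with a combining macron at the end
    print(unicodedata.normalize("NFC", sample))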

You can comment here or over there.

Nome, Alaska: Ruins of the White Alice facility

There’s a mountain overlooking Nome. It’s called Anvil Mountain, and on that mountain is a kind of monument to the Cold War. You can see it from just about anywhere in town. These four enormous antennas squat over the landscape, a silent testament to the money and lives squandered on endless political bickering.

When I saw them, I had to check them out.

These four antennas are part of the old “White Alice” system, a communications network built to serve the Distant Early Warning radar line that stretched across Alaska, constantly searching the sky for signs of Russian bombers sneaking over the Arctic and heading across Canada toward the United States.

The system was designed in the 1950s, when fear of the Commies was really starting to gain traction. The Distant Early Warning line was a set of remote high-powered radar facilities all along Alaska, but the designers had a problem. Alaska is huge. If you count the string of islands that extends from its western edge, many of which were home to DEW radar, Alaska is about the same distance stem to stern as the distance from California to New York.

And there are no roads, no telephone lines, and no power lines. Even today, there is no way to get to Nome by road; roads linking it to the rest of Alaska simply do not exist. You get in and out by air or barge, and that’s it.

The radar stations along the DEW line needed to be able to talk to command and control centers. Normal radio wouldn’t work; Alaska is so large that the curve of the earth renders line-of-sight radio unworkable.

So the Air Force came up with an idea: tropospheric scattering. Basically, they decided to use enormous antennas pointed at the horizon to blast out an immensely powerful radio signal, strong enough that the portion scattered off the troposphere could be picked up by stations beyond the curve of the earth.

The system was code-named “White Alice” and was built at enormous cost in the 1950s and operated through the 1970s, when satellite communication made it obsolete. By the time it was decommissioned, there were 71 of these stations, including the one on Anvil Mountain.

I borrowed a 4×4 and drove up the mountain. The facility is surrounded by a chain-link fence that has long since been pulled down and yanked apart in places. An ancient, battered sign warns trespassers that it’s a restricted area; the locals seem to use it for target practice.

The White Alice installations were powered by enormous diesel generators. Each of the four antennas at a facility consumed up to 10 kW of power; the generators provided power for the transmitters, the living quarters, and the small line-of-sight microwave dishes that provided short-range communication.

Most of the White Alice facilities have been completely dismantled. Several of them are toxic waste sites, as diesel fuel and other contaminants have been dumped all over the place.

When the Anvil Mountain White Alice facility was decommissioned, the residents of Nome asked the Corps of Engineers to leave the four big antennas. Everything else is gone.

These antennas are huge–about five stories tall.

Cost overruns, under-engineered specifications, and overly optimistic maintenance projections made the White Alice project run ten times over budget. Most of the materials to build the installations–hundreds of tons of equipment for each one–were shipped to remote mountain peaks by dogsled. Airbases were constructed at many of the sites to get fuel, people, and supplies in and out. Technicians worked at these sites year round, facing minus 30 degree weather or worse during the winter.

We went up twice, once during the afternoon and once at 1:30 in the morning to watch the simultaneous sunrise and sunset. I can only imagine how miserable it must have been to work here; in the middle of one of the warmest summers on record, when Nome was facing over-70-degree weather, it was cold and windy on top of the mountain. Winter, when the sun hardly comes up, must have been brutal.

I used my smartphone to take a panorama showing the whole installation from the very peak of Anvil Mountain. Click to embiggen!

An Open Letter to Brogrammers

Computer programming is a tough job. It’s not for the faint of heart or the fair of sex. It’s grueling, high-stress work, demanding that you sit on a comfortable chair in an air-conditioned office for hours on end, typing on a keyboard while looking at a monitor. Women just aren’t rugged enough for that.

Plus, as everyone knows, women can’t code. At best, they can maybe contribute in their small way to large open-source projects, but really, they’re much better suited for accessorizing PowerPoint presentations written by real coders. Manly coders.

If this is the world you live in, bro, I’m afraid I have some really bad news for you.

I’d like to introduce you to someone. This is Augusta Ada King, Countess of Lovelace. She was a lady’s lady, an aristocrat who lived in the 1800s and who did all of the things young women of noble birth did back then–danced, wrote poetry, and penned long flowery letters to her tutor.

She also wrote the world’s first computer program in 1842, in the margins of a technical document she was translating from Italian into English.

Yes, you read that right. Ada was so fucking baller she wrote code before computers had even been invented. You think you’re hardcore because you can use agile development strategies to link a big data repository to a high-performance querying front end without SQL? Pfaff. This woman invented coding before there was anything to code on.

And then there’s this woman, who could kick your ass sideways, steal your lunch, and then fart out code better than anything you’ll ever be capable of if you live to be a thousand years old.

This is “Amazing” Grace Hopper. She took leave from Vassar to join the Navy, where she invented or helped invent the entirety of all modern computer science, including nearly every wimpy-ass tool your wimpy ass laughingly refers to as “coding.” Compared to her, you’re nothing but a little kid playing with Tinker toys. Tinker toys she invented, by the way.

Yeah, I know, I know. You think you’re all badass and shit because you can get your hands right down there and compile a custom Linux kernel with your own task scheduler that reduces overhead for context switches by 16%, and…

Ha, ha, ha, ha, you are just so cute! It’s absolutely precious how you think that’s hardcore. That kind of shit is duck soup. Seriously, no-brains-required duck fucking soup compared to what she did. That C compiler you love so much? Grace Hopper invented the whole idea of writing code in a language that isn’t machine code and then compiling it to something that is. She was the one who came up with the notion of a “compiler” (and wrote the very first one ever), pausing along the way to invent code testing and profiling.

Thanks to her, you’re living in the lap of luxury. You can write code without having to know the exact DRAM timing. You have conditional branches and loops–neither of which existed when she started programming the Harvard Mark I. (She made loops by taking long strips of paper tape and, no shit, taping their ends together to get the computer to execute the same code again.)

You want to see hardcore programming? I’ll show you hardcore programming:

This is what real hardcore coders do. No compilers, no syntax checkers, just a teletype machine and a bunch of fucking switches that change the computer’s memory and registers directly.

And you know what? For her, that was luxury. She and all the other early computer programmers–almost all of whom were women, by the way–started out programming by plugging patch cords into plugboards, because that’s how they rolled, motherfucker. Fuck keyboards, fuck front-panel switches…those things were soft. If you wanted to code back then, you needed a postgraduate degree in mathematics, an intimate understanding of every single component inside the computer, and the ability to route data with your bare fucking hands.

Grace Hopper was so badass that when she retired from the military, Congress passed a special act to bring her back. Twice. And then when she retired for real (for the third time), the Navy named a guided missile destroyer after her.

Trust me when I say you will never be this badass, bro.

So the next time you see something like this:

and you think that girls can’t code, just remember girls invented coding. And invented the tools that finally let softies like you play at being programmers. They did the heavy lifting so programming could be easy enough for noobs like you.

Of Android, iOS, and the Rule of Two Thousand, Part II

In Part 1 of this article, I blogged about leaving iOS when I traded my iPhone for an Android-powered HTC Sensation 4G, and about how, despite Android’s theoretical superiority to iOS, I came to detest it and eventually came back to the iPhone.

Part 1 talked about the particular handset I had, the T-Mobile version of the Sensation, a phone with such ill-conceived design, astronomically bad build quality, and poor reliability that at the end of the year I was on my third handset under warranty exchange–every one of which failed in exactly the same way.

Today, in Part 2, I’d like to talk about Android itself.


When I first got my Sensation, it was running Android 2.3, code-named “Gingerbread.” Android 3 “Honeycomb” had been out for quite some time, but it was a build aimed primarily at tablets, not phones. When I got my phone, Android 4 “Ice Cream Sandwich” was in the works, ready to be released shortly.

That led to one of my first frustrations with the Android ecosystem–the shoddy, patchwork way that operating system updates are released.

My phone was promised an update in the second half of 2011. This gradually changed to Q4 2011, then to December 2011, then to January 2012, then to Q1 2012. It was finally released on May 16 of 2012, nearly six months after it had been promised.

And I got off lucky. Many Motorola users bought smart phones just before the arrival of Android 4; their phones came with a written guarantee that an update to Android 4 would be published for their phones. It never happened. To add insult to injury, Motorola released a patch for these phones that locked the bootloader, rendering the phone difficult or impossible to upgrade manually with custom ROMs–so even Android enthusiasts couldn’t upgrade the phones.

Now, this is not necessarily Google’s fault. Google makes the base operating system; it is the responsibility of the individual handset manufacturers to customize it for their phones (which often involves shoveling a lot of crapware and garbage programs onto the phone) and then release it for their hardware. Google has done little to encourage manufacturers to backport Android, or to push them toward a consistent user experience with software updates, instead leaving device manufacturers free to do pretty much as they choose, short of actually forking Android themselves…which has led to what developers call “platform fragmentation” and to what Motorola Electrify, Photon and Atrix users call things I shan’t repeat in a blog as family-friendly as this one.

But what of the operating system itself?

Well, it’s a mixed bag of mess.


When I first got my Android phone, I noted how the user interface seemed to have been designed by throwing a box of buttons and dialogs and menus over one’s shoulder and then wiring them up wherever they landed. System settings were scattered in three different places, without it necessarily being obvious where you might find any particular setting. Functionality was duplicated in different places. The Menu button is a mess; it’s filled with whatever the programmer couldn’t find a better place for, with little thought to good UI design.

Android is built on Linux, an operating system that has a great future on the desktop ahead of it, and always will. The Year of Linux on the Desktop was 2000 was 2002 was 2005 was 2008 was 2009 was 2012 will be 2013. Desktop aside, Linux has been a popular server choice for a very long time, because one thing Linux genuinely has going for it is rock-solid reliability. When I was working in Atlanta, I had a Gentoo Linux server that had accumulated well over two years’ continuous uptime and was shut down only because it needed to be moved.

So it is somewhat consternating that Linux on cell phones seems rather fragile.

So fragile, in fact, that my HTC Sensation would pop up a “New T-Mobile Service Notice” alert every week, reminding me to restart the phone. Even the network operators, it would seem, have little confidence in Android’s stability.

It’s a bit disappointing that the one thing I most like about Linux seems absent from Android. Again, though, this might not be Google’s fault directly; the handset makers and network operators do this to themselves, by taking Android and packaging it up with a bunch of craplets of spotty reliability.

One of the things that it is really, really important to be aware of in the Android ecosystem is the way the money flows. You, as a cell phone owner, are not Google’s customer. Google’s customer is the handset manufacturer. You, as a cell phone owner, are not the handset manufacturer’s customer. The handset manufacturer’s customer is the network operator. You are the network operator’s customer–but you are not the network operator’s only customer.

Because of this, the handset maker and the network operator will seek additional revenue streams whenever they can. If someone offers HTC money to bundle some crap app on their phones, HTC will do it. If T-Mobile decides it can get more revenue by bundling its own or someone else’s crap app on your phone, it will.

Not only are you not the customer, at some points along the chain–for the purposes of Google ad revenue, say–you are the product being sold. Whenever you hear people talking about “freedom” or “openness” in the Android ecosystem, never forget that.

I sometimes travel outside the US, mainly to Canada these days. When I do that, my phone really, really, really wants me to turn on data roaming.

There are reasons for that. When you roam, especially internationally, the telcos charge rates for data that would make a Mafia loan shark blush. So Android agreeably nudges you to turn on data roaming, and here’s kind of a sticking point…

Even if you’re connected to the Internet via wifi.

It pops up an alert constantly, and by “constantly” I really do mean constantly. Even when you have wifi access, it pops up every time you switch applications, every time you unlock the phone, and about every twenty minutes when you aren’t using the phone.

Just think of it as Google’s way to help the telcos tap your ass–err, that revenue stream.

This multiple-revenue-streams-from-multiple-customers model has implications, not only for the economics of the ecosystem, but for the reliability of your phone as well. And even for the battery life of your phone.

Take HTC phones on T-Mobile (please!). They come shoveled–err, “bundled”–with an astonishing array of crap. HTC’s mediocre Facebook app. HTC Peep, HTC’s much-worse-than-mediocre Twitter client. Slacker Radio, a client for a B-list Internet radio station.

The presence of all the various crapware that comes preloaded on most Android phones, plus the fact that Android apps don’t quit when they lose focus, generally means that a task manager app is a necessary addition to any Android system…which is fine for the computer literate, but less optimal for folks who aren’t so computer savvy.

And it doesn’t always help.

For example, Slacker Radio on my Sensation insists on launching itself at startup and running all the time, whether I want it to or not:

Killing it with the task manager never works. Within ten minutes of being killed, it somehow respawns, like a zombie in a George Romero movie, shambling after you no matter how many times you shoot it:

The App Manager in the Android control panel has a function to disable an app entirely, even if it’s set to launch at startup. For reasons I was never able to understand, this did not work with Slacker. It was always there. Always. There. It. Never. Goes. Away. You. Can’t. Hide. From. It.

Speaking of that “disable app” functionality…

Oh, goddamnit, no, I don’t want to turn on data roaming. Speaking of that “disable app” functionality, use it with care! I soon learned that disabling some bundled apps can have…unfortunate consequences.

Like HTC Peep, for instance. It’s the only Twitter client for smartphones I have yet found that is even worse than the official Twitter client for smartphones. It loads a system service at startup (absent from the Task Killer screenshots above because I have the task killer set not to display system services). If you let it, it will download your Twitter feed.

And download your Twitter feed.

And download your Twitter feed. It does not cache any of the Twitter messages you read; every time you start its user interface, it re-downloads the whole thing again. The result, as you might imagine, is eyewatering amounts of data usage. If you aren’t one of the lucky few who still has a truly unmetered data plan, think twice about letting Peep have your Twitter account information!

Oh, and don’t try to disable it in the application control panel. If you do, the phone’s unlock screen doesn’t work any more, as I discovered to my chagrin. Seriously.

The official Twitter app isn’t much better…

…but at least it isn’t necessary to unlock the damn phone.

All this crapware does more than eat memory, devour bandwidth, and slow the phone down. It guzzles battery power, too. One of the default Google apps, Google Maps, also starts a service each time the phone boots up, and man, does it hog the battery juice…even if you don’t use Maps at all. (This screen shot, for instance, was taken at a point in time when I hadn’t touched the Maps app in days.)

You will note the battery is nearly exhausted after only four hours and change. I eventually took to killing the Maps service whenever I restarted the phone, which seems to have improved the HTC’s mediocre battery life without actually affecting Maps when I went to use it.

Another place where Android’s lack of a clear and consistent user interface–

AAAAARGH! NO! NO, YOU PATHETIC FUCKING EXCUSE OF A THING, I DO NOT WANT TO TURN ON DATA ROAMING! THAT’S WHY I SAID ‘NO’ THE LAST 167 TIMES YOU ASKED! SO HELP ME, YOU ASK ME ONE MORE TIME AND I WILL TIP YOU STRAIGHT INTO THE NEAREST EMERGENCY INTELLIGENCE INCINERATOR! @$#%%#@!

Sorry, where was I?

Oh, yes. Another place where Android’s lack of a clear and consistent user interface shows is in its contact management, which is surely one of the more straightforward bits of functionality any smart phone should have.

Android gives you, or perhaps “makes you take responsibility for,” a level of granularity of the inner workings of its contact database that really seems inappropriate.

It makes distinctions between contacts which are stored on your SIM card, contacts which are stored in the Google contact manager (and synced to the Google cloud), and contacts which are stored in other ways. There are, all in all, about half a dozen ways to store contacts–SIM card, Google cloud, T-Mobile cloud, phone memory card. They all look pretty much the same when you’re browsing your contacts, but the different ways to store them have different limitations on the type of data that can be stored.

Furthermore, it’s not immediately obvious how and where any particular contact is stored. Things you might think are being synced by Google might not actually be.

And worse, you can’t, as near as I was ever able to tell, export all your contacts at once. Oh, you can export them, all right; Android lets you save them in a .vcf file which you can then bring to another phone or sync with your computer. But you can’t export ALL of them. You have to choose which SET you export: export all the contacts on your SIM card? Export all your Google contacts? Export all your locally-saved-on-the-phone-memory-card contacts?

When I was in getting my second warranty replacement phone, I asked the technician if there was an easy way to take every contact on the phone and save all of them in one export. He said, no, there really isn’t; what he recommended I do was export each group to a different file, then import all those files to my Google contact list, and then finally delete all the duplicates from all the other contact lists.
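Scripted out, that workaround amounts to something like the sketch below. It assumes each contact set has already been exported to its own plain-text .vcf file (the sim.vcf and google.vcf names are just placeholders) and treats two cards as duplicates if the name and phone numbers match, which is cruder than what a real contact importer does.

    # Rough sketch: merge several .vcf exports (SIM, Google, phone memory)
    # into one, dropping cards that duplicate an earlier one. Assumes plain
    # vCard text files; real exports vary, so treat this as an illustration.
    import re
    import sys

    def cards(text):
        """Return each BEGIN:VCARD ... END:VCARD block in a file."""
        return re.findall(r"BEGIN:VCARD.*?END:VCARD", text, flags=re.DOTALL)

    def fingerprint(card):
        """Crude duplicate check: the display name plus the phone numbers."""
        name, numbers = "", set()
        for line in card.splitlines():
            if line.startswith("FN:"):
                name = line[3:].strip().lower()
            elif line.startswith("TEL"):
                numbers.add(re.sub(r"\D", "", line.split(":", 1)[-1]))
        return (name, frozenset(numbers))

    def merge(paths):
        seen, merged = set(), []
        for path in paths:
            with open(path, encoding="utf-8") as f:
                for card in cards(f.read()):
                    key = fingerprint(card)
                    if key not in seen:
                        seen.add(key)
                        merged.append(card)
        return merged

    if __name__ == "__main__":
        # e.g.: python merge_vcf.py sim.vcf google.vcf phone.vcf > merged.vcf
        print("\n".join(merge(sys.argv[1:])))

That’s maybe thirty lines to do something the phone’s own contact manager should offer as a single button.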

It worked, but seriously? This is stupid user interface design. It’s a user interface misfeature you might not ever encounter if you always (through luck or choice) save your contacts to the same set, but if for whatever reason you haven’t, God help you.

Yes, I can see why you might want to have separate contact lists, stored and backed up separately. No, that does not excuse the lack of any reasonable way to identify, sort, and merge those contact lists. C’mon, Google engineers, you aren’t even trying.

And speaking of brain-dead user interface design, how about this alert?

What the fuck, Google?

Okay, I get it, I get it. WiFi sharing uses a lot of battery power. The flash uses battery power. Android is just looking out for my best interests, trying to save my battery…

…but don’t all the Fandroids carry on about how much better Android is because it doesn’t force you to do what it thinks is best for you, it lets you decide for yourself? Again I say, what the fuck, Google?


So far, I have complained mostly about the visible bits of Android, the user interface failings and design decisions that demonstrate a lack of any sort of rigorous, cohesive approach to UI design.

Unfortunately, the same problems apply to the internals of Android, too.

One early design decision Google made in the first days of Android concerns the way it handles screen redraws. Google intended for Android to be portable to a wide range of phones, from low-end phones to full-featured smartphones, and so Android does not make use of the same level of GPU acceleration that iOS does. Instead, it uses the CPU to perform many drawing tasks.

This has implications for both performance and usability.

User interface drawing occurs in an application’s main execution thread and is handled primarily by the CPU. (Technically speaking, each element on the screen–buttons, widgets, and so on–is rendered by the CPU, then the GPU handles the compositing.) That means that applications will often block while screen redraws are happening. On HTC Sense, for instance, if you put a clock on the home screen and then you start switching between screens, the clock will freeze for as long as your finger is on the screen.

It also means that something like populating a scrolling list is far slower on Android than it is on iOS, even if the Android device has theoretically better specs. Lists are populated by the CPU, and when you scroll through a list, the entire list is redrawn for each pixel it moves. On iOS, the list is treated as a 2D OpenGL surface; as you scroll through it, the GPU is responsible for updating it. Even on smartphones with fast processors, this sometimes causes noticeable UI sluggishness on Android. Worse, if the CPU is interrupted by something else, like updating a background task or doing a memory garbage collection, the UI freezes for an instant.
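None of this requires Android source to see. Here’s a deliberately silly toy, not Android code, just a single-threaded event loop in which one slow redraw delays every touch event queued behind it; that delay is the freeze users feel.

    # Toy illustration, not Android code: a single-threaded event loop in
    # which one slow "redraw" delays every touch event queued behind it.
    import time

    def redraw_list(rows):
        time.sleep(0.005 * rows)          # pretend each row costs 5 ms to draw
        return f"drew {rows} rows"

    def handle_touch(name):
        return f"handled touch {name!r}"

    events = [("touch", "scroll-start"), ("redraw", 200), ("touch", "scroll-end")]

    start = time.monotonic()
    for kind, payload in events:
        if kind == "redraw":
            print(redraw_list(payload))
        else:
            # On a single UI thread this cannot run until the redraw ahead of
            # it finishes; that one-second stall is the freeze users feel.
            lag = (time.monotonic() - start) * 1000
            print(f"{handle_touch(payload)} after {lag:.0f} ms")

That, in miniature, is what it costs to do drawing and input handling on the same thread.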

Each successive version of Android has accelerated more graphics functions. Android 4 is significantly better than Android 2.3 in this regard. User input can still be blocked during CPU activity, and background tasks still don’t update UI elements while a foreground thread is doing so (I was disappointed to note that in Android 4, the clock still freezes when you swap pages in HTC Sense), but Android 4’s graphics performance is way, way, waaaaaaay better than it was in 2.3.

There are still some limitations, though. Because UI updates occur in the main execution thread, even in Android 4, background tasks can still end up being blocked while UI updates are in effect. This actually means there are some screen captures I wanted to show you, but can’t.


One place where Android falls down compared to iOS is in its built-in touch keyboard. Yes, hardcore geeks prefer physical keyboards, and Android was developed by hardcore geeks, which might be part of the reason Android’s touch keyboard is so lackluster.

One problem I had in Android 2.3 that I really, really hoped Android 4 would fix, and was sad to note that it didn’t, is that occasionally the touch keyboard just simply does not work.

Intermittently, usually once or twice a day, I would bring up an app–the SMS messenger, say, or a notepad, or the IMO IM messenger–and I’d start typing. The phone would buzz on each keypress, the key would flash like it does…but nothing would happen. No text would be entered.

And I’d quit the app, and relaunch it, and everything would be fine. Or it wouldn’t, and I’d quit and relaunch the app again, and if it still wasn’t fine, I’d reboot the phone, and force quit Google Maps in the task manager, and everything would be fine.

I tried very hard to get a screen capture of this, but it turns out the screen capture functionality doesn’t work when your finger is on the touch keyboard. As long as your finger is on the keyboard, the main execution thread is busy drawing, and background functions like screen grabs are blocked.

Speaking of the touch keyboard, there’s one place iOS really shines over Android, and that’s figuring out where on the screen your finger actually is.

That’s harder than it sounds. For one, the part of your finger that first makes contact with the screen might not be where you think it is; it’s not always right in the middle of your finger. For another, when your finger touches the screen, it’s not just a single x,y point that’s being activated. Your finger is big–when you have a high-resolution screen, it’s bigger than you think. A whole lot of area on the touch screen is being activated.

So a lot more deep programming voodoo goes on behind the scenes to figure out where you intended to touch than you might think.
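As a toy illustration of the problem (my own gross simplification, emphatically not Apple’s or Google’s actual algorithm): treat the contact patch as a cloud of weighted samples, guess the intended point from their centroid, then snap to the nearest key.

    # Toy illustration, not a real touch pipeline: a touch is a cloud of
    # (x, y, pressure) samples; guess the intended point from the weighted
    # centroid, nudge it upward a little (fingers tend to land below where
    # people think they're aiming), then snap to the nearest key center.
    def intended_point(samples, vertical_bias=4.0):
        total = sum(p for _, _, p in samples)
        x = sum(x * p for x, _, p in samples) / total
        y = sum(y * p for _, y, p in samples) / total
        return x, y - vertical_bias

    def nearest_key(point, key_centers):
        px, py = point
        return min(key_centers, key=lambda k: (key_centers[k][0] - px) ** 2 +
                                              (key_centers[k][1] - py) ** 2)

    keys = {"g": (250.0, 400.0), "h": (300.0, 400.0), "b": (270.0, 450.0)}
    patch = [(265.0, 430.0, 0.4), (272.0, 441.0, 0.9), (280.0, 452.0, 0.6)]
    print(nearest_key(intended_point(patch), keys))   # "b" wins for this patch

The real systems reportedly also weigh in which letters are likely to come next given what you’ve already typed, per-user touch history, and who knows what else; whatever Apple’s version of that voodoo is, it guesses my intentions better.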

The keys on an iPhone touch keyboard are physically smaller on the screen than they are on an Android screen, and Android screens are often bigger than iOS screens, too. You’d think that would mean it’s easier to type on an Android phone than an iPhone.

And you’d be wrong. I have found, consistently and repeatably, that my typing accuracy is much better on an iPhone than an Android phone, even when the Android phone has a bigger screen and a bigger keyboard. (One of my friends complains that I have fewer hilarious typos and bizarre autocorrects in my text messages now, since I switched back to the iPhone.)

The deep voodoo in iOS appears to be better than the deep voodoo in Android, and yes, I calibrated my touch screen in Android.

Now, you can get third-party keyboards on Android that are much better. The Swiftkey keyboard for Android is awesome, and I love it. It’s a lot more sophisticated than any other keyboard I’ve tried, no question.

But goddamnit, here’s the thing…if you pay hundreds of dollars for a smart phone with a built-in touch keyboard, you shouldn’t HAVE to buy a third-party keyboard to get good results. Yes, they exist, but that does not excuse the pathetic performance of the stock Android keyboard! It’s like saying “Well, this new operating system isn’t very good at loading files, but that’s not a problem because you can buy a third-party file loader.” The user Should. Not. Have. To. Do. This.

And even if you do buy it, you’re still not paying for the amount of R&D that went into it. It’s a losing proposition for the developer AND for the users.


My new iPhone included iOS 6, which feels much more refined than Android on almost every level.

I would be remiss, however, if I didn’t mention what a lot of folks see as the Achilles’ heel of iOS: its Maps app.

Early iPhones used Google Maps, a solid piece of work that lacked some basic functionality, such as turn-by-turn directions. When I moved to Android, I wrote about how the Maps app in Android was head, shoulders, torso, and kneecaps above the Maps app in iOS, and it was one of the best things about Android.

And then Android 4 came along.

I don’t know what happened to Maps in Android 4. Maybe it’s just a problem on the Sensation. Maybe it’s an issue where the power manager is changing the processor clock speed and Maps doesn’t notice. I don’t know.

But in Android 4, the cheery synthesized female voice that the turn-by-turn directions used got a little…weird.

I mean, it always was weird; you should hear how it pronounces “Caesar E. Chavez Blvd” (something Maps in iOS 6 pronounces just fine, actually). But it got weirder, in that it would alternate between dragging like a record player (does anyone remember those?) with a bad motor and then suddenly speeding up until it sounded like it was snorting a mixture of helium and crystal meth.

It was a bit disconcerting: “In two hundred feet, turn llllllllllleeeeeeeeeeffffffffftttttttt oooooooooonnnnnnnnn twwwwwwwwwwwwweeeeeeeeeeennnnnnnnttttyyyyyyyy–SECONDAVENUEANDTHENTURNRIGHT!” There was never a rhyme or reason to it; it never happened consistently on certain words or in certain places.

Now, Maps on iOS has been slammed all over Hell and back by the Internetverse. Any mapping program is going to have glitches (Google places a street that a friend of mine lives on about two and a half miles from where it actually is, in the middle of an empty field), but the iOS app apparently has a whole lot of very silly errors.

I say “apparently” because I haven’t personally encountered any yet, knock on data.

It was perhaps inevitable that Apple would eventually roll their own app (if by “roll their own” you mean “buy map data from TomTom”), because Google refused to license turn-by-turn navigation to Apple, precisely to create a product differentiation point that would make bloggers like me say things like “Wow, Google’s Android Maps app sure is better than the one on iOS!” That was a strategy that couldn’t last forever, and Google should have known it, but… *shrug* Whatever. When Google lost the contract to supply the Maps app to Apple, they reportedly took a hit larger than their total Android revenue; if they wanted to piss that away rather than let Apple have turn-by-turn directions, they really couldn’t have expected anything else.

In part 3 of this thing, I’ll talk about T-Mobile, and how they’re so hopelessly dysfunctional as a telecommunication provider they make the North Korean government look like a model of efficiency.

The Birth of a Meme, or, Why I love the Internet

As the American electorate went through the motions of choosing a candidate of someone else’s choosing this week, the Internetverse was alive with political commentary, flames, racial epithets, and all the other things that normally accompany an American campaign season.

At the height of the election, Twitter was receiving 15,107 tweets per second…an eyewatering amount of data to handle, especially if you’re a company with little viable revenue stream other than “get venture capital, spend it, get more venture capital.”

Some of those tweets were tagged with the #romneydeathrally hashtag, and for a few days, how the Internet did shine.

If you do a search on Twitter for #romneydeathrally, you’ll find some of the finest group fiction ever written. The Tweets tell a strange, disjointed account of a political rally straight out of Lovecraft, with bizarre rites taking place on stage and eldritch horrors being summoned to feed on the crowd.

The hash tag went on for days, the Internet hive-mind creating an elaborate communal vision of a dark supernatural rally filled with horrors.

I even got in on the action myself:

Eventually, it caught the attention of the media. The Australian Herald Sun ran an article about the hash tag that painted an interesting narrative of the meme:

In further evidence that Democrats are winning the social media war, hundreds of people have taken to Twitter to “report” on a fictional event where Republican Presidential hopeful Mitt Romney has called upon satanic powers in a last ditch effort to swing the election in his favour.

DigitalSpy has their own take on the meme, also saying Twitter users are talking about Mitt Romney calling upon Satanic powers.

When H. P. Lovecraft references get labeled as “Satanic powers,” I weep for the lost literacy of a generation…but I digress.

By far the most bizarre response to the meme was posted by Twitter user @nessdoctor over on Hashtags.org with the title “Twitter Users Threaten Mitt #RomneyDeathRally”. According to Ms. Doctor,

The hasthag #RomneyDeathRally trended after tweets spread placing Presidential candidate Mitt Romney (@MittRomney) of the Republican party under the light of resorting dark satanic techniques to win the upcoming US national elections on November 6, 2012.

This is, of course, a nasty hashtag and while its purveyors insist it’s for humor (and sometimes it is), it is done in bad taste. […]

There were also posts that threatened to kill Romney, with some even threatening to join domestic terrorism and attack the White House and the people in it if Romney sits as president.

The article has been rewritten a number of times; at first, it stated that the hashtag was all about threats to kill Romney and his family, then it made the strange claim that the hash tag came about after rumors had spread that the Romney campaign was trying to use Satanism to win the election. For a while, the article had screen captures of threats against Romney with a caption claiming the threats were part of the #romneydeathrally hash tag; that claim has since been dropped. I have no idea what the article will say if you, Gentle Readers, should visit it.

But where did it come from? (I’ll give you a hint: it didn’t start because of rumors of Satanism.)

Like most Internet memes, the #romneydeathrally hashtag craze started small. On November 4, Mitt Romney held a campaign rally in Pennsylvania. For whatever reason, the rally was late getting started, it was cold, and some people who were there complained on Twitter that Romney campaign staffers were refusing to permit them to leave the rally, citing unspecified “security” concerns.

Some of these tweets were picked up by reporters covering the event.

It didn’t take long to turn into a public relations disaster. Some folks started talking about the “death rally” that you could never leave on Twitter, and the #romneydeathrally hashtag was born.

Naturally, the Internet being what it is, it really didn’t take long for some folks to decide they’d ride that train to the last station:

And, inevitably, Lovecraft got involved. Because if there’s one thing you can count on about the Internet, it’s por–okay, if there are two things you can count on about the Internet, one of them is that the Internet will always insert references to Lovecraft and Cthulhu wherever it possibly can.

And thus the meme was born.

It had nothing to do with threats on Romney, nor with rumors that the Romney campaign was dabbling in Satanism. Instead, it was the Internet doing what the Internet does: seizing on something that happened and taking it to an absurd conclusion.

The Romney Death Rally was a PR own-goal for the Romney campaign, sparked by staffers doing something really stupid at a rally.

There are two lessons here. The first is that if you’re a prominent politician and you’re hosting a rally, it’s probably a bad idea to refuse to allow people to leave. People have cell phones, and Twitter, and some of them will complain, and their complaints might be heard.

The second, though, is less about politics than it is about news reporting. For the love of God, if you have a journalism degree, you should be able to recognize a reference to the Cthulhu mythology when you see it.

Of Android, iOS, and the Rule of Two Thousand, Part I

A year and change ago, I traded in my iPhone 3G for an Android phone.

I blogged about my initial experience and first impressions of Android here. The phone I got was a then-top-of-the-line HTC Sensation 4G, which at the time was T-Mobile’s flagship Android phone. And for a short while, I quite liked it.

A lot can change in a year. When the new iPhone comes out in a couple of weeks, I plan to jump back to iOS and never look back.

Before I go any further, I should take a moment to step back and talk about how I feel about computing devices. I’ve been using computers since the days of the TRS-80; I got my first computer in 1977. And computer Holy Wars have been around for just as long. Back then, it was the TRS-80 vs. the Apple II vs. the Commodore 64; today, it’s Windows vs. Mac vs. Linux. Same song, different dance. What’s amazing to me is that even the arguments haven’t changed very much.

A lot of it, I reckon, comes from good old-fashioned need for validation. When you get a computer or a smartphone, you’re actually buying into an entire ecosystem, one that has a relatively high cost of entry (it takes time–quite a lot of it–to learn an operating system, and if you buy any software, you’re locked in at least to some extent to your choice. Sure, you can do what I do and run Mac OS, Windows, and Linux side by side in virtualization, but doing that has a significant barrier to entry of its own; it’s not what typical home computer users do.)

It’s hard to admit that when you’ve just spent a lot of dosh on a new box and crawled up that painful learning curve to teach yourself how to use it, you might have made a mistake. So people validate their choices, largely by convincing themselves of how awful the alternative is.

I’ve been using (and programming) Microsoft-run boxes since the days of MS-DOS 2.11 and Macs since System 1.1. In that time, I’ve developed a principle I call the Rule of 2,000, which, put simply, says that anyone with less than 2,000 hours’ worth of actual, real-world, hands-on experience with a platform or operating system–roughly a year of full-time use–is completely unqualified to hold an opinion about it, and anything they say about it can be safely disregarded.

So now I have a year’s worth of Android experience under my belt. What have I learned from it? Well, I’m glad you asked.


PART I: THE HARDWARE

Let’s start with the phone itself. My HTC Sensation, on paper, looks a lot better than an iPhone. It has a larger screen, a significantly better camera than what was available from Apple at the time, a removable microSD card that makes upgrading storage quick and easy, and a 4G data connection. By the specs, it is significantly superior to the iPhone of the same era.

One of the things that computer–and, lately, cell phone–Holy Warriors have never quite grasped, though, is that technical specs don’t tell the whole story. In fact, tech specs by themselves don’t make for a compelling product at all, except perhaps to a handful of rabid geeks. Steve Jobs grokked this. Geeks don’t.

The HTC Sensation suffers from a number of design flaws, probably the result of engineering choices designed to keep costs down.

When you hold a Sensation and an iPhone, the Sensation feels cheap. It has a removable cover, which allows easy replacement of the battery…but the cover isn’t especially tight and doesn’t fit as well as it could, making the phone feel a bit creaky. It’s plastic rather than metal and industrial glass. Geeks will claim that the packaging doesn’t matter, but they’re wrong; even the most hardcore geek would be unlikely to buy a computer housed in a plain cardboard box.

More importantly, though, I am currently on my third HTC Sensation, in a bit over one year.

When I got the Sensation, zaiah urged me to pay for the unlimited replacement warranty, and I’m glad I did. The phone has failed twice on me, both times in exactly the same way. First, the GPS starts acting flaky, taking longer and longer to acquire a signal. Then the phone starts getting really hot whenever the GPS radio is on. Finally, the GPS radio fails completely, and any attempt to run a program that uses the GPS causes the phone either to freeze so hard I have to take the battery out to reset it, or to crash and reboot.

I quickly got accustomed to seeing these screens in this order:

Those of you who have met me in person know that I have the navigational sense of a drunken baboon on acid; when I don’t have GPS, it is a Very Big Deal. The second phone’s GPS finally failed completely while I was on my way to a distant city a couple hours’ drive from home to meet with a new sweetie, and probably cost me at least an hour and a half spent with her…but I digress.

You will note that the signal bars in these screenshots are all over the map. This has been an unending part of my experience with Android, though I think it’s more down to T-Mobile than to Android itself. T-Mobile advertises full 4G coverage in Portland, and that’s technically true, though there are more holes in that coverage than there are in Ayn Rand’s understanding of American history. I can be traveling down Stark Street right outside my house and go from awesome signal to no signal and back again in the span of six blocks. At one friend’s house, I have zero coverage, but at the corner shop down the street, I have four bars. WTF, T-Mobile?

Now, it’s possible I’m a statistical fluke and there’s nothing intrinsically wrong with the GPS radio in the Sensation. However, when I took the second failed phone into the T-Mobile store to request a replacement, the bearded hipster behind the counter told me his Sensation had the same fault as well, so I doubt it.


WAIT FOR IT…WAIT FOR IT…

An issue this phone has had since day one is a perceived sluggishness and a general lack of responsiveness.

I’m not 100% sure if this is a hardware or software issue. Certainly, the processor and RAM in this phone were both much better than in my iPhone 3G, so it should have plenty of grunt for a fluid UI. Yet using this phone often feels like trying to wade through frozen molasses in zero G. I saw, and still see, these messages frequently:

I did rather a lot of faffing about to make the phone more responsive (using a task killer to kill unnecessary processes and services, that sort of thing), and never got it to be good. The update from Android 2 to Android 4 was supposed to take care of a lot of this, but it would seem that “taking care of the issue” really meant “putting a prettier wait icon on the dialog.” (That’s Android 4 in the middle, up there.)

This is, I think, down to both hardware and software; a lot of the UI in iOS is hardware accelerated, because Apple makes the hardware and therefore can be sure that it will have the GPU to support hardware acceleration.

One interesting thing about Sense, HTC’s user interface: When you touch the screen, background processes and background updates to the UI are totally suspended. This means that, for example, when you start to slide from one panel to the next, the clock freezes. It also means you can’t do screen captures when you have your finger on the screen–something that’s actually significant, and that I’ll get to in part 2 of this piece, where I talk about the software.


OH, WHO’S A DIRTY PHONE? YOU ARE! YOU DIRTY, DIRTY PHONE!

Most of the time, I keep my phone in my pocket.

As it turns out, with the Sensation, that’s not a very wise thing to do.

The Sensation, like nearly every other smartphone I’ve used, has a little wake/sleep button on the top. You press it to wake the phone up. With the Sensation, the button’s mechanism is part of the back case, which wraps around the top; the button is just a little bit of plastic that presses down on the actual switch, mounted to the phone’s circuit board.

The plastic bit isn’t well sealed against dust and debris. When I say “isn’t well sealed,” what I mean by that is “isn’t sealed at all.”

Now, maybe the engineers who designed it have Class 5 cleanrooms in their pants. I don’t know. I do know that my pants are a considerably less clean environment.

In practice, what that means is that little bits of dust and grit get into that button, gradually rendering it inoperable. There’s a ritual I have to go through every couple of months: take the back off, blow all of the crap out of that little button, put the back on again. This is not something I experienced with my iPhone, despite years of carrying it in some astonishingly grungy pockets.

Even if you do have a Class 5 cleanroom in your pants, you’re still not well-advised to carry your Sensation there, because of an odd quirk the phone has which I’ve never been able to figure out.

Well, perhaps it’s less a quirk than a habit. Every so often, usually a few times a week, the phone will suddenly start heating up, until it becomes uncomfortably warm. All three of my Sensations have done this.

I’ve never found a pattern to it. It can happen when the cell signal is weak or strong. It can happen when the phone is on 4G or WiFi. It happens with no discernible background activity going on. There seems to be no rhyme or reason to it. I’ll just be riding in the car or sitting in front of the computer watching Netflix or hanging out with a bunch of friends, and wham! My pants are scorching hot. Rebooting the phone usually, but not always, solves the problem.


Technical specs do not, of and by themselves, make for desirable hardware. I really, really wish more people understood this.

Most of my complaints about the hardware of the Sensation come down to the same thing: attention to detail. Whether it’s attention to detail in the switch or attention to detail in the user interface, detail matters.

Geeks love hardware specs. Geeks drool over the newest processor with twenty-four overclocked turboencabulators per on-die core and hardware twiddlybits with accelerated inverse momentum. And I think that’s a problem, because they don’t get that hardware specs by themselves aren’t enough.

Attention to detail is harder. It’s not enough to have the fastest possible processor in your phone if the user interface is sluggish. It doesn’t matter if the phone has a shiny OLED screen if dirt and grit keep jamming the wake button because nobody paid close attention to that little piece of plastic on top.

Android is in a lot of ways the triumph of the geek over the designer. True Believers like to brag that Android outsells iOS phones because the geek cred of Android is so much better; personally, I suspect that it might have something to do with the fact that you can buy an Android phone for about $75 without a contract, and get one for free with a contract, from a large number of different places.

But that’s not really the issue. The issue, as I see it, is that my Sensation is clearly a superior phone on paper to my old iPhone, but the experience of owning it has left a very bad taste in my mouth.

Detail matters. Little things matter. The Android contingent of the Holy Warriors had an opportunity to make me a convert, and they failed.

In the next part, I’ll talk about the software, and how even after several major revisions, Android still has some things it can learn from iOS.

Some Thoughts on Design and Humane Computing

A couple of weeks ago, someone on a programming mailing list that I read asked for advice on porting a Windows program he’d written over to the Mac. Most of the folks on the list, which is dedicated to Windows, Linux, and Mac software development, advised him that simple ports of Windows software generally tend to fare poorly on the Mac. Mac users tend not to like obvious ports from the Windows world, and several folks suggested that he might need to do some rejiggering of his program’s interface layout–moving buttons, repositioning alert icons, and so on–so that they fit the Mac guidelines better.

Which is true, but incomplete, and misses what I think is a really important point about software design. Or any kind of design, for that matter.


Right now, as I type this, Apple and Samsung are involved in a nasty patent spat concerning infringement of certain Apple user interface patents for cell phones. A lot of folks commenting from the sidelines on the spat tend to paint Apple as a villain, usually on the grounds that the patents in question (which generally relate to things like how searches work and so on) are “obvious,” and therefore shouldn’t be patentable at all.

Leaving aside entirely the question of whether or not Apple is the bad guy, the fact that so many folks deride the user-interface patents in question as “obvious” demonstrates a couple of important principles.

The first is that many computer geeks don’t understand design, and because they don’t understand design, they have contempt for it. (It is, unfortunately, a very common trait I’ve noticed among geeks, and particularly computer geeks, to assume that if they lack some particular skill, it’s only because that skill is trivial and not really worth bothering about.)

The second is that people tend not to pay attention to design unless it’s bad. Good design always looks obvious in hindsight, when it is noticed at all.


Today, touch-screen smartphones have generally settled on the same overall user interface idea: a series of virtual pages, accessed by swiping, which contain icons that can be touched to launch applications. But it wasn’t so long ago that such a simple and obvious user interface was unknown. Case in point: The first Windows CE devices.

The Windows CE-based smartphones used the same metaphor as Windows desktop systems: a “desktop” onto which you could place icons, and a tiny “start” menu in the corner of the screen which you would touch with a stylus or move a virtual mouse pointer over with a set of arrow keys or a rocker button to bring up a menu of applications.

This user interface succeeds on desktops but is an abject, epic failure on small-screen devices, because it was never designed for that usage environment. Yet this, and things like it, were the norm for handheld devices for years, because nobody had come up with anything better. Nowadays we look at Android or iOS and marvel that anyone could be so dumb as to attempt the Windows desktop interface on a phone. Good design always looks obvious in hindsight.


So back to the mailing list.

Several of the responses the guy who wanted to port his software received concerned learning things like the ‘correct’ button placement and icon size on Mac systems. But that does not, I think, really address the central problem, which is that Mac users (and I know I’m going to get some flak for saying this) are accustomed to a higher level of design than Windows users are.

And there’s more to design than how big the icons are or where the buttons are placed. Way too many people have this notion that design is something you bolt onto an application after it’s finished; you make the program do what it should do, and then you call Joe the graphics guy from the other side of the building, who isn’t a real programmer but knows how to do some graphics stuff to make it all look pretty.

Back in the early days of the Mac, Apple released a rather hefty book called “Macintosh Human Interface Guidelines.” I had a copy of it for a long time. It’s quite thick, and covers almost every aspect of user interface design. Yes, there are a lot of bits about how many pixels wide an icon should be and where a button should be placed on a window, but it goes way beyond that, into program flow, error handling, and a lot more.

It’s a book I think all programmers should read, regardless of what environment they program for.

I don’t think Windows has ever had an equivalent to this book. Windows prior to Windows 95 didn’t seem to have any such guide, at least not that I can find. The earliest published document I can find for Windows was produced in 1995, and it was quite short, covering program design in nowhere near the depth of the Mac version. A PDF is available here. I’m pretty sure Linux hasn’t either, though individual user interface shells may. (Gnome has one, and so does KDE; Unity seems not to.) And I think that helps contribute to the contempt that many programmers have for design, and to the notion that design is “pretty pictures that you put into the dialogs after the program is done.”


I wrote a reply on the list outlining some of the difficulties Windows programmers face when trying to port to the Mac. The considerations do include where to position user interface elements on the screen, of course; Mac programmers expect a certain consistency. But there’s a lot more to it. Here’s what I wrote:

The issue with Mac software isn’t one of following a list of guidelines, in my experience, so much as one of practicing good design.

The principles in the Apple Human Interface Guidelines tend to promote good design, but there are many applications that don’t follow them (even applications from Apple) yet still give the ‘user experience’ that Mac users want. It’s about good, thoughtful, humane design, not about how big the buttons are or what fonts are used or how many pixels away from the edge of the window the buttons are located.

“Design” is a difficult concept, and one that a great many programmers–even good programmers–don’t have a good grasp of. There are a lot of terrible applications out there (on all platforms), though in the years I’ve been using Macs, Windows, and Linux I’ve found that Mac apps generally tend to be better designed than apps for the other two platforms. Indeed, Linux in particular tends to reward inhumane application design, enshrining programs with great power but also with an obtuse, cumbersome, and heavy user interface that is opaque to anyone without a thorough understanding of the software. EMACS is arguably one of the greatest examples of software utterly divorced from humane design. (Before anyone accuses me of engaging in partisan holy wars, I started using MS-DOS at version 2.11, Windows at 3.0, and Macs at System 1, and I’ve been using Linux since about 1998. I first came to EMACS on a DECsystem-20 running TOPS-20; before that, I used TECO on a PDP-11.)

Humane application design extends way beyond pretty pictures in the splash screen and memorizing lists of rules about where to put buttons on a screen. The principles of humane design are probably outside the scope of one email on a mailing list, but they include things like:

Clarity. A well-designed user interface strives, as far as is reasonably possible, for simplicity, obviousness, and clarity. Functions should be presented to the user in a logical and comprehensible manner, with similar functions presented in similar ways and available options described in the clearest possible language.

Consistency. Different areas of the software’s human interface should be designed, as far as is possible, to be both visually and functionally similar. If the user changes from one mode to another, she should not be presented with a jarringly different interface that is arranged entirely differently. Functions that are common to all areas or modes of the software should continue to work in the same way. The Microsoft Office suite is an example of a set of programs with poor consistency; in each of the parts of the suite, the same functions are often located in different places, under different menu items.

Predictability. Humane software does not modify or delete the user’s information without the user’s express permission. Consequences of user action, especially action that might involve loss of data, should be clearly communicated. User choices should be presented in a way that clearly communicates the results of each choice; for example, an inhumane, poorly-designed dialog box might read “A network error occurred” with buttons reading “OK” and “Cancel,” leaving the user with no clear way to predict what pressing either of those buttons will do.

Ideally, buttons should be labeled with verbs, which helps to communicate the consequences of making a selection as rapidly as possible. It’s not great design to have a dialog box reading “A network error occurred; try again?” with buttons labeled “Yes” and “No.” Better is a dialog box with buttons labeled “Try Again” and “Disconnect.”

Clear communication. There’s a great example of this in the Apple Human Interface Guidelines. A poorly-designed error message for a text entry field might read “Improper data format entered.” A better error message might read “Numeric entry only.” A well-designed error message might read “The ZIP code must be five numbers or five numbers with a dash and four numbers.” The software communicates what is expected in a way that is easy for the user to understand, even when (in fact, especially when) an error condition is encountered.

Resilience. The design of the software should strive, as far as is possible, to preserve user input and user data even in the event of some sort of error condition. This means, for example, that the software will not discard everything the user has entered up to that point if the user types an incorrect ZIP code; the software will not lose the user’s input without warning if the user leaves one mode and enters another mode (for example, if the user types part of a shipping address, then backs up a screen to change the discount code she has entered), and the software will always make it clear if data will be or have been lost.

Forgiveness. The user interface should, as far as is possible, be designed to forgive mistakes. This includes such obvious things as Undo functionality, which in this day and age even the most inhumane software implements because it’s become part of the cultural set of expectations from any software. Better implementations include the ability to Undo even after the user has done a Save or a Revert to Saved (Adobe applications consistently implement this). Humane software will not irrevocably destroy a user’s data at the click of a wrong button, and will attempt, insofar as is possible, to recover data in the event of a crash (applications like Microsoft Word are quite good at this, though it’s not always technically possible in, say, large graphics editing apps).

Familiarity. Good design does not have to be beholden to the past, but if you’re presenting the user with a completely unfamiliar experience, expect resistance. When a person gets into a car, she expects certain things from the user interface; replacing the steering wheel and pedals with a joystick and the windshield with a holographic projector might be appropriate for a concept car or a science-fiction movie, but probably isn’t for the next-generation Chevy Lumina. If you change things about the expected user experience, make sure you have a clear and compelling reason to do so; don’t violate the user’s expectations merely because you can. This, unfortunately, is the only place where many programmers feel design is important, and is where rules such as the fonts used in buttons and the distance the buttons are placed from the edge of the window come into play.

Responsiveness. The application should be designed in such a way as to remain responsive to the user as often as possible in as many conditions as possible, and to throw as few roadblocks in the user’s way as possible. This goes beyond simply shifting CPU-intensive operations into their own thread, and encompasses a number of architectural, coding, and human interface choices. For example, humane software is modeless wherever possible; use modal dialogs that block user activity only where absolutely necessary and where no other design will do. Make it clear what window or data is affected by a modal dialog (this is a place where I believe the design and implementation of Windows falls short, and where the Mac’s “sheet” window is a significant human interface win). If you must use a modal window, seek wherever possible to allow the user to clear the fault within the modal window, rather than forcing the user to dismiss the modal dialog and then go back a step to fix whatever the problem is.

There’s a lot more, of course, but the basic point here is that good design isn’t something that you glue onto a program with pretty icons and controls that follow all the rules. It’s something that has to be baked into an application from the ground up, and for better or for worse it is my observation that users’ expectations of good design tend to be higher on Macs than on other systems.
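
To make a couple of those principles a bit more concrete, here are a few quick sketches. They’re my own after-the-fact illustrations in plain, toolkit-neutral Python, not anything from the original email, and every name in them is a placeholder.

First, predictability: the dialog maps each verb label to the action it triggers, so the consequence of a choice is spelled out before the user commits to anything. (A console prompt stands in for a real dialog box here.)

def ask(message, actions):
    """Present a message and verb-labeled choices; run whichever one is picked."""
    prompt = message + " [" + " / ".join(actions) + "] "
    while True:
        choice = input(prompt).strip()
        if choice in actions:
            return actions[choice]()

# Usage sketch; the labels say what will happen, unlike "OK" / "Cancel".
# ask("The connection to the server was lost.",
#     {"Try Again": retry_connection, "Disconnect": work_offline})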
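Next, clear communication, using the ZIP code example above: the validation routine hands back a message phrased in the user’s terms rather than the programmer’s.

import re

ZIP_PATTERN = re.compile(r"^\d{5}(-\d{4})?$")

def zip_code_error(text):
    """Return None if the ZIP code is valid, otherwise a human-readable message."""
    if ZIP_PATTERN.match(text.strip()):
        return None
    # Say what is expected, not merely that the input was "improper."
    return ("The ZIP code must be five numbers, or five numbers "
            "followed by a dash and four numbers.")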
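And forgiveness: every destructive operation records how to reverse itself, so a wrong click never irrevocably destroys the user’s data. (The class is purely illustrative.)

class AddressBook:
    def __init__(self):
        self.entries = []
        self._undo_stack = []

    def remove(self, index):
        removed = self.entries.pop(index)
        # Record the inverse operation rather than simply forgetting the entry.
        self._undo_stack.append(lambda: self.entries.insert(index, removed))

    def undo(self):
        if self._undo_stack:
            self._undo_stack.pop()()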

If Microsoft Designed Facebook

About five or six years ago, before Microsoft decided they wanted a slice of the portable MP3 player pie and introduced the Zune, a video called “Microsoft Re-Designs the iPod Packaging” made the rounds of the Internettubes.

At the time, I was running a small consulting firm that shared office space with an advertising and design company, which was also my biggest client. I passed the video around the office, and it got quite a few chuckles. It’s a spot-on send-up of what was, back then, Microsoft’s biggest marketing weakness: a colossal, sometimes hilarious, and always hamfisted incompetence in all matters of design. (Steve Jobs is reported to have once remarked, “It’s not that Microsoft keeps stealing our ideas, it’s that they’re so ugly!”)

If you haven’t seen the video, it’s worth a look and a chuckle or two, even though it’s a bit outdated.

But I didn’t come here to talk about Microsoft. I came here to talk about Facebook.


Apparently, Facebook introduced a new design change today. I didn’t actually notice until someone called me up and asked my opinion on it; I rarely use Facebook. For the most part, it’s just a repository for my Twitter nattering. I hear it’s a big deal in some quarters, though, so I wandered over to take a look.

And my goodness, have they got things wrong.

Now, Facebook is ugly. Facebook has always been ugly. Most Web 2.0 properties are ugly. Web programmers, by and large, don’t understand design (or user interface), and like almost all computer people everywhere, they figure that anything that they don’t understand is not worth understanding, so they have contempt for design as well. To a Web 2.0 programming guru, design means making a pale blue banner with the name of the Web site and a line drawing of a logo or an animal or something on it and slapping it at the top of the page.

That’s not entirely the fault of the programmers, of course; the basic, fundamental structure of CSS discourages good design, just by making it more of a pain in the ass than it really needs to be. You can do good design in CSS, if you’re the sort of person who doesn’t mind doing linear algebra in your head while walking a tightrope stretched across the Grand Canyon with no net, and you don’t mind that it won’t render in Internet Explorer anyway…but I digress. Where was I again?

Oh, yeah. Facebook.

So. Facebook is a business, and a profitable one. Everything about it, from the back-end infrastructure to the HTML that appears on the home page, is about making money. That means that any analysis of anything they do, including changing their design, needs to be done through the lens of how it benefits Facebook financially. And the new design is clearly intended to do that.

Unfortunately, they take the same approach as Microsoft: throw everything that might make money (Third-party endorsements! Bullet points! Big colorful discount offers!) at the wall and see what sticks. Each individual design decision, by itself, has a financial goal…but the end result is a mess.

Good design is worth money, too. People gravitate toward it–and here’s an important bit–even if they don’t understand it. There are a lot of folks who hate Apple, but their design strategy works.

And the evidence is written all over the Web 1.0 wreckage. Take Yahoo’s home page (please!). Yahoo, desperate for money, decided to keep packing crap onto the home page. News, video ads, horoscopes, music, movie trailers…each element, by itself, either directly or indirectly brings in money.

Yet Yahoo’s proverbial clock has well and truly been cleaned by Google, whose home page is Spartan in its simplicity, and yet who makes money faster than the U.S. Mint can print it.

Design matters. Today’s Facebook looks like a social networking site designed by Microsoft in 2005, only creepier.


For me, it’s the creepiness factor that really does it.

I’m used to Web 2.0 being ugly. I’m resigned to it. Examples of beautiful Web 2.0 design are about as thin on the ground as snowmen in the Bahamas, and on some level I’ve simply accepted that and moved on.

But the new Facebook design? It’s like someone took Microsoft’s aesthetic, combined it with Google’s tentacular creepiness, and put the result in one place.

In the past, my Facebook wall was a chronology of what was going on in my friends’ lives. Now, I don’t answer most Facebook friend requests, unless they come from folks I know to one degree or another, and apparently that’s a bit unusual. But my Wall was useful; I could glance at it and see, roughly, what was going on in more or less chronological order, and that seemed like it worked just fine.

But now? The “top posts” on my wall come from Facebook’s attempt to understand me and my interests, and that’s a bit freaky. “Hmm, I wonder what Franklin might be interested in today? Let’s see if we can tease that out and then show him what we think he’ll want to see.”

It’s as if a stalker camped out on my doorstep, went through my garbage, read my mail, followed me around town, pored over my grocery receipts, made detailed lists of everyone I spoke to and when…

…all for the purpose of cutting up and rearranging my newspaper so that the articles he thought I’d like the best were on top.

So that, y’know, I would buy his newspaper.

Creepy.

And it gets creepier when I look at Facebook’s suggestions for my “close friends” list. Facebook not only wants, in its particularly stalkeriffic way, to know what sorts of subjects interest me, it also wants to know who my REAL best friends are. And not content just to ask me, it…makes suggestions.

Suggestions that world-class supercomputing infrastructure has been brought to bear on. Suggestions that involve analyzing every little telltale crumb of information I let it have.

Google, to be fair, is just as creepifyingly stalkeriffic as Facebook; it’s just (slightly) less in my face about it. Google stalks me to know what sorts of ads to present to my eyeballs; Facebook stalks me to make things easier for me.

Thanks, Mark “The Age of Privacy is Over” Zuckerberg. At least you’re refreshing in one sense; you’re one of the few business bigwigs who actually puts his words into action.


Since I started this with a video, it’s reasonable to end it with a video. It shows Steve Jobs, until recently the CEO of one of the most financially successful businesses in history, responding to an openly insulting question about his return to Apple with grace and dignity. Granted, he’s basically a sociopath, but the interesting bit is when he talks about prioritizing user experience over technical faffery. He’s another of the few business leaders who practices what he preaches, and I think the example of Apple Computer shows that prioritizing design and user experience can be profitable too.

“You’ve got to start with the customer’s experience and work backwards from that.”