Some thoughts on government funding for research

Every time you buy a hard drive, some of your money goes to the German government.

That’s because in the late 1980s, a physicist named Peter Grünberg at the Forschungszentrum Jülich (Jülich Research Center) made a rather odd discovery.

The Jülich Research Center is a government-funded German research facility that explores nuclear physics, geoscience, and other fields. There’s a particle accelerator there, and a neutron scattering reactor, and not one or two or even three but a whole bunch of supercomputers, and a magnetic confinement fusion tokamak, and a whole bunch of other really neat and really expensive toys. All of the Center’s research money comes from the government, split between the German federal government and the Federal State of North Rhine-Westphalia.

Anyway, like I was saying, in the late 1980s, Peter Grünberg made a rather odd discovery. He was exploring quantum physics, and found that in a material made of alternating layers of magnetic and non-magnetic metals, if the layers are thin enough (and by “thin enough” I mean “only a few atoms thick”), the material’s resistance changes dramatically when it’s exposed to very, very weak magnetic fields.

There’s a lot of deep quantum voodoo about why this is. Wikipedia has this to say on the subject:

If scattering of charge carriers at the interface between the ferromagnetic and non-magnetic metal is small, and the direction of the electron spins persists long enough, it is convenient to consider a model in which the total resistance of the sample is a combination of the resistances of the magnetic and non-magnetic layers.

In this model, there are two conduction channels for electrons with various spin directions relative to the magnetization of the layers. Therefore, the equivalent circuit of the GMR structure consists of two parallel connections corresponding to each of the channels. In this case, the GMR can be expressed as

δH = (R↑↓ − R↑↑) / R↑↑ = (ρF↑ − ρF↓)² / [(2ρF↑ + χρN)(2ρF↓ + χρN)]

Here the subscripts of R denote collinear and oppositely oriented magnetization in the layers, χ = b/a is the thickness ratio of the magnetic and non-magnetic layers, ρN is the resistivity of the non-magnetic metal, and ρF↑ and ρF↓ are the resistivities of the magnetic layers for the two spin channels. This expression is applicable for both CIP and CPP structures.

Make of that what you will.
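If you’d like to make something of it in code, the two-channel resistor model in that quote boils down to a single function. Here’s a minimal sketch; the resistivity values are made-up illustrative numbers, not measurements of any real material:

```python
# The two-channel resistor model of giant magnetoresistance (GMR) from the
# Wikipedia passage above: spin-up and spin-down electrons see different
# resistances, and the two channels conduct in parallel.

def gmr(rho_f_up, rho_f_down, rho_n, chi):
    """Fractional resistance change between antiparallel and parallel layers.

    rho_f_up, rho_f_down: resistivities of the magnetic layers for electrons
    whose spin is aligned / anti-aligned with the layer magnetization.
    rho_n: resistivity of the non-magnetic spacer layer.
    chi:   thickness ratio b/a of the magnetic and non-magnetic layers.
    """
    return (rho_f_up - rho_f_down) ** 2 / (
        (2 * rho_f_up + chi * rho_n) * (2 * rho_f_down + chi * rho_n)
    )

# Made-up numbers, purely to show the shape of the effect:
print(gmr(1.0, 4.0, 0.5, 1.0))  # ~0.42: very different spin channels, big effect
print(gmr(1.0, 1.5, 0.5, 1.0))  # ~0.03: similar spin channels, barely anything
```

The punchline, for our purposes, is simply that the more differently the two spin channels behave, the bigger the resistance swing. That is exactly the property you want in a hard drive read head.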


Conservatives and libertarians have a lot of things in common. In fact, for all intents and purposes, libertarians in the United States are basically conservatives who are open about liking sex and drugs. (Conservatives and libertarians both like sex and drugs; conservatives just don’t cop to it.)

One of the many areas they agree on is that the government should not be funding science, particularly “pure” science with no obvious technological or commercial application.

Another thing they have in common is that they don’t understand what science is. In pure research, you can never tell in advance what will turn out to have technological or commercial application.

Back to Peter Grünberg. He discovered that quantum mechanics makes magnets act really weird, and in 2007 he shared a Nobel Prize with French physicist Albert Fert, a researcher at the French Centre national de la recherche scientifique (French National Centre for Scientific Research), France’s largest government-funded research facility.

And it turns out this research had very important commercial applications:

You know how in the 80s and 90s, hard drives were these heavy, clunky things with storage capacities smaller than Rand Paul’s chances at ever winning the Presidency? And then all of a sudden they were terabyte this, two terabyte that?

Some clever folks figured out how to use this weird quantum mechanics voodoo to make hard drive heads that could respond to much smaller magnetic fields, meaning more of them could be stuffed on a magnetic hard drive platter. And boom! You could carry around more storage in your laptop than used to fit in a football stadium.

It should be emphasized that Peter Grünberg and Albert Fert were not trying to invent better hard drives. They were government physicists, not Western Digital employees. They were exploring a very arcane subject–what happens to magnetic fields at a quantum level–with no idea what they would find, or whether it would be applicable to anything.


So let’s talk about your money.

When it became obvious that this weird quantum voodoo did have commercial possibility, the Germans patented it. IBM was the first US company to license the patent; today, nearly all hard drives license giant magnetoresistance patents. Which means every time you buy a hard drive, or a computer with a hard drive in it, some of your money flows back to Germany.

Conservatives and libertarians oppose government funding for science because, to quote the Cato Institute,

[G]overnment funding of university science is largely unproductive. When Edwin Mansfield surveyed 76 major American technology firms, he found that only around 3 percent of sales could not have been achieved “without substantial delay, in the absence of recent academic research.” Thus some 97 percent of commercially useful industrial technological development is, in practice, generated by in-house R&D. Academic science is of relatively small economic importance, and by funding it in public universities, governments are largely subsidizing predatory foreign companies.

Make of that what you will. I’ve read it six times and I’m still not sure I understand the argument.

The Europeans are less myopic. They understand two things the Americans don’t: pure research is the necessary foundation for a nation’s continued economic growth, and private enterprise is terrible at funding pure research.

Oh, there are a handful of big companies that do fund pure research, to be sure–but most private investment in research comes after the pure, no-idea-if-this-will-be-commercially-useful, let’s-see-how-nature-works variety.

It takes a lot of research and development to get from “Aha! Quantum mechanics does this strange thing when this happens!” to a gadget you have in your home. That takes money and engineering, and it’s the sort of work private enterprise excels at. In fact, the Cato Institute cites many examples of privately funded biotechnology and semiconductor research, but these are types of research that already have a clear practical value, and they take place after the pure research upon which they rest.

So while the libertarians unite with the Tea Party to call for the government to cut funding for research–which is working, as government research grants have fallen for several years in a row–the Europeans are ploughing money into their physics labs and research facilities and the Large Hadron Collider, which I suspect will eventually produce a stream of practical, patentable ideas…and every time you buy a hard drive, some of your money goes to Germany.

Modern societies thrive on technological innovation. Technological innovation depends on understanding the physical world–even when it seems at first like there aren’t any obvious practical uses for what you learn. The Europeans know that; we don’t. I think that’s going to catch up with us.

Wrong in the age of Google: Memes as social identity

A short while ago, I posted a tweet that was occasioned by a pair of memes I saw posted on Facebook:

The memes in question have both been circulating for a while, which is terribly disappointing now that we live in the Golden Age of Google. They’re being distributed over an online network of billions of globally-connected devices…an online network of billions of globally-connected devices which lets people discover in just a few seconds that they aren’t actually true.

A quick Google search shows both of these memes, which have been spread across social media countless times, are absolute rubbish.

The quote attributed to Albert Einstein appears to have originated with a self-help writer named Matthew Kelly, who falsely attributed it to Einstein in what was probably an attempt to make it sound more legitimate. It doesn’t even sound like something Einstein would have said.

The second is common on conservative blogs and decries the fact that Obamacare (or, sometimes, Medicaid) offers free health coverage to undocumented immigrants. In fact, Federal law bars undocumented immigrants from receiving Federal health care services or subsidies for health insurance, with just one exception: Medicaid will pay hospitals to deliver the babies of undocumented mothers (children born in the United States are US citizens regardless of the status of their parents).

Total time to verify both of these memes on Google: less than thirty seconds.

So why, given how fast and easy it is to verify a meme before reposting it, does nobody ever do it? Why do memes that can be demonstrated to be false in less time than it takes to order a hamburger at McDonald’s still get so much currency?

The answer, I think, is that it doesn’t matter whether a meme is true. It doesn’t matter to the people who post memes and it doesn’t matter to the people who read them. Memes aren’t about communication, at least not communication of facts and ideas. They are about social identity.


Viewed through the lens of social identity, memes suddenly make sense. The folks who spread them aren’t trying to educate, inform, or communicate ideas. Memes are like sigils on a Medieval lord’s banner: they indicate identity and allegiance.

These are all memes I’ve seen online in the last six weeks. What inferences can we make about the people who posted them? These memes speak volumes about the political identities of the people who spread them; their truthfulness doesn’t matter. We can talk about the absurdity of Oprah Winfrey’s reluctance to pay taxes or the huge multinational banks that launder money for the drug cartels, and both of those are conversations worth having…but they aren’t what the memes are about.

It’s tempting to see memes as arguments, especially because they often repeat the talking points of arguments. But I submit that’s the wrong way to view them. They may contain an argument, but their purpose is not to argue; they are not a collective debate on the merits of a position.

Instead, memes are about identifying the affiliations of the folks who post them. They’re a way of signaling in-group and out-group status. That makes them distinct from, say, the political commentary in Banksy’s graffiti, which I think is more a method of making an argument. Memes are a mechanism for validating social identity. Unlike graffiti, there’s no presupposition the memes will be seen by everyone; instead, they’re seen by the poster’s followers on social media–a self-selecting group likely to already identify with the poster.

Even when they’re ridiculously, hilariously wrong. Consider this meme, for example. It shows a photograph of President Barack Obama receiving a medal from the king of Saudi Arabia.

The image is accurate, though the caption is not. The photo shows Barack Obama receiving the King Abdul Aziz Order of Merit from King Abdullah. It’s not unconstitutional for those in political office to receive gifts from foreign entities, provided those gifts are not kept personally but are turned over to the General Services Administration or the National Archives.

But the nuances, like I said, don’t matter. It doesn’t even matter that President George W. Bush received the exact same award while he was in office:

If we interpret memes as a way to distribute facts, the anti-Obama meme is deeply hypocritical, since the political conservatives who spread it aren’t bothered that a President on “their” side received the same award. If we see memes as a way to flag political affiliation, like the handkerchiefs some folks in the BDSM community wear in their pockets to signal their interests, it’s not. By posting it, people are signaling their political in-group.

Memes don’t have to be self-consistent. The same groups that post this meme:

also tend by and large to support employment-at-will policies giving employers the right to fire employees for any reason, including reasons that have nothing to do with on-the-job performance…like, for instance, being gay, or posting things on Facebook the employer doesn’t like.

Memes do more than advertise religious affiliation; they signal social affiliation as well.

Any axis along which a sharp social division exists will, I suspect, generate memes. I also suspect, though I think the phenomenon is probably too new to be sure, that times of greater social partisanship will be marked by wider and more frequent distribution of memes, and issues that create sharper divides will likewise lead to more memes.

There are many ideas that are “identity politics”–ideas that are held not because they’re supported by evidence, but simply because they are a cost of entry to certain groups. These ideas form part of the backbone of a group; they serve as a quick litmus test of whether a person is part of the out-group or the in-group.

For example, many religious conservatives reflexively oppose birth control for women, even though most members of their own congregations, like the majority of women in the US at large, use it. Liberals reflexively oppose nuclear power, even though it is by far the safest source of power on the basis of lives lost per terawatt-hour of electricity produced. The arguments used to support these ideas (“birth control pills cause abortions,” “nuclear waste is too dangerous to deal with”) are almost always empirically, demonstrably false, but that’s irrelevant. These ideas are part of a core set of values that define the group; holding them is about communicating shared values, not about truth and falsehood.

Unfortunately, these core identity ideas often lead directly not only to misinformation and a distorted worldview, but to actual human suffering. Opposition to vaccination and genetically modified foods are identity ideas among many liberals; conservatives oppose environmental regulation and deny human involvement in climate change as part of their identity ideas. These ideas have already led to human suffering and death, and are likely to lead to more.

Human beings are social animals capable of abstract reasoning, which perhaps makes it inevitable that abstract ideas are so firmly entrenched in our social structures. Ideas help define our social structures, identify in-group and out-group members, and signal social allegiances. The ideas we present, even when they take the form of arguments, are often not attempts at dialog so much as flags that let others know which lord we march for. Social media memes are, in that way, more accurately seen as house sigils than social discourse.

What my cat teaches me about divine love

This is Beryl.

Beryl is a solid blue Tonkinese cat. He shares a home with (I would say he belongs to, but the reverse may be true) zaiah and me, and spends a good deal of each day perched on my shoulder. I write from home, and whenever I’m writing, there’s a pretty good chance he’s on my shoulder, nuzzling my ear and purring.

He’s a sweetheart–one of the sweetest cats I’ve ever known, and believe me when I say I’ve known a lot of cats.

Whenever we’re in the bedroom, Beryl likes to sit on a pillow atop the tall set of shelves we have on the wall next to the bed. It didn’t take him long to learn that the bed is soft, so rather than climbing down off the top of the shelves, he will often simply leap, legs all outstretched like a flying squirrel’s, onto the bed.

Now, if I wanted to, I could get a sheet of plywood, put it on top of the bed, then put the blanket over top of it. That way, when Beryl leapt off the shelves, he’d be quite astonished to have his worldview abruptly and unpleasantly upended.

But I wouldn’t do that. I wouldn’t do that for two reasons: (1) I love my cat, and (2) it would be an astonishingly dick thing to do.

That brings us to God.

This is a fossil.

More specifically, it’s a fossil of Macrocranion tupaiodon, an early mammal that lived during the Eocene, somewhere between 56 and 34 million years ago, and died out in the Eocene–Oligocene extinction event.

Now, there are very, very few things in this world that conservative Orthodox Jews, Fundamentalist Muslims, and Evangelical Christians will agree on, but one thing that some of these folks do have in common is the notion that fossils like this one do not actually represent the remains of long-vanished animals, because the world is much younger than what such fossils suggest. Most conservative Muslims are more reasonable on this point than their other Abrahamic fellows, though apparently the notion of an earth only a few thousand years old is beginning to take hold in some parts of the Islamic ideosphere.

That presents a challenge; if the world is very young, whence the fossils? And one of the many explanations put forth to answer the conundrum is the idea that these fossils were placed by a trickster God (or, in some versions of the story, allowed by God to be placed by the devil) for the purpose of testing our faith.

And this, I find profoundly weird.

The one other thing all these various religious traditions agree on is that God loves us* (*some exclusions and limitations apply; offer valid only for certain select groups and/or certain types of people; offer void for heretics, unbelievers, heathens, idolators, infidels, skeptics, blasphemers, or the faithless).

And I can’t quite wrap my head around the notion of deliberately playing this sort of trick on the folks one loves.

Yes, I could put a sheet of plywood on my bed and cover it with a blanket. But to what possible end? I fear I lack the ability to rightly apprehend what kind of love that would show to my cat.

Which leads me to the inescapable conclusion that a god that would deliberately plant, or allow to be planted, fake evidence contradicting the approved account of creation would be a god that loved mankind rather less than I love my cat.

It seems axiomatic to me that loving someone means having their interests and their happiness at heart. Apparently, however, these believers have a rather more unorthodox idea of love. And that is why, I think, one should perhaps not trust this variety of believer who says “I love you.” Invite such a person for dinner, but count the silverware after.

Of Android, iOS, and the Rule of Two Thousand, Part II

In part 1 of this article, I blogged about leaving iOS when I traded my iPhone for an Android-powered HTC Sensation 4G, and how I came to detest Android in spite of its theoretical superiority to iOS and came back to the iPhone.

Part 1 talked about the particular handset I had, the T-Mobile version of the Sensation, a phone with such ill-conceived design, astronomically bad build quality, and poor reliability that at the end of the year I was on my third handset under warranty exchange–every one of which failed in exactly the same way.

Today, in Part 2, I’d like to talk about Android itself.


When I first got my Sensation, it was running Android 2.3, code-named “Gingerbread.” Android 3 “Honeycomb” had been out for quite some time, but it was a build aimed primarily at tablets, not phones. When I got my phone, Android 4 “Ice Cream Sandwich” was in the works, ready to be released shortly.

That led to one of my first frustrations with the Android ecosystem–the shoddy, patchwork way that operating system updates are released.

My phone was promised an update in the second half of 2011. This gradually changed to Q4 2011, then to December 2011, then to January 2012, then to Q1 2012. It was finally released on May 16 of 2012, nearly six months after it had been promised.

And I got off lucky. Many Motorola users bought smart phones just before the arrival of Android 4; their phones came with a written guarantee that an update to Android 4 would be published for their phones. It never happened. To add insult to injury, Motorola released a patch for these phones that locked the bootloader, rendering the phone difficult or impossible to upgrade manually with custom ROMs–so even Android enthusiasts couldn’t upgrade the phones.

Now, this is not necessarily Google’s fault. Google makes the base operating system; it is the responsibility of the individual handset manufacturers to customize it for their phones (which often involves shoveling a lot of crapware and garbage programs onto the phone) and then release it for their hardware. Google has done little to encourage manufacturers to backport Android or to offer a consistent user experience with software updates, instead leaving the device manufacturers free to do pretty much as they choose, short of actually forking Android…which has led to what developers call “platform fragmentation,” and to what Motorola Electrify, Photon, and Atrix users call things I shan’t repeat in a blog as family-friendly as this one.

But what of the operating system itself?

Well, it’s a mixed bag of mess.


When I first got my Android phone, I noted how the user interface seemed to have been designed by throwing a box of buttons and dialogs and menus over one’s shoulder and then wiring them up wherever they landed. System settings were scattered across three different places, and it wasn’t necessarily obvious where you might find any particular setting. Functionality was duplicated in different places. The Menu button is a mess; it’s filled with whatever the programmer couldn’t find a better place for, with little thought to good UI design.

Android is built on Linux, an operating system that has a great future on the desktop ahead of it, and always will. The Year of Linux on the Desktop was 2000 was 2002 was 2005 was 2008 was 2009 was 2012 will be 2013. Desktop aside, Linux has been a popular server choice for a very long time, because one thing Linux genuinely has going for it is rock-solid reliability. When I was working in Atlanta, I had a Gentoo Linux server that had accumulated well over two years’ continuous uptime and was shut down only because it needed to be moved.

So it is somewhat consternating that Linux on cell phones seems rather fragile.

So fragile, in fact, that my HTC Sensation would pop up a “New T-Mobile Service Notice” alert every week, reminding me to restart the phone. Even the network operators, it would seem, have little confidence in Android’s stability.

It’s a bit disappointing that the one thing I most like about Linux seems absent from Android. Again, though, this might not be Google’s fault directly; the handset makers and network operators do this to themselves, by taking Android and packaging it up with a bunch of craplets of spotty reliability.

One of the things it is really, really important to be aware of in the Android ecosystem is the way the money flows. You, as a cell phone owner, are not Google’s customer. Google’s customer is the handset manufacturer. You, as a cell phone owner, are not the handset manufacturer’s customer. The handset manufacturer’s customer is the network operator. You are the network operator’s customer–but you are not the network operator’s only customer.

Because of this, the handset maker and the network operator will seek additional revenue streams whenever they can. If someone offers HTC money to bundle some crap app on their phones, HTC will do it. If T-Mobile decides it can get more revenue by bundling its own or someone else’s crap app on your phone, it will.

Not only are you not the customer, at some points along the chain–for the purposes of Google ad revenue, say–you are the product being sold. Whenever you hear people talking about “freedom” or “openness” in the Android ecosystem, never forget that.

I sometimes travel outside the US, mainly to Canada these days. When I do that, my phone really, really, really wants me to turn on data roaming.

There are reasons for that. When you roam, especially internationally, the telcos charge rates for data that would make a Mafia loan shark blush. So Android agreeably nudges you to turn on data roaming, and here’s kind of a sticking point…

Even if you’re connected to the Internet via wifi.

It pops up an alert constantly, and by “constantly” I really do mean constantly. Even when you have wifi access, it pops up every time you switch applications, every time you unlock the phone, and about every twenty minutes when you aren’t using the phone.

Just think of it as Google’s way of helping the telcos tap your ass–er, that revenue stream.

This multiple-revenue-streams-from-multiple-customers model has implications, not only for the economics of the ecosystem, but for the reliability of your phone as well. And even for the battery life of your phone.

Take HTC phones on T-Mobile (please!). They come shoveled–err, “bundled”–with an astonishing array of crap. HTC’s mediocre Facebook app. HTC Peep, HTC’s much-worse-than-mediocre Twitter client. Slacker Radio, a client for a B-list Internet radio service.

The presence of all the various crapware that comes preloaded on most Android phones, plus the fact that Android apps don’t quit when they lose focus, generally means that a task manager app is a necessary addition to any Android system…which is fine for the computer literate, but less than ideal for folks who aren’t so computer savvy.

And it doesn’t always help.

For example, Slacker Radio on my Sensation insists on launching at startup and running all the time, whether I want it to or not:

Killing it with the task manager never works. Within ten minutes of being killed, it somehow respawns, like a zombie in a George Romero movie, shambling after you no matter how many times you shoot it:

The App Manager in the Android control panel has a function to disable an app entirely, even if it’s set to launch at startup. For reasons I was never able to understand, this did not work with Slacker. It was always there. Always. There. It. Never. Goes. Away. You. Can’t. Hide. From. It.

Speaking of that “disable app” functionality…

Oh, goddamnit, no, I don’t want to turn on data roaming. Speaking of that “disable app” functionality, use it with care! I soon learned that disabling some bundled apps can have…unfortunate consequences.

Like HTC Peep, for instance. It’s the only Twitter client for smartphones I have yet found that is even worse than the official Twitter client for smartphones. It loads a system service at startup (absent from the Task Killer screenshots above because I have the task killer set not to display system services). If you let it, it will download your Twitter feed.

And download your Twitter feed.

And download your Twitter feed. It does not cache any of the Twitter messages you read; every time you start its user interface, it re-downloads the whole thing again. The result, as you might imagine, is eyewatering amounts of data usage. If you aren’t one of the lucky few who still has a truly unmetered data plan, think twice about letting Peep have your Twitter account information!
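For contrast, even the dumbest possible cache would fix this. The Twitter API of that era let a client ask for only the tweets newer than the last one it had seen (the since_id parameter), so all a client has to do is remember one number. Here’s a minimal sketch of the idea, with a made-up fetch function and cache path standing in for the real thing:

```python
# Sketch of the caching Peep doesn't do: remember what you've already
# downloaded and fetch only what's new. fetch_newer_than() is a stand-in
# for a real Twitter API call that passes along the since_id parameter.
import json
import os

CACHE_FILE = os.path.expanduser("~/.timeline_cache.json")  # hypothetical path

def load_cache():
    if os.path.exists(CACHE_FILE):
        with open(CACHE_FILE) as f:
            return json.load(f)
    return {"since_id": 0, "tweets": []}

def refresh(fetch_newer_than):
    """Fetch only unseen tweets and prepend them to the cached timeline."""
    cache = load_cache()
    new_tweets = fetch_newer_than(cache["since_id"])
    if new_tweets:
        cache["tweets"] = new_tweets + cache["tweets"]
        cache["since_id"] = max(t["id"] for t in new_tweets)
        with open(CACHE_FILE, "w") as f:
            json.dump(cache, f)
    return cache["tweets"]

# Fake fetcher for demonstration; a real client would hit the Twitter API.
def fake_fetch(since_id):
    timeline = [{"id": 3, "text": "three"}, {"id": 2, "text": "two"}]
    return [t for t in timeline if t["id"] > since_id]

print(len(refresh(fake_fetch)))  # first run: downloads both tweets
print(len(refresh(fake_fetch)))  # second run: downloads nothing new
```

One number and a file. That’s the entire engineering effort Peep declined to make.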

Oh, and don’t try to disable it in the application control panel. If you do, the phone’s unlock screen doesn’t work any more, as I discovered to my chagrin. Seriously.

The official Twitter app isn’t much better…

…but at least it isn’t necessary to unlock the damn phone.

All this crapware does more than eat memory, devour bandwidth, and slow the phone down. It guzzles battery power, too. One of the default Google apps, Google Maps, also starts a service each time the phone boots up, and man, does it hog the battery juice…even if you don’t use Maps at all. (This screen shot, for instance, was taken at a point in time when I hadn’t touched the Maps app in days.)

You will note the battery is nearly exhausted after only four hours and change. I eventually took to killing the Maps service whenever I restarted the phone, which seems to have improved the HTC’s mediocre battery life without actually affecting Maps when I went to use it.

Another place where Android’s lack of a clear and consistent user interface–

AAAAARGH! NO! NO, YOU PATHETIC FUCKING EXCUSE OF A THING, I DO NOT WANT TO TURN ON DATA ROAMING! THAT’S WHY I SAID ‘NO’ THE LAST 167 TIMES YOU ASKED! SO HELP ME, YOU ASK ME ONE MORE TIME AND I WILL TIP YOU STRAIGHT INTO THE NEAREST EMERGENCY INTELLIGENCE INCINERATOR! @$#%%#@!

Sorry, where was I?

Oh, yes. Another place where Android’s lack of a clear and consistent user interface shows is its contact management, which is surely one of the more straightforward bits of functionality any smart phone should have.

Android gives you, or perhaps “makes you take responsibility for,” a level of granularity in the inner workings of its contact database that really seems inappropriate.

It makes distinctions between contacts stored on your SIM card, contacts stored in the Google contact manager (and synced to the Google cloud), and contacts stored in other ways. There are, all in all, about half a dozen ways to store contacts–SIM card, Google cloud, T-Mobile cloud, phone memory card, and so on. They all look pretty much the same when you’re browsing your contacts, but the different ways of storing them carry different limitations on the type of data that can be stored.

Furthermore, it’s not immediately obvious how and where any particular contact is stored. Things you might think are being synced by Google might not actually be.

And worse, you can’t, as near as I was ever able to tell, export all your contacts at once. Oh, you can export them, all right; Android lets you save them in a .vcf file which you can then bring to another phone or sync with your computer. But you can’t export ALL of them. You have to choose which SET you export: export all the contacts on your SIM card? Export all your Google contacts? Export all your locally-saved-on-the-phone-memory-card contacts?

When I was in getting my second warranty replacement phone, I asked the technician if there was an easy way to take every contact on the phone and save all of them in one export. He said, no, there really isn’t; what he recommended I do was export each group to a different file, then import all those files to my Google contact list, and then finally delete all the duplicates from all the other contact lists.

It worked, but seriously? This is stupid user interface design. It’s a misfeature you might never encounter if you always (through luck or choice) save your contacts to the same set, but if for whatever reason you haven’t, God help you.

Yes, I can see why you might want to have separate contact lists, stored and backed up separately. No, that does not excuse the lack of any reasonable way to identify, sort, and merge those contact lists. C’mon, Google engineers, you aren’t even trying.
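For what it’s worth, the merge-and-dedupe chore the technician described is simple enough to script, which makes its absence from the phone even harder to excuse. Here’s a minimal sketch that combines several exported .vcf files and drops exact duplicates; it assumes simple cards delimited by BEGIN:VCARD/END:VCARD lines, and real-world vCard files can be hairier than that:

```python
# Merge several exported .vcf files and drop exact-duplicate cards.
import sys

def read_cards(path):
    """Yield each vCard in the file as a single string, delimiters included."""
    card = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            if line.strip().upper() == "BEGIN:VCARD":
                card = [line]
            elif line.strip().upper() == "END:VCARD" and card:
                card.append(line)
                yield "".join(card)
                card = []
            elif card:
                card.append(line)

def merge(paths):
    seen, merged = set(), []
    for path in paths:
        for card in read_cards(path):
            # Order-insensitive fingerprint, so the same contact exported
            # from two different stores still counts as a duplicate.
            key = tuple(sorted(card.strip().splitlines()))
            if key not in seen:
                seen.add(key)
                merged.append(card)
    return merged

if __name__ == "__main__":
    # Usage: python merge_vcf.py sim.vcf google.vcf phone.vcf > all.vcf
    sys.stdout.write("".join(merge(sys.argv[1:])))
```

That’s a few dozen lines. Google ships an entire operating system and couldn’t be bothered.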

And speaking of brain-dead user interface design, how about this alert?

What the fuck, Google?

Okay, I get it, I get it. WiFi sharing uses a lot of battery power. The flash uses battery power. Android is just looking out for my best interests, trying to save my battery…

…but don’t all the Fandroids carry on about how much better Android is because it doesn’t force you to do what it thinks is best for you, it lets you decide for yourself? Again I say, what the fuck, Google?


So far, I have complained mostly about the visible bits of Android, the user interface failings and design decisions that demonstrate a lack of any sort of rigorous, cohesive approach to UI design.

Unfortunately, the same problems apply to the internals of Android, too.

One early design decision Google made in the first days of Android concerns the way it handles screen redraws. Google intended for Android to be portable to a wide range of phones, from low-end phones to full-featured smartphones, and so Android does not make use of the same level of GPU acceleration that iOS does. Instead, it uses the CPU to perform many drawing tasks.

This has implications for both performance and usability.

User interface drawing occurs in an application’s main execution thread and is handled primarily by the CPU. (Technically speaking, each element on the screen–buttons, widgets, and so on–is rendered by the CPU, then the GPU handles the compositing.) That means that applications will often block while screen redraws are happening. On HTC Sense, for instance, if you put a clock on the home screen and then you start switching between screens, the clock will freeze for as long as your finger is on the screen.

It also means that things like populating a scrolling list are far slower on Android than on iOS, even if the Android device has theoretically better specs. Lists are populated by the CPU, and when you scroll through a list, the entire list is redrawn for each pixel it moves. On iOS, the list is treated as a 2D OpenGL surface; as you scroll through it, the GPU is responsible for updating it. Even on smartphones with fast processors, this sometimes causes noticeable UI sluggishness on Android. Worse, if the CPU is interrupted by something else, like updating a background task or running a garbage collection pass, the UI freezes for an instant.

Each successive version of Android has accelerated more graphics functions. Android 4 is significantly better than Android 2.3 in this regard. User input can still be blocked during CPU activity, and background tasks still don’t update UI elements while a foreground thread is doing so (I was disappointed to note that in Android 4, the clock still freezes when you swap pages in HTC Sense), but Android 4’s graphics performance is way, way, waaaaaaay better than it was in 2.3.

There are still some limitations, though. Because UI updates occur in the main execution thread, even in Android 4, background tasks can still end up being blocked while UI updates are in effect. This actually means there are some screen captures I wanted to show you, but can’t.


One place where Android falls down compared to iOS is in its built-in touch keyboard. Yes, hardcore geeks prefer physical keyboards, and Android was developed by hardcore geeks, which might be part of the reason Android’s touch keyboard is so lackluster.

One problem I had in Android 2.3 that I really, really hoped Android 4 would fix, and was sad to note that it didn’t, is that occasionally the touch keyboard just simply does not work.

Intermittently, usually once or twice a day, I would bring up an app–the SMS messenger, say, or a notepad, or the IMO IM messenger–and I’d start typing. The phone would buzz on each keypress, the key would flash like it does…but nothing would happen. No text would be entered.

And I’d quit the app, and relaunch it, and everything would be fine. Or it wouldn’t, and I’d quit and relaunch the app again, and if it still wasn’t fine, I’d reboot the phone, and force quit Google Maps in the task manager, and everything would be fine.

I tried very hard to get a screen capture of this, but it turns out the screen capture functionality doesn’t work when your finger is on the touch keyboard. As long as your finger is on the keyboard, the main execution thread is busy drawing, and background functions like screen grabs are blocked.

Speaking of the touch keyboard, there’s one place iOS really shines over Android, and that’s telling where your finger is on the screen.

That’s harder than it sounds. For one, the part of your finger that first makes contact with the screen might not be where you think it is; it’s not always right in the middle of your finger. For another, when your finger touches the screen, it’s not just a single x,y point that’s being activated. Your finger is big–when you have a high-resolution screen, it’s bigger than you think. A whole lot of area on the touch screen is being activated.

So a lot more deep programming voodoo goes on behind the scenes to figure out where you intended to touch than you might think.
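I have no idea what Apple’s or Google’s voodoo actually looks like under the hood, but the simplest version of the idea is easy to show: treat the patch of activated sensor cells as a blob and take its signal-weighted centroid. The readings here are made-up illustrative values, not real sensor data:

```python
# Toy touch-point estimation: a fingertip activates a whole patch of
# capacitive sensor cells, and we guess the intended point by taking the
# signal-weighted centroid of the patch.

def touch_centroid(readings):
    """readings: list of (x, y, strength) tuples for activated sensor cells."""
    total = sum(strength for _, _, strength in readings)
    cx = sum(x * strength for x, _, strength in readings) / total
    cy = sum(y * strength for _, y, strength in readings) / total
    return cx, cy

# A fingertip pressing mostly on cell (3, 4), bleeding into its neighbors:
blob = [(3, 4, 0.9), (4, 4, 0.6), (3, 5, 0.5), (2, 4, 0.2)]
print(touch_centroid(blob))  # ~(3.18, 4.23): between cells, weighted by signal
```

Real touch drivers do far more than this (tracking the blob over time, compensating for the angle of the finger, feeding the result through per-key probability models for the keyboard), and that, presumably, is where Apple’s voodoo earns its keep.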

The keys on an iPhone touch keyboard are physically smaller on the screen than they are on an Android screen, and Android screens are often bigger than iOS screens, too. You’d think that would mean it’s easier to type on an Android phone than an iPhone.

And you’d be wrong. I have found, consistently and repeatably, that my typing accuracy is much better on an iPhone than an Android phone, even when the Android phone has a bigger screen and a bigger keyboard. (One of my friends complains that I have fewer hilarious typos and bizarre autocorrects in my text messages now, since I switched back to the iPhone.)

The deep voodoo in iOS appears to be better than the deep voodoo in Android, and yes, I calibrated my touch screen in Android.

Now, you can get third-party keyboards on Android that are much better. The SwiftKey keyboard for Android is awesome, and I love it. It’s a lot more sophisticated than any other keyboard I’ve tried, no question.

But goddamnit, here’s the thing…if you pay hundreds of dollars for a smart phone with a built-in touch keyboard, you shouldn’t HAVE to buy a third-party keyboard to get good results. Yes, they exist, but that does not excuse the pathetic performance of the stock Android keyboard! It’s like saying “Well, this new operating system isn’t very good at loading files, but that’s not a problem because you can buy a third-party file loader.” The user Should. Not. Have. To. Do. This.

And even if you do buy one, what you pay doesn’t come close to covering the R&D that went into it. It’s a losing proposition for the developer AND for the users.


My new iPhone included iOS 6, which feels much more refined than Android on almost every level.

I would be remiss, however, if I didn’t mention what a lot of folks see as the Achilles’ heel of iOS: its Maps app.

Early iPhones used Google Maps, a solid piece of work that lacked some basic functionality, such as turn-by-turn directions. When I moved to Android, I wrote about how the Maps app in Android was head, shoulders, torso, and kneecaps above the Maps app in iOS, and it was one of the best things about Android.

And then Android 4 came along.

I don’t know what happened to Maps in Android 4. Maybe it’s just a problem on the Sensation. Maybe it’s an issue where the power manager is changing the processor clock speed and Maps doesn’t notice. I don’t know.

But in Android 4, the cheery synthesized female voice that the turn-by-turn directions used got a little…weird.

I mean, it always was weird; you should hear how it pronounces “Caesar E. Chavez Blvd” (something Maps in iOS 6 pronounces just fine, actually). But it got weirder, in that it would alternate between dragging like a record player (does anyone remember those?) with a bad motor and then suddenly speeding up until it sounded like it was snorting a mixture of helium and crystal meth.

It was a bit disconcerting: “In two hundred feet, turn llllllllllleeeeeeeeeeffffffffftttttttt oooooooooonnnnnnnnn twwwwwwwwwwwwweeeeeeeeeeennnnnnnnttttyyyyyyyy–SECONDAVENUEANDTHENTURNRIGHT!” There was never a rhyme or reason to it; it never happened consistently on certain words or in certain places.

Now, Maps on iOS has been slammed all over Hell and back by the Internetverse. Any mapping program is going to have glitches (Google places a street a friend of mine lives on about two and a half miles from where it actually is, in the middle of an empty field), but Apple’s Maps app apparently has a whole lot of very silly errors.

I say “apparently” because I haven’t personally encountered any yet, knock on data.

It was perhaps inevitable that Apple would eventually roll their own app (if by “roll their own” you mean “buy map data from TomTom”), because Google refused to license turn-by-turn navigation to Apple, presumably to create a product differentiation point–to make bloggers like me say things like “Wow, Google’s Android Maps app sure is better than the one on iOS!” That was a strategy that couldn’t last forever, and Google should have known it. Since losing the contract to supply Apple’s Maps app, Google has taken a hit larger than its total Android revenue; if they pissed that away because they didn’t want Apple to have turn-by-turn directions, well, they really couldn’t have expected anything else.

In Part 3 of this thing, I’ll talk about T-Mobile, and how they’re so hopelessly dysfunctional as a telecommunications provider that they make the North Korean government look like a model of efficiency.

Some thoughts on post-scarcity societies

One of my favorite writers at the moment is Iain M. Banks. Under that name, he writes science fiction set in a post-scarcity society called the Culture, where he deals with political intrigue and moral issues and technology and society on a scale that almost nobody else has ever tried. (In fact, his novel Use of Weapons is my all-time favorite book, and I’ve written about it at great length here.) Under the name Iain Banks, he writes grim and often depressing novels not related to science fiction, and wins lots of awards.

The Culture novels are interesting to me because they are imagination writ large. Conventional science fiction, whether it’s the cyberpunk dystopia of William Gibson or the bland, banal sterility of (God help us) Star Trek, imagines a world that’s quite recognizable to us….or at least to those of us who are white 20th-century Westerners. (It’s always bugged me that the alien races in Star Trek are not really very alien at all; they are more like conventional middle-class white Americans than even, say, Japanese society is, and way less alien than the Serra do Sol tribe of the Amazon basin.) They imagine a future that’s pretty much the same as the present, only more so; “Bones” McCoy, a physician, talks about how death at the ripe old age of 80 is part of Nature’s plan, as he rides around in a spaceship made by welding plates of steel together.


Image from Wikimedia Commons by Hill – Giuseppe Gerbino

In the Culture, by way of contrast, everything is made by atomic-level nanotech assembly processes. Macroengineering exists on a huge scale, so huge that the majority of the Culture’s citizens by far live on orbitals–artificially constructed habitats encircling a star. (One could live on a planet, of course, in much the way that a modern person could live in a cave if she wanted to; but why?) The largest spacecraft, General Systems Vehicles, have populations that range from the tens of millions to six billion or more. Virtually limitless sources of energy (something I’m planning to blog about later) and virtually unlimited technical ability to make just about anything from raw atoms mean that there is no such thing as scarcity; whatever any person needs, that person can have, immediately and for free.

And the definition of “person” goes much further, too; whereas in the Star Trek universe, people are still struggling with the idea that a sentient android might be a person, in the Culture, personhood theory (something else about which I plan to write) is the bedrock upon which all other moral and ethical systems are built. Many of the Culture’s citizens are drones or Minds–non-biological computers, of a sort, that range from about as smart as a human to millions of times smarter. Calling them “computers” really is an injustice; it’s about on par with calling a modern supercomputer a string of counting beads. Spacecraft and orbitals are controlled by vast Minds far in advance of unaugmented human intellect.

I had a dream, a while ago, about the Enterprise from Star Trek encountering a General Systems Vehicle, and the hilarity that ensued when they spoke to each other: “Why, hello, Captain Kirk of the Enterprise! I am the GSV Total Internal Reflection of the Culture. You came here in that? How…remarkably courageous of you!”

And speaking of humans…

The biological people in the Culture are the products of advanced technology just as much as the Minds are. They have been altered in many ways; their immune systems are far more resilient, they have much greater conscious control over their bodies; they have almost unlimited life expectancies; they are almost entirely free of disease and aging. Against this backdrop, the stories of the Culture take place.

Banks has written a quick overview of the Culture, and its technological and moral roots, here. A lot of the Culture novels are, in a sense, morality plays; Banks uses the idea of a post-scarcity society to examine everything from bioethics to social structures to moral values.


In the Culture novels, much of the society is depicted as pretty Utopian. Why wouldn’t it be? There’s no scarcity, no starvation, no lack of resources or space. Because of that, there’s little need for conflict; there’s neither land nor resources to fight over. There’s very little need for struggle of any kind; anyone who wants nothing but idle luxury can have it.

For that reason, most of the Culture novels concern themselves with Contact, the part of the Culture that deals with alien, non-Culture civilizations, and especially with Special Circumstances, the part of Contact whose dealings with other civilizations extend into the realm of covert manipulation, subterfuge, and dirty tricks.

Of which there are many, as the Culture isn’t the only technologically sophisticated player on the scene.

But I wonder…would a post-scarcity society necessarily be Utopian?

Banks makes a case, and I think a good one, for the notion that a society’s moral values depend to a great extent on its wealth and the difficulty, or lack thereof, of its existence. Certainly, there are parallels in human history. I have heard it argued, for example, that societies from harsh desert climates produce harsh moral codes, which is why we see commandments in Leviticus detailing at great length and with an almost maniacal glee whom to stone, when to stone them, and where to splash their blood after you’ve stoned them. As societies become more civil and more wealthy, as every day becomes less of a struggle to survive, those moral values soften. Today, even the most die-hard “execute all the gays” evangelical Biblical literalist rarely speaks out in favor of stoning women who are not virgins on their wedding night, or executing people for picking up a bundle of sticks on the Sabbath, or dealing with the crime of rape by putting both the rapist and the victim to death.

I’ve even seen it argued that as civilizations become more prosperous, their moral values must become less harsh. In a small nomadic desert tribe, someone who isn’t a team player threatens the lives of the entire tribe. In a large, complex, pluralistic society, someone who is too xenophobic, too zealous in his desire to kill anyone not like himself, threatens the peace, prosperity, and economic competitiveness of the society. The United States might be something of an aberration in this regard, as we are both the wealthiest and also the most totalitarian of the Western countries, but in the overall scope of human history we’re still remarkably progressive. (We are becoming less so, turning more xenophobic and rabidly religious as our economic and military power wane; I’m not sure that the one is directly the cause of the other but those two things definitely seem to be related.)

In the Culture novels, Banks imagines this trend as a straight line going onward; as societies become post-scarcity, they tend to become tolerant, peaceful, and Utopian to an extreme that we would find almost incomprehensible, Special Circumstances aside. There are tiny microsocieties within the Culture that are harsh and murderously intolerant, such as the Eaters in the novel Consider Phlebas, but they are also not post-scarcity; the Eaters have created a tiny society in which they have very little and every day is a struggle for survival.


We don’t have any models of post-scarcity societies to look at, so it’s hard to do anything beyond conjecture. But we do have examples of societies that had little in the way of competition, that had rich resources and no aggressive neighbors to contend with, and had very high standards of living for the time in which they existed that included lots of leisure time and few immediate threats to their survival.

One such society might be the Aztec empire, which spread through the central parts of modern-day Mexico during the 15th century. The Aztecs were technologically sophisticated and built a sprawling empire based on a combination of trade, military might, and tribute.

Because they required conquered peoples to pay vast amounts of tribute, the Aztecs themselves were wealthy and comfortable. Though they were not industrialized, they lacked for little. Even commoners had what was, for the time, a high standard of living.

And yet, they were about the furthest thing from Utopian it’s possible to imagine.

The religious traditions of the Aztecs were bloodthirsty in the extreme. So voracious was their appetite for human sacrifices that they would sometimes conquer neighbors just to capture a steady stream of sacrificial victims. Commoners could make money by selling their daughters for sacrifice. Aztec records document tens of thousands of sacrifices just for the dedication of a single temple.

So they wanted for little, had no external threats, had a safe and secure civilization with a stable, thriving economy…and they turned monstrous, with a contempt for human life and a complete disregard for human value that would have made Pol Pot blush. Clearly, complex, secure, stable societies don’t always move toward moral systems that value human life, tolerate diversity, and promote individual dignity and autonomy. In fact, the Aztecs, as they became stronger, more secure, and more stable, seemed to become more bloodthirsty, not less. So why is that? What does that say about hypothetical societies that really are post-scarcity?

One possibility is that where there is no conflict, people feel a need to create it. The Aztecs fought ritual wars, called “flower wars,” with some of their neighbors–wars not over resources or land, but whose purpose was to supply humans for sacrifice.

Now, flower wars might have had a prosaic function not directly connected with religious human sacrifice, of course. Many societies use warfare as a means of disposing of populations of surplus men, who can otherwise lead to social and political unrest. In a civilization that has virtually unlimited space, that’s not a problem; in societies which are geographically bounded, it is. (Even for modern, industrialized nations.)

Still, religion unquestionably played a part. The Aztecs were bloodthirsty at least to some degree because they practiced a bloodthirsty religion, and vice versa. This, I think, indicates that a society’s moral values don’t spring entirely from what is most conducive to that society’s survival. While the things that a society must do in order to survive, and the factors that are most valuable to a society’s functioning at whatever level it finds itself, will affect that society’s religious beliefs (and those beliefs will change to some extent as the needs of the society change), there would seem to be at least some corner of a society’s moral structures that are entirely irrational and completely divorced from what would best serve that society. The Aztecs may be an extreme example of this.

So what does that mean to a post-scarcity society?

It means that a post-scarcity society, even though it has no need of war or conflict, may still have both, despite the fact that they serve no rational purpose. There is no guarantee that a post-scarcity society must be a rationalist society; while reaching the point of post-scarcity does require rationality, at least in the scientific and technological arts, there’s no compelling reason to assume that a society that has reached that point must stay rational.

And a post-scarcity society that enshrines irrational beliefs, and has contempt for the value of human life, would be a very scary thing indeed. Imagine a society of limitless wealth and technological prowess whose morality is based on a literalistic interpretation of Leviticus, for instance, in which women really are stoned to death if they aren’t virgins on their wedding night. There wouldn’t necessarily be any compelling reason for a post-scarcity society not to adopt such beliefs; after all, human beings are a renewable resource too, so it would cost the society little to treat its members with indifference.

As much as I love the Culture (and the idea of post-scarcity society in general), I don’t think it’s a given that they would be Utopian.

Perhaps as we continue to advance technologically, we will continue to domesticate ourselves, so that the idea of being pointlessly cruel and warlike would seem quite horrifying to our descendants who reach that point. But if I were asked to make a bet on it, I’m not entirely sure which way I’d bet.

From iPhone to Android

A few weeks back, I decided I needed to replace my aging iPhone 3G.

I got the 3G when it first came out. My roommate at the time and I spent quite a while waiting in line in front of the Apple store, only to be told when we were two places from the door that the stock for the day had sold out. It took several more days of waiting in line before we were able to get our hands on one.

The iPhone 3G was the first smartphone I’d ever owned. I’ve been a cell phone user for quite some time, since the days of giant handsets with one-line LED displays, but I’d never owned anything even remotely approaching a smartphone before. For me, the iPhone was a game-changer. I have a notoriously bad sense of direction–it is not impossible for me to get lost just a few blocks from my home–and the GPS feature alone in the iPhone was a huge improvement in my quality of life.

Having real Web access was also a big deal. I do a lot of IT work, and the ability to get a call from a client and check the client’s Web site right there on the spot even if I’m not in front of a computer is huge.

But over the past few months, the 3G hasn’t been cutting it for me. The GPS is getting a little wonky, and the battery isn’t holding a very good charge any more, and the iOS 4.2 update made the phone feel a bit sluggish. On top of that, the amount that AT&T was charging me every month was enough to give me a nosebleed.

I spent a few weeks looking at several options: upgrading to an iPhone 4 and staying with AT&T, upgrading to an iPhone 4 and jumping to Verizon, and getting an Android phone.

Then Google announced the Android Open Accessory Development Kit, an open hardware development kit for Android, and that significantly tilted the balance toward Android. The kit is based on the Arduino prototyping board, which I already have experience developing and programming for.

I went into T-Mobile and found that I could save quite a lot of money every month with a contract from them if I went to Android, so that’s what I did.


The phone I got and will be talking about here is the HTC Sensation 4G, running Android 2.3. It’s been an interesting, and at times rough, transition. I’ve been surprised by a number of things about Android, both pleasantly and unpleasantly.

But before I get into that, let me talk about what Android isn’t.

OPEN: IT’S THE NEW CLOSED

Android isn’t a religion. To hear many folks talk about it online, you’d think that the choice of cell phone operating systems was a religious or philosophical choice. Android, we’re told gravely, is “open.” The iPhone operating system is “closed.” To use Android is to celebrate freedom and democracy and other wonderful things; to use an iPhone is to toil under tyranny and totalitarian rule.

It’s hooey, of course. Android isn’t open, at least not in the way the religious folks say it is.

Oh, it’s open in the sense that the source code is available, kind of, eventually, when Google says it is. This sort of freedom isn’t really equal, though; Google decides who gets it when, and which partners get to have it first.

But the thing to remember is that from the perspective of the folks who make cell phone software, you aren’t the customer. The handset makers are the customer. Android is open–for them. You, as the person who buys the cell phone, get exactly as much freedom and openness as the handset maker lets you have.

On my HTC Sensation, for instance, the cell phone bootloader is locked down tighter than a nun’s–ahem. It was possible, if I wanted to, for me to jailbreak my iPhone. My Sensation? Nope, no can do. Not even the Cyanogen team has figured out how to root it yet.

The same is true for some other Android phones as well. Supposedly, HTC has had a change of heart and will be unlocking its phones in the future. It’s not clear whether this will apply to me; I’ve read one article online that says all HTC phones will be unlocked, and another that says only HTC phones not tied to a particular network or under contract with a particular carrier will be unlocked.

On the iPhone, the fact that I could, if I chose, jailbreak my phone never mattered to me; I never saw any good reason to. With Android, the fact that I can’t jailbreak it is kind of a bother, and that brings me to the second issue with Android.

SON OF THE REVENGE OF CRAPWARE: IT CAME FROM BEYOND THE GRAVE

With Android, we’re told, there is more openness in software, too. Android programmers do not have to go through any particular approval process to get their apps on your phone. The iPhone App Store is tightly regulated; apps that Apple doesn’t like aren’t available. The Android app store is an open bazaar; anyone can make any sort of app at all.

That’s not 100% true. The carriers have coerced Google into removing apps they didn’t like from the Google app store.

More to the point, though, the openness is really more for the handset maker’s benefit than for yours. With Android, we are back to the bad old days of Windows XP and Windows Vista, when each computer maker tended to stuff their computers so full of demos and third-party software and their own support applications that the term “craplets” (crap applets) was coined to describe them.

Most computer manufacturers came to their senses, eventually, and cut it out. It didn’t help that some of this crapware, like the support application HP bundled onto its computers, contained security vulnerabilities that let hackers pwn the machines.

But Android phones often come so stuffed with pre-bundled crapware that, in my case at least, nearly half the available application memory is occupied right off the bat. Worse, unlike desktop crapware, the Android crapware can’t be removed without jailbreaking the phone. I’ll talk about some of that crapware in a bit.

So my experience with Android has been interesting. In the rest of this post, I’ll run down the differences I’ve found between using an Android phone and using an iOS phone, and rate the quality of everything from the handset design to the apps to the user interface.

Some Thoughts On Being Amazing

There’s a graphic floating around on the Internet right now that’s kind of bugging me.

It’s a pretty enough image, don’t get me wrong. It shows a beautiful woman standing in the falling snow, with words over it. The words are all spelled correctly, there’s no extraneous “Warning, the letter S is approaching!” apostrophe (the prevalence of which in common use is an ongoing source of annoyance to your humble scribe), and it uses a lovely script font. I’m not going to bother to re-post it here, but overall it’s not a badly done bit of Photoshop.

What bugs me is what the words say. They read, in that lovely script font:

If She’s Amazing, She’s Not Easy.
If She’s Easy, She’s Not Amazing.

And it pisses me right the fuck off.

Now, I don’t know if they mean “easy” as in “sexually promiscuous” or “easy” as in “easy to get close to.” It doesn’t really matter; both readings are pretty odious.

On the surface, I can kinda see what the artist intended, sorta, maybe. He or she was probably driving at a point that, in all fairness, is reasonable: if you think a person is amazing, you should be willing to invest in her (or him), and not necessarily expect that a relationship will come easily or without effort. To some extent, it’s a fair point; things worth having are worth working for.

But regardless of whether or not the unknown artist intended to make that point, I don’t think it’s the point that is actually being made.

If She’s Amazing, She’s Not Easy.
If She’s Easy, She’s Not Amazing.

Taken on its most superficial level–that is, with “easy” meaning “sexually promiscuous”–it’s simply old-fashioned, sex-negative slut-shaming of the most boring and tedious sort. I’ve met some folks who are sexually “easy,” at least for the right partners, who are pretty bloody amazing, thank you very much–smart, educated, driven, successful, literate, happy, fulfilled, insightful, incisive, and on at least one occasion even quite skilled at spinning fire. To suggest that a woman’s amazingness varies directly with how tightly she keeps her legs closed is misogynistic, sure, but it’s such a banal, humdrum sort of misogyny that it’s scarcely even worth talking about. Either the essential stupidity of such an attitude is glaringly self-evident to someone, or it’s entirely inaccessible to him. Either way, it’s so lacking in subtlety or depth that it’s not even interesting.

And it doesn’t even exaggerate misogyny to the point that it becomes social commentary, making misogyny a target of sarcastic ridicule the way this graphic does.[1]

But I am willing to give the person who created it the benefit of the doubt, and assume that such a blatant reading of sex-negative claptrap isn’t what was intended.

I think, though I could be wrong, that rather than trying to be patriarchal and sexist, the person who created the image was trying to say “An amazing woman won’t be easy to get close to, so one should be prepared to put in the work; a woman who is easy to get close to isn’t going to be nearly as amazing.”

And even that reading is pretty fucked up, if you ask me.

If She’s Amazing, She’s Not Easy.
If She’s Easy, She’s Not Amazing.

The first thing I thought when I read this was, “easy to whom?” A person who is amazing might very well be easy to get to know and to become close to, if she finds you to be amazing as well. Beneath the surface, there seems to be a deeply buried, tacit subtext of “I’m not terribly amazing myself, so it sure would be hard for me to get the attention of someone who is.”

And hell, sometimes being a person who takes risks, who engages the world, who is open and transparent, who is willing to run the risk of living a life unencumbered by a fortress of walls and defenses, is part of what makes a person amazing. Even my pet kitten, who lives in a world that is filled with joy and for whom every new person is a friend, knows that.

The flip side, the idea that a person who is easy to get close to won’t be amazing, is not only absurd, it’s a slap in the face to those who are amazing and who choose to live their lives openly and without fear. Writing off a person as not being sufficiently “amazing” merely because that person is easy to engage seems to me to be profoundly short-sighted.

There’s a deeper, more sinister kind of yuck buried in the sentiment as well.

If She’s Amazing, She’s Not Easy.
If She’s Easy, She’s Not Amazing.

Tucked neatly beneath the surface of this sentiment is an underlying assumption: that it is her job, as an amazing woman, not to be easy, and it is your job, as the person who is attracted to amazing women, to work to pierce that wall.

Yep, it’s the same thing we see in Chanel ads and swing clubs and women’s magazines at the grocery checkout: women are the gatekeepers, men are the pursuers. She is amazing, and her role is to make pursuit of her hard; you are the schlub who wants her, and it is your role to pursue her until you wear down her resistance. Don’t settle for second-best! Don’t take the woman who’s easy to catch! She won’t be as amazing as the woman who isn’t.

And that kind of gender-stereotypical rolecasting is, if anything, even more corrosive than the simpler, more boring kind of misogyny in the first reading. The fact that the elegantly dressed woman in the photo, standing out in the snow in her expensive cocktail dress, is conventionally pretty in a bland, Vogue-esque kind of way rather underscores that point.

At least I think so, anyway. But then, I seem to have a statistically disproportionate number of amazing people around me, so perhaps I’m just jaded.


[1] At least, I assume the Cinderella image is intended to mock misogyny. It certainly feels like social-commentary-through-comedic-exaggeration to me.

You knew it was coming: Watchmen

I have always had a very…special relationship with the Watchmen story.

I was first introduced to the story by Tracey Summerall, a woman who at the time was attending college in Sarasota, Florida. She also introduced me to the Terry Gilliam movie Brazil, among other cult classics, so as you might imagine this had a significant effect on my grasp of pop culture. (In fact, she had a map of the world up on the wall with the Watchmen comics carefully pinned against it, one issue directly over the country of Brazil, a juxtaposition that was not accidental.)


Tracey was my first crush, a fact which eventually led to the demise of our friendship. At that point, I was still young enough that I hadn’t yet learned some of the most basic and obvious, but by no means easy, tools of interpersonal relationships, among them “more communication is better than less communication,” “if you don’t ask for what you want you cannot reasonably expect to have what you want,” and “other people are not responsible for your unvoiced expectations.” In fact, my friendship with her was in many ways instrumental to my learning these things, and she is among the ten or so people who have most influenced the person I later became, though she never knew that, and those lessons came too late to save our friendship. (Funny how that can happen. As it turns out, I learned more than a decade later that she went into exactly the same line of work I went into–when she won a prominent design award in the industry. But I digress.)

Anyway, she introduced me to Watchmen, which at the time I thought was the most brilliant and amazing thing I’d ever seen. It wasn’t finished yet; only six of the twelve issues that made up the full story had been published, and the rest were plagued by long delays.

At the time, I lived in Ft. Myers, an hour and a half drive from Sarasota, and the only place in southwest Florida that carried Watchmen was in Sarasota. And they refused to say over the phone whether or not the next issues were available yet. So I’d get in my car each month and make the drive, and as often as not the next part was delayed by something or other and wouldn’t be there. It took, all in all, about a year and change for me to be able to read the whole story.

I still have the first edition, first printing comic book version of Watchmen. Not exactly in pristine shape, but that’s not really the point; I’m neither a collector nor a fan of comic books (and to this day Watchmen is one of only three graphic novels I’ve ever read).

So it’s fair to say that I went into the movie with some high expectations. Watchmen is rooted in a significant part of my personal history, and I have some attachments to it that no movie could reasonably ever be expected to live up to.


I’ve seen the movie twice now. The first time, I went to see it by myself; I gathered up all of my expectations and hopes and bittersweet memories and dragged them all down to the theater with me to see if what was up on the screen could do justice to my past.

When I got home, I posted on Twitter, “Back from Watchmen. Haven’t read it in years. I’d forgotten how brutal it was. Movie is good, but not brilliant.” and went to bed.

Before I went to see it, I didn’t read any of the critical reviews or commentary about the movie, and that was deliberate. Since then, of course, I’ve read a lot of reviews and endless commentary about the movie; Watchmen is, if nothing else, the most talked-about film to come along in a long time.

Some of that commentary makes sense, even if I don’t happen to agree with it. Some of it makes me shake my head and say “What?”

There’s a lot of that going around. To be fair, the movie studio didn’t really seem to have a grasp of what they were dealing with; according to several “behind the scenes” and “making of” articles I’ve read, what they wanted was a two-hour, PG-rated movie that could be the start of a whole new franchise.

What they ended up with, of course, is a sprawling, self-contained three-hour movie that barely avoided an NC-17 rating.

And really, it couldn’t be any other way. Seriously. What were they thinking? Any executive who thought Watchmen could be the next X-Men franchise clearly didn’t understand the story. Watchmen isn’t really a superhero story; it’s a brutal, ugly, and morally gray morality play, filled with characters who are at best deeply flawed and at worst morally reprehensible. The main character is a sociopath, for God’s sake! In one of the film’s more graphic scenes, one superhero beats and attempts to rape another superhero. (Actor Jeffrey Dean Morgan, who plays the superhero The Comedian, describes that particular scene as “three of the hardest days of filming I have ever had to do.”)

What did they think, that they’d be able to release Watchmen Origins: Rorschach a few years from now? What we learn about Rorschach’s past in Watchmen is exactly enough, kthx; anything more would be trawling through a sewer in a glass-bottom boat. The studio should be content with the merchandising tie-ins they’ve already done (“the Comedian deluxe collector figure comes with accessories and multiple guns,” the better to shoot pregnant women with) and be done with it.


One of the complaints I’ve heard that makes sense is about the soundtrack. That complaint I have to agree with; the soundtrack for the film is jarring and in some places incongruous. I understand why the choices were made; I understand what the intent was; I understand that part of the goal of the soundtrack was to ground the film in a particular time and, more importantly, in a particular psychological environment. The choices that were made are logical, but I think they were wrong; the audience members who are familiar with these songs are going to bring their own associations to them, and they may not be the associations that were intended. (That was definitely true in several cases for me.)

One of the complaints I’ve heard that doesn’t make sense is that the pacing of the film was wrong.

Watchmen is not a superhero movie. It is a deconstruction of superhero movies. It is a reaction against the comics of the 60s and 70s, which were forced by industry standards to conform to the Comics Code Authority’s inane Comics Code, which required, among other things, that in all comic books “If crime is depicted it shall be as a sordid and unpleasant activity,” “Criminals shall not be presented so as to be rendered glamorous or to occupy a position which creates a desire for emulation,” and perhaps most stupidly, “In every instance good shall triumph over evil and the criminal punished for his misdeeds.”

The superhero movies we’re familiar with–X-Men, Spider-Man, Batman–are all based on stories that are products of this code. The code has shaped what we expect from a superhero movie, and I don’t mean just in ways like “superheroes don’t commit rape” and “superheroes don’t shoot pregnant women.” We expect a certain style of storytelling, with epic battles and chases and exciting music. We expect noble deeds, good, evil, tension, climax, resolution.

That isn’t what Watchmen offers.

What Watchmen offers is the notion that our expectations are stupid, uninformed, and fucked-up from the start. What Watchmen offers is the observation that putting on a mask and beating up bad guys is a pretty fucked-up thing to do, and the fucked-up people who do this fucked-up thing are not likely to be noble in character. What Watchmen offers is the idea that life isn’t neatly divided along lines of good and evil; people are people, and often they’re fucked-up, and people do stuff–some of which is noble and some of which descends to atrocity.

And sometimes some of the stuff that people do is both at the same time, and sometimes it’s neither, and sometimes people just plain don’t give a fuck, and if that makes you uncomfortable, then that’s too bad. Against the backdrop of war and civil unrest and the possibility of nuclear annihilation, sometimes it really doesn’t matter whether you’re beating up purse snatchers; it’s all just rearranging deck chairs on the Titanic.

For people who walk into Watchmen expecting a superhero flick, there’s likely to be some grumbling. It isn’t one, even though it’s filled with folks in masks who beat up bad guys. Better, I think, to walk in expecting a mystery. We know how to deal with mystery movies; we expect a slower, more measured pace. We’re not looking for chase scenes and things blowing up. Though even that isn’t quite right; Watchmen starts out with a straight-ahead murder mystery, but in this story, context and subtext are everything.

And how, exactly, do we cope with the superhero who sees all of humanity as a kind of extended lifeboat dilemma and makes the obvious, logical, necessary, and thoroughly evil choice? The story dares us: Are your moral values as resolute as Rorschach’s, the sociopath who has nothing but contempt for human life yet is willing to die for the things he believes to be morally right? “No. Not even in the face of Armageddon. Never compromise,” he says. Would you? Or would you choose to become complicit in atrocity?

One of the reviews of Watchmen I’ve read refers to one of the characters as a “supervillain,” but is he? I don’t believe that he is, and more to the point, by labeling him as a supervillain I think the reviewer missed the entire point of the morality play. One of the unintended side-effects of the Comics Code is that it has left pop culture littered with superheroes who are incapable of making complex moral decisions, because they’ve never had to.


One thing I felt as I was watching the movie was a sense of disconnect from the emotional impact of the story. When I read the comic-book version, I recall feeling profoundly affected by it on an emotional level, and the emotional response I had to the story became so tangled up in the emotional landscape of all the things going on in my life at that time that now, more than twenty years later, they’re still difficult for me to unpack from each other.

The movie, which in many ways was faithful to the comic to the point of obsession, felt detached to me. The set design, the direction, the costumes, the settings, were all pitch-perfect, but somehow the movie lacked the immediate emotional resonance of the book for me. That might be in part because of my own familiarity with the story, or because the story belongs to a part of my life that is so distant that the person I was then is almost alien and incomprehensible to the person I am now, or because there’s just no way any re-interpretation of the story could ever match the impact of my first exposure to it. I’ve talked to people who didn’t read it first, and they don’t seem to find the movie as flat as I do, so I don’t know.

It is interesting to me how the limbic system can remain static for decades. In almost every way that’s relevant, I am not even remotely related to the person I was in 1986, to the point where I have trouble even understanding the person I used to be, yet the emotional reality of that person is still as clear and present as if it had happened yesterday. This sort of lizard-brain stickiness contributes, I think, to a great deal of human misery; we remember the emotions surrounding things long after the things themselves have faded, and as a result our recollections of people who have been important to us are stained by those emotions and become frozen like flies in amber. We remember arguments that passed a decade ago as clearly as if the door were still slamming, long after we have forgotten the things that drew us to the people around us in the first place. But again, I digress.

Watchmen is not a story that meets with the Comics Code Authority’s approval. It’s brooding and dark and morally gray, and the end of the story leaves the audience stranded in a moral quagmire with no way out. This is not your father’s tale of heroes and villains. “In every instance good shall triumph over evil and the criminal punished for his misdeeds”? In Watchmen, we’re left not really sure who is good and who is evil, if indeed those terms are even meaningful at all.

Watching a conventional superhero movie like Spider-Man or X-Men in the theater is a very different experience from seeing Watchmen; with movies like X-Men, you eat popcorn and the folks around you cheer and you leave the movie feeling excited and happy. There are moments of that in Watchmen, to be sure (both times I saw it, the audience cheered at Rorschach’s “None of you understand. I’m not locked up in here with you; you’re locked up in here with me!”) but Watchmen is the only movie involving superheroes I’ve ever seen where the audience reaction to the story’s final, climactic confrontation is stunned silence (or, the first time I saw it, someone crying). Complicity in atrocity comes easily, and the movie makes us complicit and then twists the knife.

That last confrontation did keep its emotional impact for me. In the end, the technical changes that were made to the storyline, the condensation of the background material, all that stuff doesn’t really matter, though I’m sure hard-core comic geeks will keep using these things in online dicksizing contests for generations to come.

What matters is that the movie achieved the objective of the comic. And in that, I’m by no small measure impressed. Is it a brilliant movie? No, it’s not; but it’s a brilliant story. And that’s what counts.

Linguistic musings

Axes seem to hold a special place in the collective consciousness of English speakers. Why is it, exactly, that we speak of axe murderers (usually in the context of “I’m not an…”), but we don’t attach the weapon of choice to the descriptions of other murderers? One never speaks of a knife murderer, or a gun murderer, or a blunt-instrument murderer…

Random musing of the day

Men and women both enjoy looking at magazines filled with photographs of scantily-clad women in sexually suggestive poses. Men want to fuck the models in the photographs; women want to be the models in the photographs.

Discuss.