Astonishing Beauty

The world around us is fractally beautiful. Not only is it filled with the most extraordinary, breathtaking beauty, but that beauty exists no matter what level you set your gaze upon. At any scale, at any magnification, beauty persists.

Look at a flower.

It’s beautiful–the colors, the symmetry, the shape. These things are all pleasing in their own right.

But look closer. Much, much closer. What will you find? An enormous array of tiny cells, in a proliferation of shapes and functions, each working with the ones around it to give the flower its form and color, all of them filled with activity. Inside every cell, an array of bogglingly complex molecular machines, running all the time, consuming energy, producing still more molecular machines, and always, always striving to survive and make more of themselves.

Now look up, from the microscopic to the macroscopic.


Image: NASA/Hubble Space Telescope team

This is NGC 2818, a magnificent planetary nebula in the southern constellation Pyxis. This and other planetary nebulae are the remnants of dying stars: stars that have fused all their available hydrogen fuel and can no longer support themselves against gravity. In its final act, such a star throws off its outer layers in a glowing, expanding shell of gas, leaving behind a dense ember called a white dwarf.

These stellar remnants are beautiful, but like that flower, they are fractally beautiful. In fact, they are connected with that flower. Most of the elements necessary for life are forged in the hearts of stars and flung into space by stellar deaths like this one, and the elements heavier than iron are forged in the still greater violence of novae and supernovae, whose unimaginable forces fuse lighter elements into heavier ones. The atoms in this flower, and in you and me, were birthed in fire and sent out into the universe, eventually coalescing into this sun, this solar system, this planet, at this place and this time, and becoming us and kittens and chocolate and motorcycles and ice cream sundaes.

The universe is both incomprehensibly huge and incomprehensibly fine-grained, and it’s beauty all the way down.

Even when we look at the same scale over time, we see beauty. Beauty is enduring. It emerges, over and over again, wherever there is the possibility of change.

Indeed, there is quite literally more beauty around us than we are capable of seeing. White flowers are richly colored, to eyes that can see in ultraviolet. The sky above our heads is a tapestry whose richness we could not recognize until we built machines to augment our feeble vision.

But it isn’t just the grandeur of the natural world. Beauty lurks in every corner. It hides in a tumbler filled with colored glass stones on a restaurant table.

Color is a myth, of course. It’s a perceptual invention, created by the sorting of light of different frequencies into neural impulses by our visual system, with sensors tuned to respond best to different wavelengths of light. It’s a crude approximation of the diversity of photons filling the air around us. These photons chart extraordinarily complex paths through the tumbler, reflecting and refracting, sometimes being absorbed or scattered, and we glance at this intricate mathematical dance of physics for a moment and then look away.

The complexity and beauty of the physical world is both breathtaking and ordinary. Breathtaking because it exists on scales we can scarcely begin to understand; ordinary because it surrounds us all the time, beauty so abundant we forget it’s even there.

Every moment of our lives is spent in a world so beautiful, so incredibly filled with marvels, that we are blessed with abundance beyond measure. I cannot help but feel that, should we become more mindful of it, the dull and ugly parts of the world will lift, just a bit. And perhaps, just perhaps, we will be that much less inclined to manufacture more of that dullness and ugliness.

We are here for only a brief time. Let us never forget how beautiful it is to be so privileged to exist in this place.

Oh, Joss: “Morality doesn’t exist without the fear of death”

A couple of years ago, during a lackadaisical time in my life when I was only running two businesses and wasn’t on tour to support a book I’d just coauthored, I sat down with my sweetie Zaiah and we watched all the episodes of the Joss Whedon television show Dollhouse over the course of a week or so.

The premise of the show, which isn’t really important to what I want to write about, concerns a technology that allows personalities, identities, and skills to be constructed in a computer (much as one might write a computer program) and then implanted in a person’s brain, such that that person takes on that identity and personality and has those skills. The television show followed a company that rented out custom-designed people, constructed in a bespoke fashion for clients’ jobs and then erased once those jobs were over. Need a master assassin, a perfect lover, a simulation of your dead wife, a jewel thief? No problem! Rent that exact person by the hour!

Anyway, in Episode 10 of the short-lived series, one of the characters objects to the idea of using personality transplants as a kind of immortality, telling another character, “morality doesn’t exist without the fear of death.” I cringed when I heard it.

And that’s the bit I want to talk about.


The New York Times has an article about research which purports to show that when reminded of their own mortality, people tend to cling to their ethical and moral values tightly. The article hypothesizes,

Researchers see in these findings implications that go far beyond the psychology of moralistic judgments. They propose a sweeping theory that gives the fear of death a central and often unsuspected role in psychological life. The theory holds, for instance, that a culture’s very concept of reality, its model of “the good life,” and its moral codes are all intended to protect people from the terror of death.

This seems plausible to me. Religious value systems–indeed, religions in general–provide a powerful defense against the fear of death. I remember when I first came nose to nose with the idea of my own mortality back when I was 12 or 13, how the knowledge that one day I would die filled me with stark terror, and how comforting religion was in protecting me from it. Now that I no longer have religious belief, the knowledge of the Void is a regular part of my psychological landscape. There is literally not a day that goes by that I am not aware of my own mortality.

But the idea that fear of death reminds people of their values, and causes them to cling more tightly to them, doesn’t show that there are no values without the fear of death.

As near as I can understand it, the statement “morality doesn’t exist without the fear of death” appears to be saying that without fear of punishment, we can’t be moral. (I’m inferring here that the fear of death is actually the fear of some kind of divine judgment post-death, which seems plausible given the full context of the statement: “That’s the beginning of the end. Life everlasting. It’s…it’s the ultimate quest. Christianity, most religion, morality….doesn’t exist, without the fear of death.”) This is a popular idea among some theists, but does it hold water?

The notion that there is no morality without the fear of death seems to me to rest on two foundational premises:

1. Morality is extrinsic, not intrinsic. It is given to us by an outside authority; without that outside authority, no human-derived idea about morality, no human-conceived set of values is any better than any other.

2. We behave in accordance with moral strictures because we fear being punished if we do not.

Premise 1 is a very common one. “There is no morality without God” is a notion those of us who aren’t religious never seem to stop hearing. There are a number of significant problems with this idea (Whose God? Which set of moral values? What if those moral values–“thou shalt not suffer a witch to live,” say, or “if a man also lie with mankind, as he lieth with a woman, both of them have committed an abomination: they shall surely be put to death,” or “whatsoever hath no fins nor scales in the waters, that shall be an abomination unto you”–cause you to behave reprehensibly to other people? What is the purpose of morality, if not to tell us how to be more excellent to one another rather than less?), but its chief difficulty lies in what it says about the nature of humankind.

It says that we are not capable of moral action, or even of recognizing moral values, on our own; we must be given morals from an outside authority, which becomes the definition of morality. I have spoken to self-identified Christians who say that without religion, nothing would prevent them from committing rape and murder at will; it is only the strictures of their religion that prevent them from doing so. I have spoken to self-identified Christians who say if they believed the Bible commanded them to murder children or shoot people from a clock tower, they would do it. (There is, unsurprisingly, considerable overlap between these two sets of self-identified Christians.) If it takes the edict of an outside force to tell you why it’s wrong to steal or rape or kill, I am unlikely to trust you with my silverware, much less my life. Folks who say either of these things seldom get invited back to my house.

The notion that the fear of death is a necessary component of moral behavior because without punishment, we will not be moral is, if anything, even more problematic. If the only thing making you behave morally is fear of punishment, I submit you’re not actually a moral person at all, no matter which rules of moral behavior you follow.

Morality properly flows from empathy, from compassion, from the recognition that other people are just as real as you are and just as worthy of dignity and respect. Reducing morality to a list of edicts we’ll be punished if we disobey means there is no need for empathy, compassion, charity, or respect–we aren’t moral people by exercising these traits, we’re moral by following the list of rules. If the list of rules tells us to stone gays, then by God, that’s what we’ll do.

An argument I hear all the time (and in these kinds of conversations, I do mean all the time) is “well, if there’s no God and no fear of Hell, who’s to say the Nazis were wrong in what they did?” It boggles me every single time I hear it. I cannot rightly apprehend the thought process that would lead to such a statement, in no small part because it seems to betray a boggling inability to allow empathy and compassion to be one’s moral signposts.

What it all comes down to, when you get down to brass tacks, is internal moral values vs. external moral values. When we can empathize with other human beings, even those who are different from us, and allow ourselves to fully appreciate their essential humanness, treating them ethically becomes easy. When we do not–and often, religious prescriptions on behavior explicitly tell us not to–it becomes impossible. An intrinsic set of moral values is predicated on that foundation of reciprocal recognition of one another’s humanness, worth, and dignity.

Those who say without God or without fear of punishment there can be no morality seem blind to that reciprocal recognition of one another’s humanness, worth, and dignity. And those folks scare me.

Some thoughts on the Seven Virtues

A while ago, I started talking about the Seven Deadly Sins.

I am not terribly good at them; in fact, it took a while to remember what they were (greed, envy, sloth, lust, gluttony, pride, and wrath). Of the seven, the only one at which I have any skill is lust; in fact, I’ve put so many character points into lust I’m still forced to make default rolls for all six others.

I got to thinking about the Seven Deadly Sins, and wondering if there were Seven Virtues to go along with them. Apparently, there are; a few hundred years after the list of vices caught hold, someone decided there should be a similar list of virtues, and made such a list by negating the vices. The virtue Chastity was proposed as the opposite of Lust, for example, and the virtue Humility as the opposite of Pride. (Some of the others don’t really make a lot of sense; proposing Kindness as Envy’s opposite ignores the fact that people can simultaneously feel envious and behave kindly. But no matter.)

The negative version of the Seven Deadly Sins didn’t really seem to catch on, so Catholic doctrine has embraced a different set of virtues: prudence, justice, temperance, courage, faith, hope, and charity.

I look at that list, and find it a bit…underwhelming. We’ve given Christianity two thousand years to come up with a cardinal list of virtues in human thought and deed, and that’s the best it can do? It’s almost as disappointing as the list of Ten Commandments, which forbids working on Saturday and being disrespectful to your parents but not, say, slavery or rape, as I talked about here.

Now, don’t get me wrong, some of the things on the list of virtues I heartily endorse. Courage, that’s a good one. Justice is another good one, though as often as not people have an unfortunate tendency to perpetrate the most horrifying atrocities in its name. (Handy hint for the confused: “justice” and “vengeance” aren’t the same thing, and in fact aren’t on speaking terms with one another.) Temperance in opposing injustice is not a virtue, hope is that thing at the bottom of Pandora’s jar of evils, and faith…well, the Catholic catechism says that faith means “we believe in God and believe all that he has said and revealed to us,” and furthermore that we believe all “that Holy Church proposes for our belief.” In this sense, to quote Mark Twain, faith is believing what you know ain’t so. (On the subject of hope, though, it should be mentioned that Hesiod’s epic poem about Pandora says of women, “From her is the race of women and female kind: of her is the deadly race and tribe of women who live amongst mortal men to their great trouble, no helpmates in hateful poverty, but only in wealth.” So it is without an exuberance of cynicism that I might suggest there is perhaps a synchronicity between the ancient Greek and modern Catholic thinkings on the subject of the fairer sex.)

In any event, it seems that, once again, the traditional institutions charged with the prescription of human morality have proven insufficient to the task. In my musings on the Ten Commandments, I proposed a set of ten commandments that might, all things considered, prove a better moral guideline than the ten we already have, and it is with the same spirit I’d like to propose a revised set of Seven Cardinal Virtues.

Courage. I quite like this one. In fact, to quote Maya Angelou, “Courage is the most important of all the virtues, because without courage you can’t practice any other virtue consistently. You can practice any virtue erratically, but nothing consistently without courage.” So this one stays; in fact, I think it moves to the head of the list.

Prudence is a bit of an odd duck. Most simply, it means something like “foresight,” or perhaps “right thinking.” The Catholic Education Site defines prudence as the intellectual virtue which rightly directs particular human acts, through rectitude of the appetite, toward a good end. But that seems a bit tail-recursive to me; a virtue is that which directs you to do good, and doing good means having these virtues…yes, yes, that’s fine and all, but what is good? You can’t define a thing in terms of a quality a person has and then define that quality in terms of that thing!

So perhaps it might be better to speak of Beneficence, which is the principle of making choices that, first, do no harm to others, and, second, seek to prevent harm to others. The principle of harm reduction seems a better foundation for an ethical framework than the principle of “right action” without any context for the “right” bit. (I’m aware that a great deal of theology attempts to provide context for the virtue of prudence, but I remain unconvinced; I would find, for example, it is more prudent to deny belonging to a religion than to be hanged for it, simply on the logic that it is difficult for dead Utopians to build Utopia…)

Justice is another virtue I like, though in implementation it can be a bit tricky. Justice, when it’s reduced to the notion of an eye for an eye, becomes mere retribution. If it is to be a virtue, it must be the sort of justice that seeks the elevation of all humankind, rather than a list of rules about which forms of retaliation are endorsed against whom; formal systems of justice, being invented and maintained by corruptible humans, all too easily become corrupt. A system which does not protect the weakest and most vulnerable people is not a just system.

Temperance needs to go. Moderation in the pursuit of virtue is no virtue, and passion in the pursuit of things which improve the lot of people everywhere is no vice. And this virtue too easily becomes a blanket prohibition; the Women’s Christian Temperance Union, who were anything but temperate in their zeal to eradicate alcohol, failed to acknowledge that drinking is not necessarily, of and by itself, intemperate; and their intemperance helped create organized crime in the US, a scourge we have still been unable to eradicate.

In its place, I would propose Compassion, and particularly, the variety of compassion that allows us to see the struggles of others, and to treat others with kindness wherever and whenever possible, to the greatest extent we are able. It is a virtue arising from the difficult realization that other people are actually real, and so deserve to be treated the way we would have them treat us.

Faith and Hope seem, to be frank, like poor virtues to me, at least as they are defined by Catholicism. (There is a broader definition of “faith,” used by mainline Protestant denominations, that has less to do with accepting the inerrancy of the Church in receiving divine revelation and more to do with an assurance that, even in the face of the unknown, it’s possible to believe that one will be okay; this kind of faith, I can get behind.) Indeed, an excess of faith of the dogmatic variety leads to all sorts of nasty problems, as folks who have faith their god wants them to bomb a busy subway might illustrate. And hope (in the Catholic sense of “desiring the kingdom of heaven and eternal life as our happiness, placing our trust in Christ’s promises and relying not on our own strength, but on the help of the grace of the Holy Spirit”) can lead to inaction in the face of real-world obstacles–if we believe that once we get past the grave, nothing can go wrong, we might be disinclined to pursue happiness or oppose injustice in the here and now.

I would suggest that better virtues might be Integrity and Empathy. Integrity as a virtue means acting in accordance with one’s own stated moral precepts; but there’s more to it than that. As a virtue, integrity also means acknowledging when others are right; being intellectually rigorous, and mindful of the traps of confirmation bias and anti-intellectualism; and being clear about what we know and what we hope. (When, for example, we state something we want to be true but don’t know is true as a fact, we are not behaving with integrity.)

Empathy in this context means, first and foremost, not treating other people as things. It is related to compassion, in that it recognizes the essential humanity of others. As a moral principle, it means acknowledging the agency and rights of others, as we would have them acknowledge our agency and our rights.

Charity is, I think, a consequence arising from the applications of justice, compassion, and empathy, rather than a foundational virtue itself. In its place, I propose Sovereignty, the assumption that the autonomy and self-determination of others is worthy of respect, and must not be infringed insofar as is possible without compromising one’s own self.

So bottom line, that gives us the following list of Seven Virtues: Courage, Beneficence, Justice, Compassion, Integrity, Empathy, and Sovereignty. I like this draft better than the one put forth by Catholicism. But coming up with a consistent, coherent framework of moral behavior is hard! What say you, O Interwebs?

Some Thoughts on Anti-Intellectualism as a Red Queen Problem

“Well, in our country,” said Alice, still panting a little, “you’d generally get to somewhere else — if you ran very fast for a long time, as we’ve been doing.”
“A slow sort of country!” said the Queen. “Now, here, you see, it takes all the running you can do, to keep in the same place. If you want to get somewhere else, you must run twice as fast as that!”
“I’d rather not try, please!” said Alice. “I’m quite content to stay here — only I am so hot and thirsty!”

— Lewis Carroll, Through the Looking Glass

“When we just saw that man, I think it was [biologist P.Z. Myers], talking about how great scientists were, I was thinking to myself the last time any of my relatives saw scientists telling them what to do they were telling them to go to the showers to get gassed … that was horrifying beyond words, and that’s where science – in my opinion, this is just an opinion – that’s where science leads you.”
— Ben Stein, Trinity Broadcasting Network interview, 2008

What do spam emails, AIDS denial, conspiracy theories, fear of GM foods, rejection of global warming, antivaccination crusades, and the public school district of Tucson, Arizona banning Shakespeare’s The Tempest have in common?


A typical spam message in my inbox

The answer is anti-intellectualism. Anti-intellectualism–the rejection of scientific study and reason as tools for understanding the physical world, and the derision of people who are perceived as educated or “intellectual”–has deep roots in the soil of American civil discourse. John Cotton, theological leader of the Puritans of Massachusetts Bay, wrote in 1642, “the more learned and witty you bee, the more fit to act for Satan will you bee”–a sentiment many Evangelical Protestants identify with today. (Tammy Faye Bakker, wife of the disgraced former televangelist Jim Bakker, once remarked “it’s possible to educate yourself right out of a personal relationship with Jesus Christ.”)

It seems weird that such a virulent streak of anti-intellectualism should be present in the world’s only remaining superpower, a position the US achieved largely on the merits of its technological and scientific innovation. Our economic, military, and political position in the world was secured almost entirely by our ability to discover, invent, and innovate…and yet there is a broad swath of American society that despises the intellectualism that makes that innovation possible in the first place.

Liberals in the US tend to deride conservatives as ignorant, anti-intellectual hillbillies. It’s arguably easy to see why; the conservative political party in the US is actively, openly hostile to science and intellectualism. The Republican Party of Texas has written into the party platform a passage opposing the teaching of critical thinking in public school. Liberals scoff at conservatives who deny the science of climate change, teach that the world and everything in it is six thousand years old, and seek to ban the teaching of evolutionary science…all while claiming that GMO foods are dangerous and vaccines cause autism. Anti-intellectualism is an equal-opportunity phenomenon that cuts across the entire American political landscape. The differences in liberal and conservative rejection of science are merely matters of detail.

So why is it such a pervasive part of American cultural dialog? There are a lot of reasons. Anti-intellectualism is built into the foundation of US culture; the Puritans, whose influence casts a very long shadow over the whole of US society, were famously suspicious of any sort of intellectual pursuit. They came to the New World seeking religious freedom, by which they meant the freedom to execute anyone they didn’t like, a practice their European contemporaries were insufficiently appreciative of; and the list of people they didn’t like included any unfortunate person suspected of learning or knowledge. That suspicion lingers; we’ve never succeeded in purging ourselves of it entirely.

Those of a cynical nature like to suggest that anti-intellectualism is politically convenient. It’s easier, so the narrative goes, to control a poorly educated populace, especially when that populace lacks even basic reasoning skills. If you’ve ever watched an evening of Fox News, it’s a difficult argument to rebut. One does not need to be all that cynical to suggest a party plank rejecting critical thinking skills is a very convenient thing to a political party that enshrines young-earth Creationism, for instance.

But the historical narrative and the argument from political convenience seem insufficient to explain the breathtaking aggressiveness of anti-intellectualism in the US today, particularly among political progressives and liberals, who are often smugly self-congratulatory about how successfully they have escaped the clutches of tradition and dogma.

I think there’s another factor, and that’s the Red Queen problem.

In evolutionary biology, the Red Queen hypothesis suggests that organisms in competition with each other must continue to evolve and adapt merely to maintain the status quo. When cheetahs prey on gazelles, the fastest cheetahs will be most successful at catching prey; the fastest gazelles will be most successful at escaping cheetahs. So natural selection favors faster and faster gazelles and cheetahs as each adapts to the other. Parasites evolve and become more efficient at parasitizing their hosts, which develop more efficient defenses against the parasites. I would like to propose that the same hypothesis can help explain anti-intellectualism, at least in part.
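
As a toy illustration (a cartoon of my own, not real population genetics), you can watch the dynamic play out in a few lines of code: both populations keep getting faster, yet neither ever pulls decisively ahead.

```python
import random

# Toy Red Queen simulation (illustrative only, not real population genetics):
# each generation, whichever population is currently "behind" gains a bit
# more speed, so both keep improving while the gap stays roughly constant.
cheetah_speed, gazelle_speed = 100.0, 100.0

for generation in range(1, 51):
    # Selection pressure is stronger on whichever side is currently losing.
    cheetah_speed += random.uniform(0, 2) + (1.0 if cheetah_speed <= gazelle_speed else 0.2)
    gazelle_speed += random.uniform(0, 2) + (1.0 if gazelle_speed <= cheetah_speed else 0.2)
    if generation % 10 == 0:
        print(f"gen {generation:3d}: cheetah {cheetah_speed:6.1f}, "
              f"gazelle {gazelle_speed:6.1f}, gap {cheetah_speed - gazelle_speed:+5.1f}")

# Both numbers climb steadily, yet neither side ever wins outright:
# all that running, just to stay in the same relative place.
```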

As we head into the twenty-first century, the sum total of human knowledge is increasing exponentially. When I was in college in the late 1980s and early 1990s, my neurobiology professors taught me things–adult human brains don’t grow new neurons, we’re all born with all the brain cells we’ll ever have–that we now know not to be true. And that means anyone who wants to be educated needs to keep learning new things all the time, just to stay in one place.

Those who reject science like to say that science is flawed because it changes all the time. How can we trust science, they say, when it keeps changing? In fact, what’s flawed is such critics’ estimation of how complicated the natural world is, and how much there is to know about it. Science keeps changing because we keep shining lights into previously dark areas of understanding.

But it’s really hard to keep up. A person who wants to stay abreast of the state of the art of human understanding has to run faster and faster and faster merely to stay in one place. It’s fatiguing, not just because it means constantly learning new things, but because it means constantly examining things you believed you already knew, re-assessing how new discoveries fit into your mental framework of how the world works.

For those without the time, inclination, tools, and habits to keep up with the state of human understanding, scientists look like priests. We must merely accept what they say, because we don’t have the tools to fact-check them. Their pronouncements seem arbitrary, and worse, inconsistent; why did they say we never grow new brain cells yesterday, only to say the exact opposite today? If two different scientists say two different things, who do you trust?

If you don’t race to keep up with the Red Queen, that’s what it is–trust. You must simply trust what someone else says, because trying to wrap your head around what’s going on is so goddamn fatiguing. And it’s easier to trust people who say the same thing every time than to trust people who say something different today than what they said yesterday. (Or who, worse yet, tell you “I don’t know” when you ask a question. “I don’t know” is a deeply unsatisfying answer. If a Bronze Age tribesman asks two people “What is the sun?” and one of them gives a fanciful story about a fire-god and a dragon, and the other says “I don’t know,” the answer about the fire-god and the dragon is far more satisfying, even in the complete absence of any evidence that fire-gods or dragons actually exist at all.)

Science is comfortable with the notion that models and frameworks change, and science is comfortable with “I don’t know” as an answer. Human beings, rather less so. We don’t want to run and run to keep up with the Red Queen. We also don’t want to hear “I don’t know” as an answer.

So science, then, becomes a kind of trust game, not that much different from the priesthood. We accept the pronouncements of priests and scientists alike when they tell us things we want to hear, and reject them when they don’t. Political conservatives don’t want to hear that our industrial activity is changing the global climate; liberals don’t want to hear that there’s nothing wrong with GMO food. Both sides of the political aisle find common ground in one place: running after the Red Queen is just plain too much work.

Some thoughts on machine learning: context-based approaches

A nontrivial problem in machine learning is organizing new information and recalling the appropriate information in a given circumstance. Simply storing information (cats are furry, balls bounce, water is wet) is relatively straightforward, and one common approach is to define the individual pieces of knowledge as objects which contain things (water, cats, balls) and descriptors (water is wet, water flows, water is necessary for life; cats are furry, cats meow, cats are egocentric little psychopaths).
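
To make that concrete, here is a minimal sketch of what such an object-with-descriptors store might look like. The code and names are purely illustrative, not drawn from any particular system.

```python
# A minimal sketch of "objects with descriptors" knowledge storage.
# All names here are illustrative, not from any particular system.
knowledge = {
    "water": {"is wet", "flows", "is necessary for life"},
    "cat":   {"is furry", "meows", "is an egocentric little psychopath"},
    "ball":  {"bounces", "is round"},
}

def knows(thing: str, descriptor: str) -> bool:
    """Simple retrieval: does the store say this descriptor applies to this thing?"""
    return descriptor in knowledge.get(thing, set())

print(knows("water", "is wet"))    # True
print(knows("ball", "is furry"))   # False
```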

This presents a problem with information storage and retrieval. Some information systems that have a specific function, such as expert systems that diagnose illness or identify animals, solve this problem by representing the information hierarchically as a tree, with the individual units of information at the tree’s leaves and a series of questions representing paths through the tree. For instance, an expert system that identifies animals might start with the question “is this animal a mammal?” A “yes” starts down one side of the tree, and a “no” starts down the other. At each node in the tree, another question identifies which branch to take—”Is the animal four-legged?” “Does the animal eat meat?” “Does the animal have hooves?” Each path through the tree is a series of questions that leads ultimately to a single leaf.
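
A toy version of such a decision tree, with made-up questions and animals, might look something like this:

```python
# A toy animal-identification tree in the spirit described above.
# Each internal node is a yes/no question; each leaf is an answer.
# The species and questions are placeholders, not a real expert system.
tree = ("Is the animal a mammal?",
        ("Does the animal eat meat?",
         ("Does the animal purr?", "cat", "wolf"),
         ("Does the animal have hooves?", "horse", "rabbit")),
        ("Does the animal have feathers?",
         ("Can the animal fly?", "sparrow", "penguin"),
         "lizard"))

def identify(node, ask):
    """Walk the tree, asking yes/no questions until we reach a leaf."""
    if isinstance(node, str):          # leaf: a single identification
        return node
    question, yes_branch, no_branch = node
    return identify(yes_branch if ask(question) else no_branch, ask)

# Example run with canned answers standing in for a user:
answers = {"Is the animal a mammal?": True,
           "Does the animal eat meat?": True,
           "Does the animal purr?": True}
print(identify(tree, lambda q: answers.get(q, False)))   # -> "cat"
```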

This is one of the earliest approaches to expert systems, and it’s quite successful for representing hierarchical knowledge and for performing certain tasks like identifying animals. Some of these expert systems are superior to humans at the same tasks. But the domain of cognitive tasks that can be represented by this variety of expert system is limited. Organic brains do not really seem to organize knowledge this way.

Instead, we can think of the organization of information in an organic brain as a series of individual facts that are context dependent. In this view, a “context” represents a particular domain of knowledge—how to build a model, say, or change a diaper. There may be thousands, tens of thousands, or millions of contexts a person can move within, and a particular piece of information might belong to many contexts.

What is a context?

A context might be thought of as a set of pieces of information organized into a domain in which those pieces of information are relevant to each other. Contexts may be procedural (the set of pieces of information organized into necessary steps for baking a loaf of bread), taxonomic (a set of related pieces of information arranged into a hierarchy, such as knowledge of the various birds of North America), hierarchical (the set of information necessary for diagnosing an illness), or simply related to one another experientially (the set of information we associate with “visiting grandmother at the beach”).

Contexts overlap and have fuzzy boundaries. In organic brains, even hierarchical or procedural contexts will have extensive overlap with experiential contexts—the context of “how to bake bread” will overlap with the smell of baking bread, our memories of the time we learned to bake bread, and so on. It’s probably very, very rare in an organic brain that any particular piece of information belongs to only one context.

In a machine, we might represent this by creating a structure of contexts CX (1,2,3,4,5,…n) where each piece of information is tagged with the contexts it belongs to. For instance, “water” might appear in many contexts: a context called “boating,” a context called “drinking,” a context called “wet,” a context called “transparent,” a context called “things that can kill me,” a context called “going to the beach,” and a context called “diving.” In each of these contexts, “water” may be assigned different attributes, whose relevance is assigned different weights based on the context. “Water might cause me to drown” has a low relevance in the context of “drinking” or “making bread,” and a high relevance in the context of “swimming.”
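
One way such a structure might be sketched in code (the contexts, weights, and threshold are all invented for the sake of illustration):

```python
# A sketch of context-tagged knowledge: each fact's attributes carry a
# relevance weight *per context*. Context names and weights are invented
# for illustration only.
water = {
    "can cause drowning": {"swimming": 0.9, "diving": 0.9, "drinking": 0.05, "baking bread": 0.01},
    "is necessary for life": {"drinking": 0.9, "going to the beach": 0.2, "baking bread": 0.5},
    "is wet": {"going to the beach": 0.6, "swimming": 0.7, "drinking": 0.3},
}

def relevant_attributes(fact: dict, context: str, threshold: float = 0.4):
    """Return the attributes of a fact that matter in the given context."""
    return sorted(attr for attr, weights in fact.items()
                  if weights.get(context, 0.0) >= threshold)

print(relevant_attributes(water, "swimming"))   # ['can cause drowning', 'is wet']
print(relevant_attributes(water, "drinking"))   # ['is necessary for life']
```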

In a contextually based information storage system, new knowledge is gained by taking new information and assigning it correctly to relevant contexts, or creating new contexts. Contexts themselves may be arranged as expert systems or not, depending on the nature of the context. A human doctor diagnosing illness might have, for instance, a diagnostic context that behaves in some ways like a diagnostic expert system; a doctor might ask a patient questions about his symptoms, and arrive at her conclusion by following the answers down to a single possible diagnosis. This process might be informed by past contexts, though; if she has just seen a dozen patients with norovirus, her knowledge of those past diagnoses, her understanding of how contagious norovirus is, and her observation of the similarity of this new patient’s symptoms to those previous patients’ symptoms might allow her to bypass a large part of the decision tree. Indeed, it is possible that a great deal of what we call “intuition” is actually the ability to make observations and use heuristics that allow us to bypass parts of an expert system tree and arrive at a leaf very quickly.

But not all types of cognitive tasks can be represented as traditional expert systems. Tasks that require things like creativity, for example, might not be well represented by highly static decision trees.

When we navigate the world around us, we’re called on to perform large numbers of cognitive tasks seamlessly and to be able to switch between them effortlessly. A large part of this process might be thought of as context switching. A context represents a domain of knowledge and information—how to drive a car or prepare a meal—and organic brains show a remarkable flexibility in changing contexts. Even in the course of a conversation over a dinner table, we might change contexts dozens of times.

A flexible machine learning system needs to be able to switch contexts easily as well, and deal with context changes resiliently. Consider a dinner conversation that moves from art history to the destruction of Pompeii to a vacation that involved climbing mountains in Hawaii to a grandparent who lived on the beach. Each of these represents a different context, but the changes between contexts aren’t arbitrary. If we follow the normal course of conversations, there are usually trains of thought that lead from one subject to the next; and these trains of thought might be represented as information stored in multiple contexts. Art history and Pompeii are two contexts that share specific pieces of information (famous paintings) in common. Pompeii and Hawaii are contexts that share volcanoes in common. Understanding the organization of individual pieces of information into different contexts is vital to understanding the shifts in an ordinary human conversation; where we lack information—for example, if we don’t know that Pompeii was destroyed by a volcano—the conversation appears arbitrary and unconnected.
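
A rough sketch of how shared information licenses those transitions (the contexts and their contents are toy examples of my own):

```python
# A sketch of why conversational drift looks connected when contexts share
# information. Contexts and their contents are toy examples.
contexts = {
    "art history": {"famous paintings", "fresco", "museum"},
    "Pompeii":     {"famous paintings", "fresco", "volcano", "Roman ruins"},
    "Hawaii":      {"volcano", "beach", "mountain climbing"},
    "grandmother": {"beach", "family", "old photographs"},
}

def shared(a: str, b: str) -> set:
    """The pieces of information two contexts have in common."""
    return contexts[a] & contexts[b]

# Each hop in the dinner conversation is licensed by some shared information:
conversation = ["art history", "Pompeii", "Hawaii", "grandmother"]
for here, there in zip(conversation, conversation[1:]):
    print(f"{here} -> {there}: via {shared(here, there) or 'no obvious link'}")

# Someone who doesn't know Pompeii was destroyed by a volcano is, in effect,
# missing "volcano" from their Pompeii context, and the hop to Hawaii
# looks arbitrary to them.
```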

There is a danger in a system being too prone to context shifts; it meanders endlessly, unable to stay on a particular cognitive task. A system that changes contexts only with difficulty, on the other hand, appears rigid, even stubborn. We might represent focus, then, in terms of how strongly (or not) we cling to whatever context we’re in. Dustin Hoffman’s character in Rain Man possessed a cognitive system that clung very tightly to the context he was in!

Other properties of organic brains and human knowledge might also be represented in terms of information organized into contexts. Creativity is the ability to find connections between pieces of information that normally exist in different contexts, and to find commonalities of contextual overlap between them. Perception is the ability to assign new information to relevant contexts easily.

Representing contexts in a machine learning system is a nontrivial challenge. It is difficult, to begin with, to determine how many contexts might exist. As a machine entity gains new information and learns to perform new cognitive tasks, the number of contexts in which it can operate might increase indefinitely, and the system must be able to assign old information to new contexts as it encounters them. If we think of each new task we might want the machine learning system to be able to perform as a context, we need to devise mechanisms by which old information can be assigned to these new contexts.

Organic brains, of course, don’t represent information the way computers do. Organic brains represent information as neural traces—specific activation pathways among collections of neurons.

These pathways become biased toward activation when we are in situations similar to those where they were first formed, or similar to situations in which they have been previously activated. For example, when we talk about Pompeii, if we’re aware that it was destroyed by a volcano, other pathways pertaining to our experiences with or understanding of volcanoes become biased toward activation—and so, for example, our vacation climbing the volcanoes in Hawaii comes to mind. When others share these same pieces of information, their pathways similarly become biased toward activation, and so they can follow the transition from talking about Pompeii to talking about Hawaii.

This method of encoding and recalling information makes organic brains very good at tasks like pattern recognition and associating new information with old information. In the process of recalling memories or performing tasks, we also rewrite those memories, so the process of assigning old information to new contexts is transparent and seamless. (A downside of this approach is information reliability; the more often we access a particular memory, the more often we rewrite it, so paradoxically, the memories we recall most often tend to be the least reliable.)

Machine learning systems need a way of tagging individual units of information with contexts. This becomes complex from an implementation perspective when we recall that simply storing a bit of information with descriptors (such as water is wet, water is necessary for life, and so on) is not sufficient; each of those descriptors has a value that changes depending on context. Representing contexts as a simple array CX (1,2,3,4,…n) and assigning individual facts to contexts (water belongs to contexts 2, 17, 43, 156, 287, and 344) is not enough. The properties associated with water will have different weights—different relevancies—depending on the context, as in the sketch above.

Machine learning systems also need a mechanism for recognizing contexts (it would not do for a general purpose machine learning system to respond to a fire alarm by beginning to bake bread) and for following changes in context without becoming confused. Additionally, contexts themselves are hierarchical; if a person is driving a car, that cognitive task will tend to override other cognitive tasks, like preparing notes for a lecture. Attempting to switch contexts in the middle of driving can be problematic. Some contexts, therefore, are more “sticky” than others, more resistant to switching out of.
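
One simple way to sketch that stickiness (the context names and numbers are invented; this is an illustration, not a proposal for real values):

```python
# A sketch of "sticky" contexts: the system switches only when the evidence
# for a new context clearly beats the current one plus its stickiness.
# Stickiness values are invented for illustration.
STICKINESS = {"driving a car": 0.8, "preparing lecture notes": 0.2, "dinner conversation": 0.1}

def maybe_switch(current: str, current_relevance: float,
                 candidate: str, candidate_relevance: float) -> str:
    """Stay in the current context unless the candidate clears the stickiness bar."""
    bar = current_relevance + STICKINESS.get(current, 0.0)
    return candidate if candidate_relevance > bar else current

# Driving resists interruption; idle chat does not.
print(maybe_switch("driving a car", 0.5, "preparing lecture notes", 1.0))  # driving a car
print(maybe_switch("dinner conversation", 0.5, "driving a car", 1.0))      # driving a car
```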

A context-based machine learning system, then, must be able to recognize context and prioritize contexts. Context recognition is itself a nontrivial problem, based on recognition of input the system is provided with, assignment of that input to contexts, and seeking the most relevant context (which may in most situations be the context with greatest overlap with all the relevant input). Assigning some cognitive tasks, such as diagnosing an illness, to a context is easy; assigning other tasks, such as natural language recognition, processing, and generation in a conversation, to a context is more difficult to do. (We can view engaging in natural conversation as one context, with the topics of the conversation belonging to sub-contexts. This is a different approach than that taken by many machine conversational approaches, such as Markov chains, which can be viewed as memoryless state machines. Each state, which may correspond for example to a word being generated in a sentence, can be represented by S(n), and the transition from S(n) to S(n+1) is completely independent of S(n-1); previous parts of the conversation are not relevant to future parts. This creates limitations, as human conversations do not progress this way; previous parts of a conversation may influence future parts.)
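
For comparison, here is a toy word-level Markov chain; notice that the next word depends only on the current word, which is exactly the memorylessness described above. The training text is a made-up snippet, purely for illustration.

```python
import random
from collections import defaultdict

# A toy word-level Markov chain: the next word depends *only* on the current
# word. Training text is an invented snippet.
text = ("pompeii was destroyed by a volcano . we climbed a volcano in hawaii . "
        "my grandmother lived on a beach in hawaii .").split()

transitions = defaultdict(list)
for current_word, next_word in zip(text, text[1:]):
    transitions[current_word].append(next_word)

def generate(start: str, length: int = 12) -> str:
    word, output = start, [start]
    for _ in range(length):
        choices = transitions.get(word)
        if not choices:                  # dead end: no observed successor
            break
        word = random.choice(choices)    # S(n+1) depends only on S(n)
        output.append(word)
    return " ".join(output)

print(generate("pompeii"))
# Earlier output has no influence on the next step, so the chain happily
# wanders off-topic; it has no notion of the conversation's context.
```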

Context seems to be an important part of flexibility in cognitive tasks, and thinking of information in terms not just of object/descriptor or decision trees but also in terms of context may be an important part of the next generation of machine learning systems.

Some thoughts on government funding for research

Every time you buy a hard drive, some of your money goes to the German government.

That’s because in the late 1980s, a physicist named Peter Grünberg at the Forschungszentrum Jülich (Jülich Research Center) made a rather odd discovery.

The Jülich Research Center is a government-funded German research facility that explores nuclear physics, geoscience, and other fields. There’s a particle accelerator there, and a neutron scattering reactor, and not one or two or even three but a whole bunch of supercomputers, and a magnetic confinement fusion tokamak, and a whole bunch of other really neat and really expensive toys. All of the Center’s research money comes from the government–half from the German federal government and half from the Federal State of North Rhine-Westphalia.

Anyway, like I was saying, in the late 1980s, Peter Grünberg made a rather odd discovery. He was exploring quantum physics, and found that in a material made of several layers of magnetic and non-magnetic materials, if the layers are thin enough (and by “thin enough” I mean “only a few atoms thick”), the material’s resistance changes dramatically when it’s exposed to very, very weak magnetic fields.

There’s a lot of deep quantum voodoo about why this is. Wikipedia has this to say on the subject:

If scattering of charge carriers at the interface between the ferromagnetic and non-magnetic metal is small, and the direction of the electron spins persists long enough, it is convenient to consider a model in which the total resistance of the sample is a combination of the resistances of the magnetic and non-magnetic layers.

In this model, there are two conduction channels for electrons with various spin directions relative to the magnetization of the layers. Therefore, the equivalent circuit of the GMR structure consists of two parallel connections corresponding to each of the channels. In this case, the GMR can be expressed as

Here the subscript of R denote collinear and oppositely oriented magnetization in layers, χ = b/a is the thickness ratio of the magnetic and non-magnetic layers, and ρN is the resistivity of non-magnetic metal. This expression is applicable for both CIP and CPP structures.

Make of that what you will.
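
For the curious, the gist of the two-channel resistor model is simple enough to sketch in a few lines of code. This is the textbook toy version, not the full expression the article gives, and the numbers are arbitrary:

```python
# Toy two-current model of giant magnetoresistance (a simplification, not the
# full expression quoted above): each magnetic layer offers a small resistance
# to electrons whose spin matches its magnetization and a large one to
# electrons whose spin doesn't. The two spin channels conduct in parallel.
R_SMALL, R_LARGE = 1.0, 5.0   # arbitrary illustrative values, in ohms

def parallel(r1: float, r2: float) -> float:
    return r1 * r2 / (r1 + r2)

# Magnetizations aligned: one spin channel sails through both layers cheaply,
# the other pays the large resistance twice.
r_aligned = parallel(2 * R_SMALL, 2 * R_LARGE)

# Magnetizations opposed: every electron pays once small, once large.
r_opposed = parallel(R_SMALL + R_LARGE, R_SMALL + R_LARGE)

gmr_ratio = (r_opposed - r_aligned) / r_aligned
print(f"aligned: {r_aligned:.2f} ohm, opposed: {r_opposed:.2f} ohm, "
      f"change: {gmr_ratio:.0%}")   # a large relative change from a tiny field
```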


Conservatives and Libertarians have a lot of things in common. In fact, for all intents and purposes, libertarians in the United States are basically conservatives who are open about liking sex and drugs. (Conservatives and libertarians both like sex and drugs; conservatives just don’t cop to it.)

One of the many areas they agree on is that the government should not be funding science, particularly “pure” science with no obvious technological or commercial application.

Another thing they have in common is they don’t understand what science is. In the field of pure research, you can never tell what will have technological or commercial application.

Back to Peter Grünberg. He discovered that quantum mechanics makes magnets act really weird, and in 2007 he shared a Nobel Prize with French physicist Albert Fert, a researcher at the French Centre national de la recherche scientifique (French National Centre for Scientific Research), France’s largest government-funded research facility.

And it turns out this research had very important commercial applications:

You know how in the 80s and 90s, hard drives were these heavy, clunky things with storage capacities smaller than Rand Paul’s chances at ever winning the Presidency? And then all of a sudden they were terabyte this, two terabyte that?

Some clever folks figured out how to use this weird quantum mechanics voodoo to make hard drive heads that could respond to much smaller magnetic fields, meaning more of them could be stuffed on a magnetic hard drive platter. And boom! You could carry around more storage in your laptop than used to fit in a football stadium.

It should be emphasized that Peter Grünberg and Albert Fert were not trying to invent better hard drives. They were government physicists, not Western Digital employees. They were exploring a very arcane subject–what happens to magnetic fields at a quantum level–with no idea what they would find, or whether it would be applicable to anything.


So let’s talk about your money.

When it became obvious that this weird quantum voodoo did have commercial possibility, the Germans patented it. IBM was the first US company to license the patent; today, nearly all hard drives license giant magnetoresistance patents. Which means every time you buy a hard drive, or a computer with a hard drive in it, some of your money flows back to Germany.

Conservatives and libertarians oppose government funding for science because, to quote the Cato Institute,

[G]overnment funding of university science is largely unproductive. When Edwin Mansfield surveyed 76 major American technology firms, he found that only around 3 percent of sales could not have been achieved “without substantial delay, in the absence of recent academic research.” Thus some 97 percent of commercially useful industrial technological development is, in practice, generated by in-house R&D. Academic science is of relatively small economic importance, and by funding it in public universities, governments are largely subsidizing predatory foreign companies.

Make of that what you will. I’ve read it six times and I’m still not sure I understand the argument.

The Europeans are less myopic. They understand two things the Americans don’t: pure research is the necessary foundation for a nation’s continued economic growth, and private enterprise is terrible at funding pure research.

Oh, there are a handful of big companies that do fund pure research, to be sure–but most private investment in research comes after the pure, no-idea-if-this-will-be-commercially-useful, let’s-see-how-nature-works variety.

It takes a lot of research and development to get from the “Aha! Quantum mechanics does this strange thing when this happens!” to a gadget you have in your home. That development takes money too, and it’s the sort of research private enterprise excels at. In fact, the Cato Institute cites many examples of biotechnology and semiconductor research that are privately funded, but these are types of research that generally already have a clear practical value, and they take place after the pure research upon which they rest.

So while the Libertarians unite with the Tea Party to call for the government to cut funding for research–which is working, as government research grants have fallen for the last several years in a row–the Europeans are ploughing money into their physics labs and research facilities and the Large Hadron Collider, which I suspect will eventually produce a stream of practical, patentable ideas…and every time you buy a hard drive, some of your money goes to Germany.

Modern societies thrive on technological innovation. Technological innovation depends on understanding the physical world–even when it seems at first like there aren’t any obvious practical uses for what you learn. They know that; we don’t. I think that’s going to catch up with us.

Wrong in the age of Google: Memes as social identity

A short while ago, I published a tweet on my Twitter timeline that was occasioned by a pair of memes I saw posted on Facebook:

The memes in question have both been circulating for a while, which is terribly disappointing now that we live in the Golden Age of Google. They’re being distributed over an online network of billions of globally-connected devices…an online network of billions of globally-connected devices which lets people discover in just a few seconds that they aren’t actually true.

A quick Google search shows both of these memes, which have been spread across social media countless times, are absolute rubbish.

The quote attributed to Albert Einstein appears to have originated with a self-help writer named Matthew Kelly, who falsely attributed it to Einstein in what was probably an attempt to make it sound more legitimate. It doesn’t even sound like something Einstein would have said.

The second is common on conservative blogs and complains that Obamacare (or, sometimes, Medicaid) offers free health coverage to undocumented immigrants. In fact, Federal law bars undocumented immigrants from receiving Federal health care services or subsidies for health insurance, with just one exception: Medicaid will pay hospitals to deliver babies of undocumented mothers (children born in the United States are legal US citizens regardless of the status of their parents).

Total time to verify both of these memes on Google: less than thirty seconds.

So why, given how fast and easy it is to verify a meme before reposting it, does nobody ever do it? Why do memes that can be demonstrated to be false in less time than it takes to order a hamburger at McDonald’s still get so much currency?

The answer, I think, is that it doesn’t matter whether a meme is true. It doesn’t matter to the people who post memes and it doesn’t matter to the people who read them. Memes aren’t about communication, at least not communication of facts and ideas. They are about social identity.


Viewed through the lens of social identity, memes suddenly make sense. The folks who spread them aren’t trying to educate, inform, or communicate ideas. Memes are like sigils on a Medieval lord’s banner: they indicate identity and allegiance.

These are all memes I’ve seen online in the last six weeks. What inferences can we make about the people who posted them? These memes speak volumes about the political identities of the people who spread them; their truthfulness doesn’t matter. We can talk about the absurdity of Oprah Winfrey’s reluctance to pay taxes or the huge multinational banks that launder money for the drug cartels, and both of those are conversations worth having…but they aren’t what the memes are about.

It’s tempting to see memes as arguments, especially because they often repeat talking points of arguments. But I submit that’s the wrong way to view them. They may contain an argument, but their purpose is not to argue; they are not a collective debate on the merits of a position.

Instead, memes are about identifying the affiliations of the folks who post them. They’re a way of signaling in-group and out-group status. That makes them distinct from, say, the political commentary in Banksy’s graffiti, which I think is more a method of making an argument. Memes are a mechanism for validating social identity. Unlike graffiti, there’s no presupposition the memes will be seen by everyone; instead, they’re seen by the poster’s followers on social media–a self-selecting group likely to already identify with the poster.

Even when they’re ridiculously, hilariously wrong. Consider this meme, for example. It shows a photograph of President Barack Obama receiving a medal from the king of Saudi Arabia.

The image is accurate, though the caption is not. The photo shows Barack Obama receiving the King Abdul Aziz Order of Merit from King Abdullah. It’s not unconstitutional for those in political office to receive gifts from foreign entities, provided those gifts are not kept personally, but are turned over to the General Services Administration or the National Archives.

But the nuances, like I said, don’t matter. It doesn’t even matter that President George W. Bush received the exact same award while he was in office:

If we interpret memes as a way to distribute facts, the anti-Obama meme is deeply hypocritical, since the political conservatives who spread it aren’t bothered that a President on “their” side received the same award. If we see memes as a way to flag political affiliation, like the handkerchiefs some folks in the BDSM community wear in their pockets to signal their interests, it’s not. By posting it, people are signaling their political in-group.

Memes don’t have to be self-consistent. The same groups that post this meme:

also tend by and large to support employment-at-will policies giving employers the right to fire employees for any reason, including reasons that have nothing to do with on-the-job performance…like, for instance, being gay, or posting things on Facebook the employer doesn’t like.

Memes do more than advertise religious affiliation; they signal social affiliation as well.

Any axis along which a sharp social division exists will, I suspect, generate memes. I also suspect, though I think the phenomenon is probably too new to be sure, that times of greater social partisanship will be marked by wider and more frequent distribution of memes, and issues that create sharper divides will likewise lead to more memes.

There are many ideas that are “identity politics”–ideas that are held not because they’re supported by evidence, but simply because they are a cost of entry to certain groups. These ideas form part of the backbone of a group; they serve as a quick litmus test of whether a person is part of the out-group or the in-group.

For example, many religious conservatives reflexively oppose birth control for women, even though the majority of religiously conservative women, like the majority of women in the US at large, use it. Liberals reflexively oppose nuclear power, even though it is by far the safest source of power on the basis of lives lost per terawatt hour of electricity produced. The arguments used to support these ideas (“birth control pills cause abortions,” “nuclear waste is too dangerous to deal with”) are almost always empirically, demonstrably false, but that’s irrelevant. These ideas are part of a core set of values that define the group; holding them is about communicating shared values, not about true and false.

Unfortunately, these core identity ideas often lead directly not only to misinformation and a distorted worldview, but to actual human suffering. Opposition to vaccination and genetically modified foods are identity ideas among many liberals; conservatives oppose environmental regulation and deny human involvement in climate change as part of their identity ideas. These ideas have already led to human suffering and death, and are likely to lead to more.

Human beings are social animals capable of abstract reasoning, which perhaps makes it inevitable that abstract ideas are so firmly entrenched in our social structures. Ideas help define our social structures, identify in-group and out-group members, and signal social allegiances. The ideas we present, even when they take the form of arguments, are often not attempts at dialog so much as flags that let others know which lord we march for. Social media memes are, in that way, more accurately seen as house sigils than social discourse.

What my cat teaches me about divine love

This is Beryl.

Beryl is a solid blue Tonkinese cat. He shares a home with (I would say he belongs to, but the reverse may be true) zaiah and me, and spends a good deal of each day perched on my shoulder. I write from home, and whenever I’m writing, there’s a pretty good chance he’s on my shoulder, nuzzling my ear and purring.

He’s a sweetheart–one of the sweetest cats I’ve ever known, and believe me when I say I’ve known a lot of cats.

Whenever we’re in the bedroom, Beryl likes to sit on a pillow atop the tall set of shelves we have on the wall next to the bed. It didn’t take him long to learn that the bed is soft, so rather than climbing down off the top of the shelves, he will often simply leap, legs all outstretched like a flying squirrel’s, onto the bed.

Now, if I wanted to, I could get a sheet of plywood, put it on top of the bed, then put the blanket over top of it. That way, when Beryl leapt off the shelves, he’d be quite astonished to have his worldview abruptly and unpleasantly upended.

But I wouldn’t do that. I wouldn’t do that for two reasons: (1) I love my cat, and (2) it would be an astonishingly dick thing to do.

That brings us to God.

This is a fossil.

More specifically, it’s a fossil of Macrocranion tupaiodon, an extinct early mammal that lived somewhere between 56 and 34 million years ago and went extinct during the Eocene–Oligocene extinction event.

Now, there are very, very few things in this world that conservative Orthodox Jews, Fundamentalist Muslims, and Evangelical Christians will agree on, but one thing that some of these folks do have in common is the notion that fossils like this one do not actually represent the remains of long-vanished animals, because the world is much younger than such fossils suggest. Most conservative Muslims are more reasonable on this point than their fellow Abrahamic believers, though apparently the notion of an earth only a few thousand years old is beginning to take hold in some parts of the Islamic ideosphere.

That presents a challenge; if the world is very young, whence the fossils? And one of the many explanations put forth to answer the conundrum is the idea that these fossils were placed by a trickster God (or, in some versions of the story, allowed by God to be placed by the devil) for the purpose of testing our faith.

And this, I find profoundly weird.

The one other thing all these various religious traditions agree on is that God loves us* (*some exclusions and limitations apply; offer valid only for certain select groups and/or certain types of people; offer void for heretics, unbelievers, heathens, idolaters, infidels, skeptics, blasphemers, or the faithless).

And I can’t quite wrap my head around the notion of deliberately playing this sort of trick on the folks one loves.

Yes, I could put a sheet of plywood on my bed and cover it with a blanket. But to what possible end? I fear I lack the ability to rightly apprehend what kind of love that would show to my cat.

Which leads me to the inescapable conclusion that a god that would deliberately plant, or allow to be planted, fake evidence contradicting the approved account of creation would be a god that loved mankind rather less than I love my cat.

It seems axiomatic to me that loving someone means having their interests and their happiness at heart. Apparently, however, these believers have a rather unorthodox idea of love. And that is why, I think, one should perhaps not trust this variety of believer who says “I love you.” Invite such a person for dinner, but count the silverware after.

Of Android, iOS, and the Rule of Two Thousand, Part II

In part 1 of this article, I blogged about leaving iOS when I traded my iPhone for an Android-powered HTC Sensation 4G, how I came to detest Android in spite of its theoretical superiority to iOS, and how I eventually came back to the iPhone.

Part 1 talked about the particular handset I had, the T-Mobile version of the Sensation, a phone with such ill-conceived design, astronomically bad build quality, and poor reliability that at the end of the year I was on my third handset under warranty exchange–every one of which failed in exactly the same way.

Today, in Part 2, I’d like to talk about Android itself.


When I first got my Sensation, it was running Android 2.3, code-named “Gingerbread.” Android 3 “Honeycomb” had been out for quite some time, but it was a build aimed primarily at tablets, not phones. When I got my phone, Android 4 “Ice Cream Sandwich” was in the works, ready to be released shortly.

That led to one of my first frustrations with the Android ecosystem–the shoddy, patchwork way that operating system updates are released.

My phone was promised an update in the second half of 2011. This gradually changed to Q4 2011, then to December 2011, then to January 2012, then to Q1 2012. It was finally released on May 16 of 2012, nearly six months after it had been promised.

And I got off lucky. Many Motorola users bought smart phones just before the arrival of Android 4; their phones came with a written guarantee that an update to Android 4 would be published for their phones. It never happened. To add insult to injury, Motorola released a patch for these phones that locked the bootloader, rendering the phone difficult or impossible to upgrade manually with custom ROMs–so even Android enthusiasts couldn’t upgrade the phones.

Now, this is not necessarily Google’s fault. Google makes the base operating system; it is the responsibility of the individual handset manufacturers to customize it for their phones (which often involves shoveling a lot of crapware and garbage programs onto the phone) and then release it for their hardware. Google has done little to encourage manufacturers to backport Android, nor to get manufacturers to offer a consistent user experience with software updates, instead leaving the device manufacturers free to do pretty much as they choose except actually fork Android themselves…which has led to what developers call “platform fragmentation” and to what Motorola Electrify, Photon and Atrix users call things I shan’t repeat in a blog as family-friendly as this one.

But what of the operating system itself?

Well, it’s a mixed bag of mess.


When I first got my Android phone, I noted how the user interface seemed to have been designed by throwing a box of buttons and dialogs and menus over one’s shoulder and then wiring everything up wherever it landed. System settings were scattered across three different places, without it necessarily being obvious where you might find any particular setting. Functionality was duplicated in different places. The Menu button is a mess; it’s filled with whatever the programmer couldn’t find a better place for, with little thought to good UI design.

Android is built on Linux, an operating system that has a great future on the desktop ahead of it, and always will. The Year of Linux on the Desktop was 2000 was 2002 was 2005 was 2008 was 2009 was 2012 will be 2013. Desktop aside, Linux has been a popular server choice for a very long time, because one thing Linux genuinely has going for it is rock-solid reliability. When I was working in Atlanta, I had a Gentoo Linux server that had accumulated well over two years’ continuous uptime and was shut down only because it needed to be moved.

So it is somewhat consternating that Linux on cell phones seems rather fragile.

So fragile, in fact, that my HTC Sensation would pop up a “New T-Mobile Service Notice” alert every week, reminding me to restart the phone. Even the network operators, it would seem, have little confidence in Android’s stability.

It’s a bit disappointing that the one thing I most like about Linux seems absent from Android. Again, though, this might not be Google’s fault directly; the handset makers and network operators do this to themselves, by taking Android and packaging it up with a bunch of craplets of spotty reliability.

One of the things that it is really, really important to be aware of in the Android ecosystem is the way the money flows. You, as a cell phone owner, are not Google’s customer. Google’s customer is the handset manufacturer. You, as a cell phone owner, are not the handset manufacturer’s customer. The handset manufacturer’s customer is the network operator. You are the network operator’s customer–but you are not the network operator’s only customer.

Because of this, the handset maker and the network operator will seek additional revenue streams whenever they can. If someone offers HTC money to bundle some crap app on their phones, HTC will do it. If T-Mobile decides it can get more revenue by bundling its own or someone else’s crap app on your phone, it will.

Not only are you not the customer, at some points along the chain–for the purposes of Google ad revenue, say–you are the product being sold. Whenever you hear people talking about “freedom” or “openness” in the Android ecosystem, never forget that.

I sometimes travel outside the US, mainly to Canada these days. When I do that, my phone really, really, really wants me to turn on data roaming.

There are reasons for that. When you roam, especially internationally, the telcos charge rates for data that would make a Mafia loan shark blush. So Android agreeably nudges you to turn on data roaming, and here’s kind of a sticking point…

Even if you’re connected to the Internet via wifi.

It pops up an alert constantly, and by “constantly” I really do mean constantly. Even when you have wifi access, it pops up every time you switch applications, every time you unlock the phone, and about every twenty minutes when you aren’t using the phone.

Just think of it as Google’s way to help the telcos tap your ass–er, that revenue stream.

This multiple-revenue-streams-from-multiple-customers model has implications, not only for the economics of the ecosystem, but for the reliability of your phone as well. And even for the battery life of your phone.

Take HTC phones on T-Mobile (please!). They come shoveled–err, “bundled”–with an astonishing array of crap. HTC’s mediocre Facebook app. HTC Peep, HTC’s much-worse-than-mediocre Twitter client. Slacker Radio, a client for a B-list Internet radio station.

The presence of all the various crapware that comes preloaded on most Android phones, plus the fact that Android apps don’t quit when they lose focus, generally means that a task manager app is a necessary addition to any Android system…which is fine for the computer literate, but less optimal for folks who aren’t so computer savvy.
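For the curious, the core of what those task-killer apps do is actually quite small. Here is a rough sketch against the Android APIs of that era; the Activity and the package name are invented for illustration, and a real app would of course let you pick which processes to kill:

```java
import android.app.Activity;
import android.app.ActivityManager;
import android.content.Context;
import android.os.Bundle;
import android.util.Log;
import java.util.List;

// Rough sketch of what a task-killer app does under the hood. Needs the
// KILL_BACKGROUND_PROCESSES permission declared in the manifest.
public class TaskKillerActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        ActivityManager am =
                (ActivityManager) getSystemService(Context.ACTIVITY_SERVICE);

        // List what's currently running...
        List<ActivityManager.RunningAppProcessInfo> running =
                am.getRunningAppProcesses();
        if (running != null) {
            for (ActivityManager.RunningAppProcessInfo info : running) {
                Log.d("TaskKiller", "running: " + info.processName);
            }
        }

        // ...then ask Android to kill the background processes of a chosen
        // package (the package name here is just an example).
        am.killBackgroundProcesses("com.example.unwanted.app");

        // Note: a service written to restart itself (START_STICKY) will simply
        // respawn afterward, which is exactly the zombie behavior described below.
    }
}
```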

And it doesn’t always help.

For example, Slacker Radio on my Sensation insists on running all the time at startup, whether I want it to or not:

Killing it with the task manager never works. Within ten minutes after being killed, it somehow respawns, like a zombie in a George Romero movie, shambling after you no matter how many times you shoot it:

The App Manager in the Android control panel has a function to disable an app entirely, even if it’s set to launch at startup. For reasons I was never able to understand, this did not work with Slacker. It was always there. Always. There. It. Never. Goes. Away. You. Can’t. Hide. From. It.

Speaking of that “disable app” functionality…

Oh, goddamnit, no, I don’t want to turn on data roaming. Speaking of that “disable app” functionality, use it with care! I soon learned that disabling some bundled apps can have…unfortunate consequences.

Like HTC Peep, for instance. It’s the only Twitter client for smartphones I have yet found that is even worse than the official Twitter client for smartphones. It loads a system service at startup (absent from the Task Killer screenshots above because I have the task killer set not to display system services). If you let it, it will download your Twitter feed.

And download your Twitter feed.

And download your Twitter feed. It does not cache any of the Twitter messages you read; every time you start its user interface, it re-downloads the whole thing again. The result, as you might imagine, is eyewatering amounts of data usage. If you aren’t one of the lucky few who still has a truly unmetered data plan, think twice about letting Peep have your Twitter account information!

Oh, and don’t try to disable it in the application control panel. If you do, the phone’s unlock screen doesn’t work any more, as I discovered to my chagrin. Seriously.

The official Twitter app isn’t much better…

…but at least it isn’t necessary to unlock the damn phone.

All this crapware does more than eat memory, devour bandwidth, and slow the phone down. It guzzles battery power, too. One of the default Google apps, Google Maps, also starts a service each time the phone boots up, and man, does it hog the battery juice…even if you don’t use Maps at all. (This screen shot, for instance, was taken at a point in time when I hadn’t touched the Maps app in days.)

You will note the battery is nearly exhausted after only four hours and change. I eventually took to killing the Maps service whenever I restarted the phone, which seems to have improved the HTC’s mediocre battery life without actually affecting Maps when I went to use it.

Another place where Android’s lack of a clear and consistent user interface–

AAAAARGH! NO! NO, YOU PATHETIC FUCKING EXCUSE OF A THING, I DO NOT WANT TO TURN ON DATA ROAMING! THAT’S WHY I SAID ‘NO’ THE LAST 167 TIMES YOU ASKED! SO HELP ME, YOU ASK ME ONE MORE TIME AND I WILL TIP YOU STRAIGHT INTO THE NEAREST EMERGENCY INTELLIGENCE INCINERATOR! @$#%%#@!

Sorry, where was I?

Oh, yes. Another place where Android’s lack of a clear and consistent user interface shows is its contact management, which is surely one of the more straightforward bits of functionality any smart phone should have.

Android gives you, or perhaps “makes you take responsibility for,” a level of granularity of the inner workings of its contact database that really seems inappropriate.

It makes distinctions between contacts which are stored on your SIM card, contacts which are stored in the Google contact manager (and synced to the Google cloud), and contacts which are stored in other ways. There are, all in all, about half a dozen ways to store contacts–SIM card, Google cloud, T-Mobile cloud, phone memory card. They all look pretty much the same when you’re browsing your contacts, but different ways to store them have different limitations on the type of data that can be stored.

Furthermore, it’s not immediately obvious how and where any particular contact is stored. Things you might think are being synced by Google might not actually be.

And worse, you can’t, as near as I was ever able to tell, export all your contacts at once. Oh, you can export them, all right; Android lets you save them in a .vcf file which you can then bring to another phone or sync with your computer. But you can’t export ALL of them. You have to choose which SET you export: export all the contacts on your SIM card? Export all your Google contacts? Export all your locally-saved-on-the-phone-memory-card contacts?

When I went in to get my second warranty replacement phone, I asked the technician if there was an easy way to take every contact on the phone and save all of them in one export. He said no, there really isn’t; what he recommended I do was export each group to a different file, then import all those files into my Google contact list, and then finally delete all the duplicates from all the other contact lists.

It worked, but seriously? This is stupid user interface design. It’s a user interface misfeature you might not ever encounter if you always (through luck or choice) save your contacts to the same set, but if for whatever reason you haven’t, God help you.

Yes, I can see why you might want to have separate contact lists, stored and backed up separately. No, that does not excuse the lack of any reasonable way to identify, sort, and merge those contact lists. C’mon, Google engineers, you aren’t even trying.
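If you’re comfortable off the phone, the grunt work of that merge can at least be scripted. Here is a minimal sketch that concatenates several exported .vcf files into one and drops only byte-identical duplicate cards; the file names are hypothetical, and contacts that exist in two sets with slightly different fields will still need to be de-duplicated by hand (or by whatever duplicate-merging tools Google Contacts offers) after import:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Concatenate several exported .vcf files into one, dropping exact-duplicate
// cards. The input file names are hypothetical.
public class MergeVcf {
    public static void main(String[] args) throws IOException {
        List<String> inputs = List.of("sim.vcf", "google.vcf", "phone.vcf");

        // LinkedHashSet collapses byte-identical cards but preserves order.
        Set<String> cards = new LinkedHashSet<>();

        for (String name : inputs) {
            String text = Files.readString(Path.of(name));
            // Each contact is a BEGIN:VCARD ... END:VCARD block; split just
            // before each BEGIN so every chunk is one complete card.
            for (String chunk : text.split("(?=BEGIN:VCARD)")) {
                String card = chunk.trim();
                if (!card.isEmpty()) {
                    cards.add(card);
                }
            }
        }

        Files.writeString(Path.of("merged.vcf"), String.join("\n", cards) + "\n");
        System.out.println("Wrote " + cards.size() + " unique cards to merged.vcf");
    }
}
```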

And speaking of brain-dead user interface design, how about this alert?

What the fuck, Google?

Okay, I get it, I get it. WiFi sharing uses a lot of battery power. The flash uses battery power. Android is just looking out for my best interests, trying to save my battery…

…but don’t all the Fandroids carry on about how much better Android is because it doesn’t force you to do what it thinks is best for you, it lets you decide for yourself? Again I say, what the fuck, Google?


So far, I have complained mostly about the visible bits of Android, the user interface failings and design decisions that demonstrate a lack of any sort of rigorous, cohesive approach to UI design.

Unfortunately, the same problems apply to the internals of Android, too.

One early design decision Google made in the first days of Android concerns the way it handles screen redraws. Google intended for Android to be portable to a wide range of phones, from low-end phones to full-featured smartphones, and so Android does not make use of the same level of GPU acceleration that iOS does. Instead, it uses the CPU to perform many drawing tasks.

This has implications for both performance and usability.

User interface drawing occurs in an application’s main execution thread and is handled primarily by the CPU. (Technically speaking, each element on the screen–buttons, widgets, and so on–is rendered by the CPU, then the GPU handles the compositing.) That means that applications will often block while screen redraws are happening. On HTC Sense, for instance, if you put a clock on the home screen and then you start switching between screens, the clock will freeze for as long as your finger is on the screen.

It also means that tasks like populating a scrolling list are far slower on Android than on iOS, even if the Android device has theoretically better specs. Lists are populated by the CPU, and when you scroll through a list, the entire list is redrawn with each pixel it moves. On iOS, the list is treated as a 2D OpenGL surface; as you scroll through it, the GPU is responsible for updating it. Even on Android smartphones with fast processors, this sometimes causes noticeable UI sluggishness. Worse, if the CPU is interrupted by something else, like updating a background task or running a garbage collection, the UI freezes for an instant.
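Android itself can’t move drawing off the main thread, but the standard advice to app developers, then and now, is at least to keep everything else off of it. Here is a minimal sketch of that pattern, using the real runOnUiThread() call; the Activity and doExpensiveWork() are invented for illustration:

```java
import android.app.Activity;
import android.os.Bundle;
import android.widget.TextView;

// Minimal sketch: heavy work runs on a background thread, and only the final
// UI update is posted back to the main thread, because all drawing happens there.
public class SlowWorkActivity extends Activity {
    private TextView statusView;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        statusView = new TextView(this);
        setContentView(statusView);

        new Thread(new Runnable() {
            @Override
            public void run() {
                final String result = doExpensiveWork(); // off the UI thread
                runOnUiThread(new Runnable() {
                    @Override
                    public void run() {
                        statusView.setText(result); // back on the UI thread to draw
                    }
                });
            }
        }).start();
    }

    private String doExpensiveWork() {
        // Stand-in for parsing, database reads, network calls, and so on.
        return "done";
    }
}
```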

Each successive version of Android has accelerated more graphics functions. Android 4 is significantly better than Android 2.3 in this regard. User input can still be blocked during CPU activity, and background tasks still don’t update UI elements while a foreground thread is doing so (I was disappointed to note that in Android 4, the clock still freezes when you swap pages in HTC Sense), but Android 4’s graphics performance is way, way, waaaaaaay better than it was in 2.3.

There are still some limitations, though. Because UI updates occur in the main execution thread, even in Android 4, background tasks can still end up being blocked while UI updates are in effect. This actually means there are some screen captures I wanted to show you, but can’t.


One place where Android falls down compared to iOS is in its built-in touch keyboard. Yes, hardcore geeks prefer physical keyboards, and Android was developed by hardcore geeks, which might be part of the reason Android’s touch keyboard is so lackluster.

One problem I had in Android 2.3 that I really, really hoped Android 4 would fix, and was sad to note that it didn’t, is that occasionally the touch keyboard just simply does not work.

Intermittently, usually once or twice a day, I would bring up an app–the SMS messenger, say, or a notepad, or the IMO IM app–and I’d start typing. The phone would buzz on each keypress, the key would flash like it does…but nothing would happen. No text would be entered.

And I’d quit the app, and relaunch it, and everything would be fine. Or it wouldn’t, and I’d quit and relaunch the app again, and if it still wasn’t fine, I’d reboot the phone, and force quit Google Maps in the task manager, and everything would be fine.

I tried very hard to get a screen capture of this, but it turns out the screen capture functionality doesn’t work when your finger is on the touch keyboard. As long as your finger is on the keyboard, the main execution thread is busy drawing, and background functions like screen grabs are blocked.

Speaking of the touch keyboard, there’s one place iOS really shines over Android, and that’s figuring out where your finger is on the screen.

That’s harder than it sounds. For one, the part of your finger that first makes contact with the screen might not be where you think it is; it’s not always right in the middle of your finger. For another, when your finger touches the screen, it’s not just a single x,y point that’s being activated. Your finger is big–when you have a high-resolution screen, it’s bigger than you think. A whole lot of area on the touch screen is being activated.

So a lot more deep programming voodoo goes on behind the scenes to figure out where you intended to touch than you might think.
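To give a flavor of the problem (this is a toy illustration, not Apple’s or Google’s actual algorithm), here is about the simplest possible approach: treat the patch of activated sensor cells as a blob and take its signal-weighted centroid as the intended touch point.

```java
// Toy illustration only: estimate the intended touch point as the
// signal-weighted centroid of a small grid of (hypothetical) sensor readings.
public class TouchCentroid {
    public static double[] estimate(double[][] signal) {
        double total = 0, sumX = 0, sumY = 0;
        for (int y = 0; y < signal.length; y++) {
            for (int x = 0; x < signal[y].length; x++) {
                double s = signal[y][x];
                total += s;
                sumX += s * x;
                sumY += s * y;
            }
        }
        if (total == 0) {
            return new double[] {Double.NaN, Double.NaN}; // no touch detected
        }
        return new double[] {sumX / total, sumY / total};
    }

    public static void main(String[] args) {
        // A fingertip pressed hardest around sensor column 2, row 1.
        double[][] touch = {
            {0.0, 0.2, 0.4, 0.1},
            {0.1, 0.7, 0.9, 0.2},
            {0.0, 0.3, 0.5, 0.1},
        };
        double[] p = estimate(touch);
        System.out.printf("Estimated touch at x=%.2f, y=%.2f%n", p[0], p[1]);
    }
}
```

Real keyboards layer far more on top of this (per-key touch statistics, language models that quietly grow the touch targets of likely next letters), and that, presumably, is where the gap between the two platforms lives.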

The keys on an iPhone touch keyboard are physically smaller on the screen than they are on an Android screen, and Android screens are often bigger than iOS screens, too. You’d think that would mean it’s easier to type on an Android phone than an iPhone.

And you’d be wrong. I have found, consistently and repeatably, that my typing accuracy is much better on an iPhone than an Android phone, even when the Android phone has a bigger screen and a bigger keyboard. (One of my friends complains that I have fewer hilarious typos and bizarre autocorrects in my text messages now, since I switched back to the iPhone.)

The deep voodoo in iOS appears to be better than the deep voodoo in Android, and yes, I calibrated my touch screen in Android.

Now, you can get third-party keyboards on Android that are much better. The Swiftkey keyboard for Android is awesome, and I love it. It’s a lot more sophisticated than any other keyboard I’ve tried, no question.

But goddamnit, here’s the thing…if you pay hundreds of dollars for a smart phone with a built-in touch keyboard, you shouldn’t HAVE to buy a third-party keyboard to get good results. Yes, they exist, but that does not excuse the pathetic performance of the stock Android keyboard! It’s like saying “Well, this new operating system isn’t very good at loading files, but that’s not a problem because you can buy a third-party file loader.” The user Should. Not. Have. To. Do. This.

And even if you do buy it, you’re still not paying for the amount of R&D that went into it. It’s a losing proposition for the developer AND for the users.


My new iPhone included iOS 6, which feels much more refined than Android on almost every level.

I would be remiss, however, if I didn’t mention what a lot of folks see as the Achilles’ heel of iOS: its Maps app.

Early iPhones used Google Maps, a solid piece of work that lacked some basic functionality, such as turn-by-turn directions. When I moved to Android, I wrote about how the Maps app in Android was head, shoulders, torso, and kneecaps above the Maps app in iOS, and it was one of the best things about Android.

And then Android 4 came along.

I don’t know what happened to Maps in Android 4. Maybe it’s just a problem on the Sensation. Maybe it’s an issue where the power manager is changing the processor clock speed and Maps doesn’t notice. I don’t know.

But in Android 4, the cheery synthesized female voice that the turn-by-turn directions used got a little…weird.

I mean, it always was weird; you should hear how it pronounces “Caesar E. Chavez Blvd” (something Maps in iOS 6 pronounces just fine, actually). But it got weirder, in that it would alternate between dragging like a record player (does anyone remember those?) with a bad motor and then suddenly speeding up until it sounded like it was snorting a mixture of helium and crystal meth.

It was a bit disconcerting: “In two hundred feet, turn llllllllllleeeeeeeeeeffffffffftttttttt oooooooooonnnnnnnnn twwwwwwwwwwwwweeeeeeeeeeennnnnnnnttttyyyyyyyy–SECONDAVENUEANDTHENTURNRIGHT!” There was never a rhyme or reason to it; it never happened consistently on certain words or in certain places.

Now, Maps on iOS has been slammed all over Hell and back by the Internetverse. Any mapping program is going to have glitches (Google places a street that a friend of mine lives on about two and a half miles from where it actually is, in the middle of an empty field), but iOS apparently has a whole lot of very silly errors.

I say “apparently” because I haven’t personally encountered any yet, knock on data.

It was perhaps inevitable that Apple would eventually roll their own app (if by “roll their own” you mean “buy map data from TomTom”), because Google refused to license turn-by-turn navigation to Apple, precisely to create a product differentiation point–to make bloggers like me say things like “Wow, Google’s Android Maps app sure is better than the one on iOS!” That was a strategy that couldn’t last forever, and Google should have known that, but… *shrug* Whatever. In losing the contract to supply Apple’s Maps app, Google gave up a revenue stream reportedly larger than everything it makes from Android; if they were willing to throw that away rather than let Apple have turn-by-turn directions, they really couldn’t have expected anything else.

In part 3 of this thing, I’ll talk about T-Mobile, and how they’re so hopelessly dysfunctional as a telecommunication provider they make the North Korean government look like a model of efficiency.

Some thoughts on post-scarcity societies

One of my favorite writers at the moment is Iain M. Banks. Under that name, he writes science fiction set in a post-scarcity society called the Culture, where he deals with political intrigue and moral issues and technology and society on a scale that almost nobody else has ever tried. (In fact, his novel Use of Weapons is my all-time favorite book, and I’ve written about it at great length here.) Under the name Iain Banks, he writes grim and often depressing novels not related to science fiction, and wins lots of awards.

The Culture novels are interesting to me because they are imagination writ large. Conventional science fiction, whether it’s the cyberpunk dystopia of William Gibson or the bland, banal sterility of (God help us) Star Trek, imagines a world that’s quite recognizable to us….or at least to those of us who are white 20th-century Westerners. (It’s always bugged me that the alien races in Star Trek are not really very alien at all; they are more like conventional middle-class white Americans than even, say, Japanese society is, and way less alien than the Serra do Sol tribe of the Amazon basin.) They imagine a future that’s pretty much the same as the present, only more so; “Bones” McCoy, a physician, talks about how death at the ripe old age of 80 is part of Nature’s plan, as he rides around in a spaceship made by welding plates of steel together.


Image from Wikimedia Commons by Hill – Giuseppe Gerbino

In the Culture, by way of contrast, everything is made by atomic-level nanotech assembly processes. Macroengineering exists on a huge scale, so huge that the vast majority of the Culture’s citizens live on orbitals–artificially constructed habitats encircling a star. (One could live on a planet, of course, in much the way that a modern person could live in a cave if she wanted to; but why?) The largest spacecraft, General Systems Vehicles, have populations that range from the tens of millions to six billion or more. Virtually limitless sources of energy (something I’m planning to blog about later) and virtually unlimited technical ability to make just about anything from raw atoms mean that there is no such thing as scarcity; whatever any person needs, that person can have, immediately and for free. And the definition of “person” goes much further, too; whereas in the Star Trek universe, people are still struggling with the idea that a sentient android might be a person, in the Culture, personhood theory (something else about which I plan to write) is the bedrock upon which all other moral and ethical systems are built. Many of the Culture’s citizens are drones or Minds–non-biological computers, of a sort, that range from about as smart as a human to millions of times smarter. Calling them “computers” really is an injustice; it’s about on par with calling a modern supercomputer a string of counting beads. Spacecraft and orbitals are controlled by vast Minds far in advance of unaugmented human intellect.

I had a dream, a while ago, about the Enterprise from Star Trek encountering a General Systems Vehicle, and the hilarity that ensued when they spoke to each other: “Why, hello, Captain Kirk of the Enterprise! I am the GSV Total Internal Reflection of the Culture. You came here in that? How…remarkably courageous of you!”

And speaking of humans…

The biological people in the Culture are the products of advanced technology just as much as the Minds are. They have been altered in many ways; their immune systems are far more resilient, they have much greater conscious control over their bodies; they have almost unlimited life expectancies; they are almost entirely free of disease and aging. Against this backdrop, the stories of the Culture take place.

Banks has written a quick overview of the Culture, and its technological and moral roots, here. A lot of the Culture novels are, in a sense, morality plays; Banks uses the idea of a post-scarcity society to examine everything from bioethics to social structures to moral values.


In the Culture novels, much of the society is depicted as pretty Utopian. Why wouldn’t it be? There’s no scarcity, no starvation, no lack of resources or space. Because of that, there’s little need for conflict; there’s neither land nor resources to fight over. There’s very little need for struggle of any kind; anyone who wants nothing but idle luxury can have it.

For that reason, most of the Culture novels concern themselves with Contact, that part of the Culture which is involved with alien, non-Culture civilizations; and especially with Special Circumstances, that part of Contact whose dealings with other civilizations extends into the realm of covert manipulation, subterfuge, and dirty tricks.

Of which there are many, as the Culture isn’t the only technologically sophisticated player on the scene.

But I wonder…would a post-scarcity society necessarily be Utopian?

Banks makes a case, and I think a good one, for the notion that a society’s moral values depend to a great extent on its wealth and the difficulty, or lack thereof, of its existence. Certainly, there are parallels in human history. I have heard it argued, for example, that societies from harsh desert climates produce harsh moral codes, which is why we see commandments in Leviticus detailing at great length and with an almost maniacal glee whom to stone, when to stone them, and where to splash their blood after you’ve stoned them. As societies become more civil and more wealthy, as every day becomes less of a struggle to survive, those moral values soften. Today, even the most die-hard of the evangelical “execute all the gays” Biblical literalists rarely speaks out in favor of stoning women who are not virgins on their wedding night, or executing people for picking up a bundle of sticks on the Sabbath, or dealing with the crime of rape by putting to death both the rapist and the victim.

I’ve even seen it argued that as civilizations become more prosperous, their moral values must become less harsh. In a small nomadic desert tribe, someone who isn’t a team player threatens the lives of the entire tribe. In a large, complex, pluralistic society, someone who is too xenophobic, too zealous in his desire to kill anyone not like himself, threatens the peace, prosperity, and economic competitiveness of the society. The United States might be something of an aberration in this regard, as we are both the wealthiest and the most totalitarian of the Western countries, but in the overall scope of human history we’re still remarkably progressive. (We are becoming less so, turning more xenophobic and rabidly religious as our economic and military power wane; I’m not sure that the one is directly the cause of the other, but those two things definitely seem to be related.)

In the Culture novels, Banks imagines this trend as a straight line going onward; as societies become post-scarcity, they tend to become tolerant, peaceful, and Utopian to an extreme that we would find almost incomprehensible, Special Circumstances aside. There are tiny microsocieties within the Culture that are harsh and murderously intolerant, such as the Eaters in the novel Consider Phlebas, but they are also not post-scarcity; the Eaters have created a tiny society in which they have very little and every day is a struggle for survival.


We don’t have any models of post-scarcity societies to look at, so it’s hard to do anything beyond conjecture. But we do have examples of societies that had little in the way of competition, that had rich resources and no aggressive neighbors to contend with, and had very high standards of living for the time in which they existed that included lots of leisure time and few immediate threats to their survival.

One such society might be the Aztec empire, which spread through the central parts of modern-day Mexico during the 15th and early 16th centuries. The Aztecs were technologically sophisticated and built a sprawling empire based on a combination of trade, military might, and tribute.

Because they required conquered people to pay vast sums of tribute, the Aztecs themselves were wealthy and comfortable. Though they were not industrialized, they lacked for little. Even commoners had what was for the time a high standard of living.

And yet, they were about the furthest thing from Utopian it’s possible to imagine.

The religious traditions of the Aztecs were bloodthirsty in the extreme. So voracious was their appetite for human sacrifices that they would sometimes conquer neighbors just to capture a steady stream of sacrificial victims. Commoners could make money by selling their daughters for sacrifice. Aztec records document tens of thousands of sacrifices just for the dedication of a single temple.

So they wanted for little, had no external threats, had a safe and secure civilization with a stable, thriving economy…and they turned monstrous, with a contempt for human life and a complete disregard for human value that would have made Pol Pot blush. Clearly, complex, secure, stable societies don’t always move toward moral systems that value human life, tolerate diversity, and promote individual dignity and autonomy. In fact, the Aztecs, as they became stronger, more secure, and more stable, seemed to become more bloodthirsty, not less. So why is that? What does that say about hypothetical societies that really are post-scarcity?

One possibility is that where there is no conflict, people feel a need to create it. The Aztecs fought ritual wars, called “flower wars,” with some of their neighbors–wars not over resources or land, but whose purpose was to supply humans for sacrifice.

Now, flower wars might have had a prosaic function not directly connected with religious human sacrifice, of course. Many societies use warfare as a means of disposing of populations of surplus men, who can otherwise lead to social and political unrest. In a civilization that has virtually unlimited space, that’s not a problem; in societies which are geographically bounded, it is. (Even for modern, industrialized nations.)

Still, religion unquestionably played a part. The Aztecs were bloodthirsty at least to some degree because they practiced a bloodthirsty religion, and vice versa. This, I think, indicates that a society’s moral values don’t spring entirely from what is most conducive to that society’s survival. While the things that a society must do in order to survive, and the factors that are most valuable to a society’s functioning at whatever level it finds itself, will affect that society’s religious beliefs (and those beliefs will change to some extent as the needs of the society change), there would seem to be at least some corner of a society’s moral structures that are entirely irrational and completely divorced from what would best serve that society. The Aztecs may be an extreme example of this.

So what does that mean to a post-scarcity society?

It means that a post-scarcity society, even though it has no need of war or conflict, may still have both war and conflict, despite the fact that they serve no rational role. There is no guarantee that a post-scarcity society necessarily must be a rationalist society; while reaching the point of post scarcity does require rationality, at least in the scientific and technological arts, there’s not necessarily any compelling reason to assume that a society that has reached that point must stay rational.

And a post-scarcity society that enshrines irrational beliefs, and has contempt for the value of human life, would be a very scary thing indeed. Imagine a society of limitless wealth and technological prowess that has a morality based on a literalistic interpretation of Leviticus, for instance, in which women really are stoned to death if they aren’t virgins on their wedding night. There wouldn’t necessarily be any compelling reason for a post-scarcity society not to adopt such beliefs; after all, human beings are a renewable resource too, so it would cost the society little to treat its members with indifference.

As much as I love the Culture (and the idea of post-scarcity societies in general), I don’t think it’s a given that they would be Utopian.

Perhaps as we continue to advance technologically, we will continue to domesticate ourselves, so that the idea of being pointlessly cruel and warlike would seem quite horrifying to our descendants who reach that point. But if I were asked to make a bet on it, I’m not entirely sure which way I’d bet.