Why We’re All Idiots: Credulity, Framing, and the Entrenchment Effect

The United States is unusual among First World nations in that it has only two political parties.

Well, technically, I suppose we have more, but only two that matter: Democrats and Republicans. They are popularly portrayed in American mass media as “liberals” and “conservatives,” though that’s not really true; in world terms, they’re actually “moderate conservatives” and “reactionaries.” A serious liberal political party doesn’t exist; when you compare the Democratic and Republican parties, you see a lot of across-the-board agreement on things like drug prohibition (both parties largely agree that recreational drug use should be outlawed), the use of American military might abroad, and so on.

A lot of folks mistakenly believe that this means there are no real differences between the two parties. This is nonsense, of course; there are significant differences, primarily in areas like religion (where the Democrats would, on a European scale, be called “conservatives” and the Republicans would be called “radicals”); social issues like sex and relationships (where the Democrats tend to be moderates and the Republicans tend to be far right); and economic policy (where Democrats tend to be center-right and Republicans tend to be so far right they can’t tie their left shoe).

Wherever you find people talking about politics, you find people calling the members of the opposing side “idiots.” Each side believes the other to be made up of morons and fools…and, to be fair, each side is right. We’re all idiots, and there are powerful psychological factors that make us idiots.


The fact that we think of Democrats as “liberal” and Republicans as “conservative” illustrates one area where Republicans are quite different from Democrats: their ability to frame issues.

The American political landscape for the last three years has been dominated by a great deal of shouting and screaming over health care reform.

And the sentence you just read shows how important framing is. Because, you see, we haven’t actually been discussing health care reform at all.

Despite all the screaming, and all the blogging, and all the hysterical foaming on talk radio, and all the arguments online, almost nobody has actually read the legislation signed into law, after much wailing and gnashing of teeth, by President Obama.

And if you do read it, there’s one thing about it that may jump out at you: it isn’t about health care at all. It barely even talks about health care per se. It’s actually about health insurance. It provides a new framework for health insurance legislation, it restricts health insurance companies’ ability to deny coverage on the basis of pre-existing conditions, it seeks to make insurance more portable…in short, it is health insurance reform, not health care reform. The fact that everyone is talking about health care reform is a tribute to the power of framing.


In any discussion, the person who controls how the issue in question is framed controls the debate. Control the framing and you control how people think about it.

Talking about health care reform rather than health insurance reform leads to an image in people’s minds of the government going into a hospital operating room or a doctor’s exam room and telling the doctor what to do. Talking about health insurance reform gives rise to mental images of government beancounters arguing with health insurance beancounters about the proper way to notate an exemption to the requirements for filing a release of benefits form–a much less emotionally compelling image.

Simply by re-casting “health insurance reform” as “health care reform,” the Republicans created the emotional landscape on which the war would be fought. Middle-class working Americans would not swarm to the defense of the insurance industry and its über-rich executives. Recast it as government involvement between a doctor and a patient, however, and the tone changed.

Framing matters. Because people, by and large, vote their identity rather than their interests, if you can frame an issue in a way that appeals to a person’s sense of self, you can often get him to agree with you even if by agreeing with you he does harm to himself.

I know a woman who is an atheist, non-monogamous, bisexual single mom who supports gay marriage. In short, she hits just about every ticky-box in the list of things that “family values” Republicans hate. The current crop of Republican political candidates, all of them, have at one point or another voiced their opposition to each one of these things.

Yet she only votes Republican. Why? Because she says she believes, as the Republicans believe, that poor people should just get jobs instead of lazing about watching TV and sucking off hardworking taxpayers’ labor.

That’s the way we frame poverty in this country: poor people are poor because they are just too lazy to get a fucking job already.

That framing is extraordinarily powerful. It doesn’t matter that it has nothing to do with reality. According to the US Census Bureau, as of December 2011 46,200,000 Americans (or 15.1% of the total population) live in poverty. According to the US Department of Labor, 11.7% of the total US population had employment but were still poor. In other words, the vast majority of poor people have jobs–especially when you consider that some of the people included in the Census Bureau’s statistics are children, and therefore not part of the labor force.

Framing the issue of poverty as “lazy people who won’t get a job” helps deflect attention away from the real causes of poverty, and also serves as a technique to manipulate people into supporting positions and policies that act against their own interests.

But framing only works if you do it at the start. Revealing how someone has misleadingly framed a discussion after it has begun is not effective at changing people’s minds, because of a cognitive bias called the entrenchment effect.


A recurring image in US politics is the notion of the “welfare queen”–a hypothetical person, invariably black, who becomes wealthy by living on government subsidies. The popular notion has this black woman driving around the low-rent neighborhood in a Cadillac, which she bought by having dozens and dozens of babies so that she could receive welfare checks for each one.

The notion largely traces back to Ronald Reagan, who during his campaign in 1976 talked over and over (and over and over and over and over) about a woman in Chicago who used various aliases to get rich by scamming huge amounts of welfare payments from the government.

The problem is, this person didn’t exist. She was entirely, 100% fictional. The notion of a “welfare queen” doesn’t even make sense; having a lot of children but subsisting only on welfare doesn’t increase your standard of living, it lowers it. The extra benefits given to families with children do not entirely offset the costs of raising children.

Leaving aside the overt racism in the notion of the “welfare queen” (most welfare recipients are white, not black), a person who thinks of welfare recipients this way probably won’t change his mind no matter what the facts are. We all like to believe ourselves to be rational; we believe we have adopted our ideas because we’ve considered the available information rationally, and that if evidence that contradicts our ideas is presented, we will evaluate it rationally. But nothing could be further from the truth.

In 2006, two political scientists, Brendan Nyhan and Jason Reifler, ran a study in which subjects read mock news articles containing a misleading claim that supported something the subjects already believed. The subjects were then shown a correction, backed by evidence, demonstrating that the claim–and thus their belief–was false.

The result: The subjects became even more convinced that their beliefs were true. In fact, the stronger the evidence, the more insistently the subjects clung to their false beliefs.

This effect, which is now referred to as the “entrenchment effect” or the “backfire effect,” is remarkably common. A person who holds a belief, when shown hard physical evidence that the belief is false, comes away with an even stronger belief that it is true. The stronger the evidence, the more firmly the person holds on.

The entrenchment effect is a form of “motivated reasoning.” Generally speaking, what happens is that a person who is confronted with a piece of evidence showing that his beliefs are wrong will respond by mentally going through all the reasons he started holding that belief in the first place. The stronger the evidence, the more the person repeats his original line of reasoning. The more the person rehearses the original reasoning that led him to the incorrect belief, the more he believes it to be true.

This is especially true if the belief carries an emotional charge. There is a part of the brain called the amygdala which is, among other things, a kind of “emotional memory center.” That’s a bit oversimplified, but essentially true; when you recall a memory that has an emotional charge, the amygdala mediates your recall of the emotion that goes along with the memory; you feel that emotion again. When you rehearse the reasons you first subscribed to your belief, you re-experience the emotions again–reinforcing the belief and making it feel more compelling.

This isn’t just a right/left thing, either.

Say, for example, you’re afraid of nuclear power. A lot of people, particularly self-identified liberals, are. If you are presented with evidence that shows that nuclear power, in terms of human deaths per terawatt-hour of power produced, is by far the safest of all forms of power generation, it is unlikely to change your mind about the dangers of nuclear power one bit.

The most dangerous form of power generation is coal. In addition to killing tens of thousands of people a year, mostly through air pollution, coal also releases quite a lot of radiation into the environment. Coal beds contain trace amounts of radioactive uranium and thorium, which become concentrated in the ash when the coal is burned; coal plants consume so much coal–huge freight trains of it–that the fly ash left over from burning those millions of tons of coal carries more radioactivity into the surrounding environment than a normally operating nuclear plant releases. So many people die directly or indirectly as a result of coal-fired power generation that if we had a Chernobyl-sized meltdown every four years, it would STILL kill fewer people than coal.

If you’re afraid of nuclear power, that argument didn’t make a dent in your beliefs. You mentally went back over the reasons you’re afraid of nuclear power, and your amygdala reactivated your fear…which in turn prevented you from seriously considering the idea that nuclear might not be as dangerous as you feel it is.

If you’re afraid of socialism, then arguments about health reform won’t affect you. It won’t matter to you that health care reform is actually health insurance reform, or that the supposed “liberal” health care reform law was actually built largely from Republican ideas (many of the health insurance reforms in the Federal package are modeled on the Massachusetts law signed by none other than Mitt Romney; the provisions expanding health coverage for children were written by Republican senator Orrin Hatch of Utah; and the expansion of the Medicare drug program was written by Republican Representative Dennis Hastert of Illinois), or that it’s about as socialist as Goldman Sachs (the law does not nationalize hospitals, make doctors into government employees, or in any other way socialize the health care infrastructure). You will see this information, you will think about the things that originally led you to see the Republican-style health-insurance reform law as “socialized Obamacare,” and you’ll remember your emotional reaction while you do it.

Same goes for just about any argument with an emotional component–gun control, abortion, you name it.

This is why folks on both sides of the political divide think of one another as “idiots.” That person who opposes nuclear power? Obviously an idiot; only an idiot could so blindly ignore hard, solid evidence about the safety of nuclear power compared to any other form of power generation. Those people who hate Obamacare? Clearly they’re morons; how else could they cling to the nonsense that it was written by Democrats for the purpose of socializing medicine?

Clever framing allows us to be led to beliefs that we would otherwise not hold; once there, the entrenchment effect keeps us there. In that way, we are all idiots. Yes, even me. And you.

Transhumanism, Technology, and the da Vinci Effect

[Note: There is a followup to this essay here]

Ray Kurzweil pisses me off.

His name came up last night at Science Pub, which is a regular event, hosted by a friend of mine, that brings in guest speakers on a wide range of different science and technology related topics to talk in front of an audience at a large pub. There’s beer and pizza and really smart scientists talking about things they’re really passionate about, and if you live in Portland, Oregon (or Eugene or Hillsboro; my friend is branching out), I can’t recommend them enough.

Before I can talk about why Ray Kurzweil pisses me off–or, more precisely, before I can talk about some of the reasons Ray Kurzweil pisses me off, as an exhaustive list would most surely strain my patience to write and your patience to read–it is first necessary to talk about what I call the “da Vinci effect.”


Leonardo da Vinci is, in my opinion, one of the greatest human beings who has ever lived. He embodies the best in our desire to learn; he was interested in painting and sculpture and anatomy and engineering and just about every other thing worth knowing about, and he took time off from creating some of the most incredible works of art the human species has yet created to sketch designs anticipating the helicopter, the armored personnel carrier, the barrel spring, the machine gun, and the automated artillery fuze…pausing along the way to record innovations in geography, hydraulics, music, and a whole lot of other stuff.

However, most of his inventions, while sound in principle, were crippled by the fact that he could not conceive of any power source other than muscle power. The steam engine was still more than two and a half centuries away; the internal combustion engine, another half-century or so after that.

da Vinci had the ability to anticipate the broad outlines of some really amazing things, but he could not build them, because he lacked one essential element whose design and operation were way beyond him or the society he lived in, both in theory and in practice.

I tend to call this the “da Vinci effect”–the ability to see how something might be possible, but to be missing one key component that’s so far ahead of the technology of the day that it’s not possible even to hypothesize, except perhaps in broad, general terms, how it might work, and not possible even to anticipate with any kind of accuracy how long it might take before the thing becomes reachable.


Charles Babbage’s Difference Engine is another example of an idea whose realization was held back by the da Vinci effect.

Babbage reasoned–quite accurately–that it was possible to build a machine capable of mathematical computation. He also reasoned that it would be possible to construct such a machine in such a way that it could be fed a program–a sequence of logical steps, each representing some operation to carry out–and that on the conclusion of such a program, the machine would have solved a problem. This last bit differentiated his conception of a computational engine from other devices (such as the Antikythera mechanism) which were built to solve one particular problem and could not be programmed.

The technology of his time, specifically with respect to precision metal casting, meant his design for a mechanical computer was never realized in his lifetime. Today we use devices every day that operate on principles he imagined, but they aren’t mechanical; in place of gears and levers, they use gates that control the flow of electrons–something he could never have envisioned given the understanding of his time.


One of the speakers at last night’s Science Pub was Dr. Larry Sherman, a neurobiologist and musician who runs a research lab here in Oregon that’s currently doing a lot of cutting-edge work in neurobiology. He’s one of my heroes1; I’ve seen him present several times now, and he’s a fantastic speaker.

Now, when I was in school studying neurobiology, things were very simple. You had two kinds of cells in your brain: neurons, which did all the heavy lifting involved in the process of cognition and behavior, and glial cells, which provided support for the neurons, nourished them, repaired damage, and cleaned up the debris from injury or dead cells.

There are a couple of broad classifications for glial cells: astrocytes and microglia. Astrocytes, shown in green in this picture, provide a physical scaffold to hold neurons (in blue) in place. They wrap the axons of neurons in protective sheaths and they absorb nutrients and oxygen from blood vessels, which they then pass on to the neurons. Microglia are cells that are kind of like little amoebas; they swim around in your brain locating dead or dying cells, pathogens, and other forms of debris, and eating them.

So that’s the background.


Ray Kurzweil is a self-styled “futurist,” transhumanist, and author. He’s also a Pollyanna with little real rubber-on-road understanding of the challenges that nanotechnology and biotechnology face. He talks a great deal about AI, human/machine interfaces, and uploading–the process of modeling a brain in a computer such that the computer is conscious and aware, with all the knowledge and personality of the person being modeled.

He gets a lot of it wrong, but it’s the last bit he gets really wrong. Not the general outlines, mind you, but certainly the timetable. He’s the guy who looks at da Vinci’s notebook and says “Wow, a flying machine? That’s awesome! Look how detailed these drawings are. I bet we could build one of these by next spring!”

Anyway, his name came up during the Q&A at Science Pub, and I kind of groaned. Not as much as I did when Dr. Sherman suggested that a person whose neurons had been replaced with mechanical analogues wouldn’t be a person any more, but I groaned nonetheless.

Afterward, I had a chance to talk to Dr. Sherman briefly. The conversation was short; only just long enough for him to completely blow my mind, make me believe that a lot of ideas about uploading are limited by the da Vinci effect, and to suggest that much brain modeling research currently going on is (in his words) “totally wrong”.


It turns out that most of what I was taught about neurobiology was utterly wrong. Our understanding of the brain has exploded in the last few decades. We’ve learned that people can and do grow new brain cells all the time, throughout their lives. And we’ve learned that the glial cells do a whole lot more than we thought they did.

Astrocytes, long believed to be nothing but scaffolding and cafeteria workers, are strongly implicated in learning and cognition, as it turns out. They not only support the neurons in your brain, but they guide the formation of new neural connections, the process by which memory and learning work. They promote the growth of new neural pathways, and they also determine (at least to some degree) how and where those new pathways form.

In fact, human beings have more distinct types of astrocytes than other vertebrates do. According to my brief conversation with Dr. Sherman, researchers have taken human astrocytes, implanted them in developing mice, and observed an apparent increase in the cognitive functions of those mice even though the neurons themselves were no different.

And, more recently, it turns out that microglia–the garbage collectors and scavengers of the brain–can influence high-order behavior as well.

The last bit is really important, and it involves hox genes.


A quick overview of hox genes: these are genes which control the expression of other genes, and which are involved in determining how an organism’s body develops. You (and monkeys and mice and fruit flies and earthworms) have hox genes–pretty much the same hox genes, in fact–that represent an overall “body plan.” They do things like say “Ah, this bit will become a torso, so I will switch on the genes that correspond to forming arms and legs here, and switch off the genes responsible for making eyeballs or toes.” Or “This bit is the head, so I will switch on the eyeball-forming genes and the mouth-forming genes, and switch off the leg-forming genes.”

Mutations to hox genes generally cause gross physical abnormalities. In fruit flies, incorrect hox gene expression can cause the fly to sprout legs instead of antennae, or to grow wings from strange parts of its body. In humans, hox gene malfunctions can cause a number of really bizarre and usually fatal birth defects–growing tiny limbs out of eye sockets, that sort of thing.

And it appears that a hox gene mutation can result in obsessive-compulsive disorder.

And more bizarrely than that, this hox gene mutation affects the way microglia form.


Think about how bizarre that is for a minute. The genes responsible for regulating overall body plan can cause changes in microglia–little amoeba scavengers that roam around in the brain. And that change to those scavengers can result in gross high-level behavioral differences.

Not only are we not in Kansas any more, we’re not even on the same continent. This is absolutely not what anyone would expect, given our knowledge of the brain even twenty years ago.

Which brings us back ’round to da Vinci.


Right now, most attempts to model the brain look only at the neurons, and disregard the glial cells. Now, there’s value to this. The brain is really (really really really) complex, and just developing tools able to model billions of cells and hundreds or thousands of billions of interconnections is really, really hard. We’re laying the foundation, even with simple models, that lets us construct the computational and informatics tools for handling a problem of mind-boggling scope.

But there’s still a critical bit missing. Or critical bits, really. We’re missing the computational bits that would allow us to model a system of this size and scope, or even to be able to map out such a system for the purpose of modeling it. A lot of folks blithely assume Moore’s Law will take care of that for us, but I’m not so sure. Even assuming a computer of infinite power and capability, if you want to upload a person, you still have the task of being able to read the states and connection pathways of many billions of very small cells, and I’m not convinced we even know quite what those tools look like yet.

But on top of that, when you consider that we’re missing a big part of the picture of how cognition happens–we’re looking at only one part of the system, and the mechanism by which glial cells promote, regulate, and influence high-level cognitive tasks is astonishingly poorly understood–it becomes clear (at least to me, anyway) that uploading is something that isn’t going to happen soon.

We can, like da Vinci, sketch out the principles by which it might work. Nothing in the laws of physics suggests it can’t be done, and in fact I believe that it absolutely can and will, eventually, be done.

But the more I look at the problem, the more it seems to me that there’s a key bit missing. And I don’t think we’re even in a position yet to figure out what that key bit looks like, much less how it can be built. It may well be that when we do model brains, the model won’t look anything like what we think of as a conventional computer at all–much as, when we finally built general-purpose programmable devices, they didn’t look like Babbage’s difference engines at all.


1 Or would be, if it weren’t for the fact that he rejects personhood theory, which is something I’m still a bit surprised by. If I ever have the opportunity to talk with him over dinner, I want to discuss personhood theory with him, oh yes.

Random psycholinguistics musings

A couple of days ago, while I was in the shower, I started thinking about an old experiment that one of my former professors had talked about in one of my linguistics classes way back in the dim days of my misspent youth.

If I recall correctly, the experiment, which was done in the 1940s or 1950s and for which I sadly don’t have a citation, was one of the endless series of attempts to ‘prove’ the superiority of whites that were so trendy back then. It involved taking random lists of numbers and asking folks of different races to memorize them.

The results seemed to fit with the racist orthodoxy of the time. Whites and Asians performed best, learning to memorize longer lists of numbers more successfully than, say, Africans.

But another researcher noticed something interesting: success at learning to memorize long lists of numbers varied not with the race of the person doing it so much as with the language of that person. In English, all of the numbers between one and ten are single syllables, except for “seven,” which has two. In Japanese (I’m told), all of the numbers between one and ten have one-syllable names. In some other languages, some of the numbers between one and ten have multiple syllables.

People’s performance on tests involving memorizing numbers varies not with the race of the person, but with the person’s native language–more specifically, with the number of syllables for the various digits in that language. People whose native languages were English or Japanese outperformed people whose native language contained many terms for digits that were two or three syllables long, regardless of their race.

When we memorize a list of numbers, it seems, we’re not memorizing the shapes of the numbers or even a concept of what the numbers mean; we’re memorizing words. We rehearse the list of numbers as though we were hearing it or speaking it. (This definitely seems to be what I do; if I’m trying to remember “813-555-7123,” what I do is I say the numbers to myself: “eight one three five five five seven one two three.”)

So that got me to thinking about whether or not what psychologists and cognitive scientists call the “short-term buffer,” which is the place where we stick stuff we’re trying to remember right now, has a limited capacity in terms of syllables as well as in terms of chunks. (The notion that we easily remember lists of seven plus or minus two numbers depends on how we chunk them; I remember “1966,” the year I was born, as a single chunk, not as four digits.)

Anyway, while I was washing my hair, I started wondering if the same concept applies to things other than numbers, such as arbitrary lists of shapes. Imagine a list of shapes, laid out and named like so:

Some of these shapes have names that are one syllable long, some have two-syllable names, and some have three-syllable names. To front-load the experiment, the researcher could describe the shapes by name (to ensure that everyone was using the same names for the shapes), or could even give all the test subjects a copy of this chart.

Now, if there is a correlation between the number of elements that can be stored in short-term memory and recalled and the number of syllables that the words for those elements have, then I would expect that people would consistently do better when asked to memorize lists like dot-dot-square-grid-circle-dot-ellipse-square than lists like triangle-triangle-square-rhombus-hexagon-triangle-ellipse-square. Performance should vary not only with the length of the list but also with the number of syllables in the names of the shapes in the list.
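If the hypothesis is right, recall should track total syllable count, not just list length. Here is a minimal sketch of how the stimulus lists might be scored, using the two example lists above; the syllable counts assigned to each shape name are my own assumptions, not values from any actual study.

```python
# Syllable counts for each shape name (illustrative assumptions).
SYLLABLES = {
    "dot": 1, "grid": 1, "square": 1,
    "circle": 2, "ellipse": 2, "rhombus": 2,
    "triangle": 3, "hexagon": 3,
}

def syllable_load(shapes):
    """Total number of syllables a subject must rehearse for a list."""
    return sum(SYLLABLES[s] for s in shapes)

# The two example lists from the text are the same length (eight items)
# but carry very different rehearsal loads:
easy = "dot dot square grid circle dot ellipse square".split()
hard = "triangle triangle square rhombus hexagon triangle ellipse square".split()

print(syllable_load(easy))  # 10 syllables
print(syllable_load(hard))  # 18 syllables
```

The prediction, then, is that recall accuracy should fall off with the syllable load, so subjects would do measurably worse on the second list even though both contain eight items.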

So yeah, that’s the kind of thing that runs through my head in the morning. Anyone want to fund me?

Science is hard

I have two sweeties in school right now pursuing postgraduate degrees related in some way to neuroscience, brain mapping, or brain modeling.

Brain mapping is hard. Really, really, really hard.

It’s not just that there are a lot of neurons in the brain (though there are–about 100 billion1 or so). It’s not just that they’re wired together in beastly complicated ways, though that, too, is true.

It’s that “beastly complicated” doesn’t even begin to cover it.

This is a drawing of a type of brain cell called a Purkinje cell, taken from a 1918 copy of Gray’s Anatomy. 1918! We’ve known about these things for a long time:

There are a lot of these in your brain, mostly in your voluntary motor control areas. A single Purkinje cell has one axon, which is basically a nerve cell’s output, and a dendritic tree that can receive as many as 200,000 inputs. Purkinje cells regulate motor control, primarily by inhibiting other neurons from firing. All your motor control is mediated by these brain cells. They’re also hooked into “climbing fibers,” axons from other neurons which pass from the center parts of your brain outward.

At rest, these guys fire regularly, sending inhibitory signals to neurons deeper down. When activated, they fire much more rapidly, more strongly inhibiting downstream neurons. All well and good, but…

…a single Purkinje cell can have two hundred thousand inputs. Read that again so that the pure horror has time to sink in. A single Purkinje cell can have two hundred thousand inputs.

So, say you want to map a person’s brain: that basically means recording, for each brain cell, a list of all the other brain cells it links to. If you had 100 brain cells and each one could link to one other cell, you’d have, potentially, 100 links to record. If you had 100 brain cells and each one could link to 10 other cells, you’d have 100 times 10, or 1,000, links to record. If you had 100 brain cells and each could link to 20 other cells, you’d have 100 times 20, or 2,000, links to record. Makes sense, right?

And if you have 100 billion cells, and each cell can link to 200,000 other cells, you have 100,000,000,000 times 200,000 links to record.

This is a really, really, really big number–within a few orders of magnitude of the number of grains of sand on the entire freaking planet. Imagine tagging, isolating, and recording the relative position of every freaking grain of sand on the entire freaking planet and you’ll start to gain an appreciation of the magnitude of the challenge involved in mapping a human brain.
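The arithmetic is easy to check. A quick back-of-the-envelope calculation, treating the Purkinje cell’s 200,000 inputs as an upper bound for every neuron (which overstates most cell types, so this is a ceiling, not an estimate):

```python
# Upper-bound estimate of the number of connections to record.
neurons = 100_000_000_000    # ~10^11 neurons in a human brain
inputs_per_neuron = 200_000  # Purkinje-scale connectivity, an upper bound

links = neurons * inputs_per_neuron
print(f"{links:.0e} links")  # 2e+16 links, i.e. twenty quadrillion
```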

Even your own DNA doesn’t record this information–it can’t. If you were to dedicate the entire information storage capacity of the entire human genome just to mapping the connections between all your brain cells, you’d fall short by several orders of magnitude. The process of building a brain is dynamic; your DNA only describes the gross physical structure, and then as your brain forms it wires itself up more or less randomly2. That’s why it takes such a long time to make a human brain–a process that isn’t really finished ’til you’re out of puberty3.
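That “several orders of magnitude” claim can be sanity-checked with rough numbers. Every figure below is an illustrative assumption (genome size, two bits per base pair, Purkinje-scale connectivity as a ceiling), not a measurement:

```python
import math

# How much the genome could hold, at two bits per base pair:
base_pairs = 3_200_000_000    # ~3.2 billion bp in the human genome
genome_bits = base_pairs * 2  # 4 possible bases -> 2 bits each

# How much a full wiring map would need: each connection must at
# least name its target neuron.
neurons = 100_000_000_000
connections = neurons * 200_000  # upper-bound connectivity per cell
bits_per_connection = math.ceil(math.log2(neurons))  # 37 bits to name a target

map_bits = connections * bits_per_connection
print(map_bits / genome_bits)  # ~1.2e8: roughly eight orders of magnitude short
```

Even under these generous simplifications, the genome falls short of the wiring map by a factor of about a hundred million, which is why the connections have to be wired up dynamically rather than spelled out in the blueprint.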

Which is very depressing, when you consider just how valuable that model will be. And makes my sweeties all the more amazing, I think.


1 American billion (1×10⁹), not British billion (1×10¹²).

2 Well, not really randomly, but not deterministically according to a blueprint either. Each nerve cell sends out dendrites, which hook up with whatever nearby nerve cells they happen to hook up with–a neuron that fails to hook up to any other neurons typically dies. The direction and number of dendrites are determined, in general ways, by your genes, but the specific connections that get made are not. And these connections remain dynamic throughout your entire life; long term memory, for example, appears to be encoded in patterns of connections.

3 Interestingly, most of the late-stage development, that takes place during and just after puberty, is inhibitory. Kinda explains a lot, doncha think?

“I don’t care what you are. I care what you DID.”

So last night I was reading my friends list, and ran into the video I’ve posted below on drjon’s journal.

Now, this video is about racism, but touches on a really important idea that I think extends way, way beyond conversations about race. On the subject of racism itself, I have little to add beyond what the video already says, so I’ll leave that alone.

The video is by a guy who calls himself Jay Smooth. He has a Web site and a YouTube channel, and he’s articulate and smart and funny and before you know it I’d been sucked down the Intertubes and had wasted two hours watching all his stuff.

So thanks, drjon, that’s two hours I’ll never have back.

Anyway, the video is short and is worth watching, and I’ll put it here so you can see what I’m talking about before I move on to the point that extends beyond racism and race.

The distinction between “what he did” and “what he is” is important. It’s something that trips us up as human beings all the time. It’s the thin edge of the wedge that leads to mind-reading behavior, false assumptions, broken expectations, and all manner of other ills that plague us. And it’s a really, really easy mistake to make.


Human beings are a storytelling species. We tell ourselves stories all the time, every day, without even being aware of it. These stories help us to try to make sense of the actions of other people. Indeed, we even invent stories that we tell ourselves in order to explain our own behavior, as vividly illustrated in one famous series of studies of people whose corpus callosum had been split.

A quick recap for folks who are not neurology geeks: The corpus callosum is a thick bundle of nerves that connects the left and right hemispheres of the brain. If this is damaged or cut, as used to be done to treat a certain kind of epilepsy, the hemispheres can’t communicate directly with each other. Each hemisphere controls one-half of the body and sees one-half of the visual field, but language usually exists only in one hemisphere, not both; when the corpus callosum is cut, it’s almost like you have two different brains in one body, but only one of the two can talk.

Scientists have had a ball studying folks like this; it’s great fun. One common experiment involved showing things designed to provoke a reaction to the right hemisphere, which usually lacks language, then asking the person why he was reacting the way he did; the left hemisphere had no clue what the right hemisphere was seeing, but the person would nevertheless offer up all kinds of stories to explain his reaction. An even better experiment involved showing different images to the two hemispheres, such as a snowbank to the right hemisphere and a chicken to the left hemisphere, and then asking the person to point with his left hand at an object relevant to the thing he was seeing. The right hemisphere controls the left hand, so the right hemisphere, which was seeing an image of a snow bank, would point to a snow shovel. The left hemisphere, which was seeing a chicken, had absolutely not the foggiest idea why he was pointing to the shovel, but when he was asked “Why did you point to a shovel?” he’d say “Well, because I see a chicken, and you need to use a shovel to clean up chicken manure.”

In other words, he invented a story that was total fabrication to explain his own actions, without even being aware that he was inventing a story.


We all do this, all the time, and unless we guard against it, it can really distort our perceptions of other people. Every time we say “So-and-so did this because so-and-so is a ___”, we’re falling into this trap.

The fact is, unless we are mind readers (or unless someone actually explicitly says why he did something), our stories about other people’s motivations are just that–stories. We fabricate these stories based on our own projections and our own ideas.

Worse, we’re not even fair about it.

In the book How We Know What Isn’t So: The Fallibility of Human Reason in Everyday Life, Thomas Gilovich talks about the self-serving nature of the stories we tell. Social psychologists love this stuff, and (naturally) have done a number of experiments illustrating it, by asking people who’ve done something why they did it, and then asking observers watching someone else do the exact same thing why that person did it.

Invariably, people will offer situational explanations for their own behavior–“I did it because of the situation I was in”–but will offer personal explanations for other people’s behavior–“He did it because he is a worthless, good-for-nothing bastard who doesn’t care about me.”

For example, we’ve all cut someone off in traffic, and we’ve all seen someone cut us off in traffic. If you ask a person “Why did you just cut that guy off?” the person will probably offer you a situational explanation, like “The sun was in my eyes, and all the glare on the windshield made it impossible for me to see him.” But if you ask that exact same person “Why did that driver just cut you off in traffic?” that person will probably say “Because he is a reckless, careless idiot who doesn’t give a damn about anyone else on the road.”

In other words, to get back to the video, people don’t talk about what that other driver did, they talk about what that other driver is.


That’s a dangerous road to walk down, talking about what other people are. Projections of the motivations of others can get you in trouble fast.

But we do it all the time. And it’s not just with other drivers; we do it in politics, in relationships, everywhere.

“You voted for McCain because you’re a religious zealot who wants to see the government overthrown and replaced with a totalitarian militant theocracy.” “Oh, yeah? You voted for Obama because you’re an anti-capitalist tree-hugger who wants to destroy private enterprise!” This is what happens when we think we can tell what people are by looking only at what they did, and it’s an embarrassment.

Now, yes, there are right-wing religious zealots who want to overthrow the American government and replace it with a theocracy, and they probably did vote for McCain. And there are anti-capitalist left-wingers who want to destroy free enterprise, and they probably voted for Obama. But assuming that you can peek into someone’s head and ascertain their motives just from their vote is kinda silly. Especially when you yourself had much more rational reasons for whatever vote you cast, right?

The sun was in your eyes, but that other guy is a jerk. Same thing.


My sweetie figmentj and I even talked about this recently. It can be very difficult to separate what a person does from what that person is even when that person is a close friend or a lover, and failing to do so can cause a lot of unnecessary pain. “You don’t call me because you are indifferent to me” is very different from “you don’t call me because you don’t like talking on the phone,” and the former is much more hurtful than the latter. While it’s true that a person’s priorities are often reflected in their behavior, and it’s also true that a person who doesn’t care about you is in fact unlikely to call, there’s a long leap from that to “because you didn’t call, you don’t care.” (In fact, the train of thought that goes “A person who doesn’t care about me won’t call me; you are a person who doesn’t call me; ergo, you don’t care about me” is a problem in its own right, because it commits the fallacy of affirming the consequent. Devilishly slippery, this stuff is.)
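A cheap way to see why affirming the consequent fails is to enumerate every truth assignment and hunt for a case where the premises hold but the conclusion doesn’t. A tiny sketch in Python (the variable names are my own, obviously, not standard logic notation):

```python
from itertools import product

def implies(p, q):
    # material implication: p -> q is false only when p is true and q is false
    return (not p) or q

# Premise 1: if you don't care, you won't call.
# Premise 2: you didn't call.
# Claimed conclusion: you don't care.
# Search all truth assignments for one where both premises hold
# but the conclusion fails.
counterexamples = [
    (care, call)
    for care, call in product([True, False], repeat=2)
    if implies(not care, not call) and not call and not (not care)
]
print(counterexamples)  # [(True, False)]: cares, but doesn't call anyway
```

The one counterexample is exactly the phone-hating lover: someone who cares but doesn’t call. The premises are perfectly consistent with that case, which is why the inference is invalid.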

And it presents itself in other ways, too. “My lover just checked out that hottie who walked into the store. That means my lover is a faithless bastard who doesn’t really love me!” The stories we tell sometimes say more about our own internal fears and insecurities than about the person we’re telling them about.

So, yeah. It’s about what people do, not about what people are. And if you want to change what people do, the best way to do this is to keep the conversation away from what they are.

Biochemistry and sex…and hey, multiple orgasms!

A few days ago, someone on my flist posted something that had a casual mention of a drug that is used to cause lactation. I don’t remember who it was, or what the post was actually about, see, but I ended up getting sucked down the Intertubes because of it, and it was some hours before I re-surfaced in the middle of a lake many miles away.

Lactation in human beings is largely mediated by a hormone called, naturally enough, “prolactin.” But that’s not the interesting bit. The interesting bit is about sex.

This is prolactin: a hormone produced by human beings in the breast during breast feeding (where it causes the production of milk) and in the brain during orgasm. As is typical of many hormones, it serves double duty and has a number of different roles; evolutionary biology never starts with a clean slate, so we get hormones from one part of the body repurposed to do something completely different in another part of the body (and we also get fucked-up design nightmares like the knee…but I digress).

Its role in the brain is interesting: it’s what keeps you from wanting to fuck all the time.

When (most) people have an orgasm, there’s a drop in sexual arousal immediately afterward. There’s usually a refractory period, during which you can’t get off again, and there’s a generalized, overall decrease in libido. The length of time it lasts varies all over the map; for some folks it’s a few minutes, for other folks it’s the rest of the day, or at least until the rerun of “Buffy the Vampire Slayer” is over. Prolactin is the cause.

When it’s released in the brain during and after orgasm, the role of prolactin is to stomp all over your arousal like it was a narc at a biker rally. A while ago, a bunch of scientists far better at getting funded than I am worked out a way to get paid for watching people masturbate; they found some heroic volunteers, hooked them up to blood-sampling equipment, then monitored the levels of various hormones in their blood while the volunteers masturbated to orgasm. The experiment was repeated with volunteers who could experience multiple orgasms.

What they found, aside from the fact that getting paid to watch women masturbate is really hot, is that the production of prolactin is directly correlated to the post-orgasmic crash; the prolactin remains in the body for hours (or longer); while the level of prolactin is high, arousal is difficult or impossible; and people who have multiple orgasms don’t have this spike in prolactin in their blood after they get off.

All this, I already knew.


Being the transhumanist that I am, which is often just a way of saying being the pragmatist that I am, I’ve long thought that the easiest path to becoming multiply orgasmic would probably be to develop a drug that blocks the action of prolactin. Snap, job done. Take a pill, get off again and again and again and again. And then some more after that.

What I didn’t realize was that such drugs already exist.

So here I am, reading LJ, and I find a passing reference to a drug that induces lactation. Since I hadn’t heard of it before, I do what I always do with novel words or ideas–I consult the Oracle at Google.

The Oracle at Google is wise and all-knowing, but she can also be a temperamental and difficult oracle, for she often sows her information with the seeds of more things you didn’t know, which in turn lead to more things you didn’t know, and still more things you didn’t know, inducing you to submerge yourself in the waters of human knowledge and not come up for air until you’re reading about the history of Hadrian’s Wall when all you’d asked for was perhaps the best ways to trim a cat’s claws.

Anyway, lactation can be induced in women by means of drugs that enhance the action of prolactin, or that stimulate prolactin production. Lactation can also be prevented, naturally enough, by drugs which block the effects of prolactin, of which there are two, cabergoline and bromocriptine.

Now, there are a lot of other reasons why you might want to block prolactin, which have nothing to do with lactation. Excess prolactin is responsible for a number of other conditions; certain forms of pituitary disease cause excess levels of prolactin, which can lead to cancers, arthritis and other autoimmune diseases, and a whole host of other stuff you don’t want. So there’s a medical need for drugs that block prolactin.

As it turns out, there’s a relationship between prolactin and a completely different compound, the neurotransmitter dopamine. Dopamine also serves multiple functions. It’s the neurotransmitter that signals nerves in the voluntary motor centers of your brain; when you think about moving your arm, your motor centers produce dopamine, which is translated into the nerve impulses that make your arm actually move.

It’s also a key component of the so-called “reward center” of the brain that mediates feelings of pleasure; when you delight in anything from a beautiful painting to the knowledge that you’re getting paid to watch people masturbate, dopamine is the reason. And dopamine mediates much of the sexual system of the brain, including the functions that cause physical arousal.

Dopamine and prolactin are mutually antagonistic. Dopamine tends to inhibit the function and production of prolactin, and excess prolactin tends to inhibit the function of dopamine. For that reason, things that are antagonistic to prolactin tend to enhance the function or quantity of dopamine in the brain, and vice-versa.

Okay, so here’s where things get really cool.


There is a devastating disease called Parkinson’s disease which results in gradual, irreversible destruction of the dopamine-producing cells in the motor area of the brain, which leads to gradual, creeping paralysis. Because it’s caused by the loss of dopamine-producing cells, anything which acts to stimulate the production of dopamine in the brain will tend to reverse the paralysis, so dopamine-enhancing drugs are often used to treat Parkinson’s.

Now, as I’ve already mentioned, drugs that block prolactin tend to enhance dopamine, and vice versa. The drug bromocriptine is a prolactin antagonist and a dopamine agonist; for that reason, it’s often used to treat both Parkinson’s disease and certain pituitary disorders that cause excess prolactin production. The down side is that it has a number of fairly nasty side effects in some people, including such unpleasantness as psychosis.

Cabergoline is another drug that works the same way as bromocriptine; like bromocriptine, cabergoline is used to treat Parkinson’s disease and pituitary disease. It, too, blocks prolactin and enhances dopamine, and it has fewer nasty side effects.

One interesting side effect reported in both men and women being treated for things like Parkinson’s is multiple orgasms.

Which is a hell of a side effect, if you ask me.

In fact, cabergoline (and, to a lesser extent, bromocriptine) are sometimes prescribed off-label to counteract the sexual side effects of antidepressants (which modify the action of dopamine), and as treatments for sexual dysfunction.

So it turns out, as is often the case, that not only was I right in thinking that a prolactin-blocking drug might allow folks to have multiple orgasms, but that, as usual, other folks had already beaten me to the punch.

The moral lesson here is to be careful what you write about in your LiveJournal. The simple mention of an unfamiliar word can suck someone down into the bowels of the Internet for hours on end, and not only that, can spread viral-like through LiveJournal posts to other folks, who may get sucked down for hours on end plumbing the depths of biochemistry or stellar nucleosynthesis, as this post in shiva-kun‘s journal so aptly shows. In the interests of getting things done in the office, I hereby ask that all the folks on my friends list refrain from posting anything interesting, and instead confine themselves to discussions of reruns of “Friends” for the next three days, kay?

Some thoughts on complexity and human consciousness

A couple weeks ago, I decided to take out the trash. On the way to the trash can, I thought, “I should clean out the kitty litter.” Started to clean the litterbox, and thought, “No, actually, I should completely change the litter.” Started changing the litter, then realized that the cat had dragged some of it out on the floor. “Ah, I should get out the vacuum,” thought I.

Next thing you know, I’m totally cleaning the apartment, one end to the other.

On my way out to the dumpster, I started thinking about hourglasses. And that’s really what this post is about.


If you have ever watched the sand falling in an hourglass, you know how it goes. The sand in the bottom of the hourglass builds up and up and up, then collapses into a lower, wider pile; then as more sand streams down, it builds up and up and up again until it collapses again.

I don’t think any reasonable person would say that a pile of sand has consciousness or free will. It is a deterministic system; its behavior is not random at all, but is strictly determined by the immutable actions of physical law.

Yet in spite of that, it is not predictable. We cannot model the behavior of the sand streaming through the hourglass and predict exactly when each collapse will happen.

This illustrates a very interesting point; even the behavior of a simple system governed by only a few simple rules can be, at least to some extent, unpredictable. We can tell what the sand won’t do–it won’t suddenly start falling up, or invade France–but we can’t predict past a certain limit of resolution what it will do, in spite of the fact that everything it does is deterministic.

The cascading sequence of events that started with “I should take out the trash” and ended with cleaning the apartment felt like a sudden, unexpected collapse of my own internal motivational pile of sand. And that led, as I carried bags of trash out to the dumpster, to thoughts of unpredictable deterministic systems, and human behavior.


The sand pouring through the hourglass is a chaotic system: completely deterministic, yet exhibiting very complex behavior that is exquisitely sensitive to initial conditions. If you take just one of the grains of sand out of the pile forming in the bottom of the hourglass, flip it upside down, and put it back where it was, the sand will now have a different pattern of collapses. There’s absolutely no randomness to it, yet we can’t predict it, because predicting it requires modeling every single action of every single individual grain, and if you change just one grain of sand just the tiniest bit, the entire system changes.
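That sensitivity is easy to demonstrate with a toy sandpile, a minimal sketch in the spirit of the Bak–Tang–Wiesenfeld sandpile model (the grid size, toppling threshold, and drop sequence here are arbitrary choices of mine): drop the same sequence of grains onto two piles that differ by a single starting grain, and their avalanche histories diverge.

```python
# Toy sandpile: drop grains one at a time onto a small grid. Any cell
# holding 4+ grains "topples," sending one grain to each neighbor
# (grains falling off the edge are lost). The rules are completely
# deterministic, yet one extra starting grain changes the whole
# sequence of avalanche sizes that follows.
def run(drops, size=5, seed_cell=None):
    grid = [[0] * size for _ in range(size)]
    if seed_cell is not None:
        grid[seed_cell[0]][seed_cell[1]] += 1  # the single extra grain
    avalanches = []
    for (r, c) in drops:
        grid[r][c] += 1
        toppled = 0
        unstable = [(r, c)]
        while unstable:
            i, j = unstable.pop()
            while grid[i][j] >= 4:
                grid[i][j] -= 4
                toppled += 1
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < size and 0 <= nj < size:
                        grid[ni][nj] += 1
                        if grid[ni][nj] >= 4:
                            unstable.append((ni, nj))
        avalanches.append(toppled)  # avalanche size for this drop
    return avalanches

# The same deterministic drop sequence, run twice: once on an empty
# grid, once on a grid with one extra grain at the center.
drops = [(i % 5, (i // 5) % 5) for i in range(100)]
a = run(drops)                     # baseline pile
b = run(drops, seed_cell=(2, 2))   # identical drops, one extra grain
print(a == b)  # False: one grain rewrites the avalanche history
```

No randomness anywhere in that code, but the two histories still part ways the moment the seeded cell reaches its toppling threshold one drop early.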

Now, the human brain is an extraordinarily complex system, much more complex both structurally and organizationally than a pile of sand, and subject to more complex laws. It’s also reflexive; a brain can store information, and its future behavior can be influenced not only by its state and the state of the environment it’s in, but also by the stored memories of past behavior.

So it’s no surprise that human behavior is complex and often unpredictable. But is it deterministic? Do we actually have free will, or is our behavior entirely determined by the operation of immutable natural law, with neither randomness nor deviation from the single path that law dictates?

We really like to believe that we have free will, and that our behavior is subject to personal choice. But is it?


In the past, some Protestant denominations believed in predestination, the notion that our lives and our choices were all determined in advance by an omniscient and omnipotent god, who made our decisions for us and then cast us into hell when those decisions were not the right ones. (The Calvinist joy in the notion that some folks were predestined to go to hell was somewhat tempered by their belief that some folks were destined to go to heaven, but on the whole they took great delight in the idea of a fiery pit awaiting the bulk of humanity.)

The kind of determinism I’m talking about here is very different. I’m not suggesting that our paths are laid out before us in advance, and certainly not that they are dictated by an outside supernatural agency; rather, what I’m saying is that we may be deterministic state machines. Fearsomely complicated, reflexive deterministic state machines that interact with the outside world and with each other in mind-bogglingly complex ways, and are influenced by the most subtle and tiny of conditions, but deterministic state machines nonetheless. We don’t actually make choices of free will; free will appears to emerge from our behavior because that behavior is so complex and in many ways so unpredictable, but the appearance is an illusion.

An uncomfortable idea, and one that many people will no doubt find quite difficult to swallow.

We feel like we have free will. We feel like we make choices. And more than that, we feel as if the central core of ourselves, our stream of consciousness, is not dependent on our physical bodies, but comes from somewhere outside ourselves–a feeling which is all the more seductive because it offers us a way to believe in our own immortality and calm the fear of death. And anything which does that is an attractive idea indeed.

But is it true?


Some folks try to develop a way to believe that our behavior is not deterministic without resorting to the external or the supernatural. Mathematician Roger Penrose, for example, argues that consciousness is inherently dependent on quantum mechanics, and quantum mechanics is inherently non-deterministic. (I personally believe that his arguments amount to little more than half-baked handwaving, and that he has utterly failed to make a convincing, or even a plausible, argument in favor of any mechanism whatsoever linking self-awareness to quantum mechanics. To me, his arguments seem to come down to “I really, really, really, really want to believe that human beings are not deterministic, but I don’t believe in souls. See! Look over there! Quantum mechanics! Quantum mechanics! Chewbacca is a Wookie!” But that’s neither here nor there.)

Am I saying that the whole of human behavior is absolutely deterministic? No; there’s not (yet) enough evidence to support such an absolute claim. I am, however, saying that one argument often used to support the existence of free will–the fact that human beings sometimes behave in surprising and unexpected ways that are not predictable–is not a valid argument. A system, even a simple system, can behave in surprising and unpredictable ways and still be entirely deterministic.


Ultimately, it does not really matter whether human behavior is deterministic or the result of free will. In many cases, humans seem to be happier, and certainly human society seems to function better, if we take the notion of free will for granted. In fact, an argument can be made that social systems depend for their effectiveness on the premise that human beings have free will; without that premise, ideas of legal accountability don’t make sense. So regardless of whether our behavior is deterministic or not, we need to believe that it is not in order for the legal systems we have made to be effective in influencing our behavior in ways that make our societies operate more smoothly.

But regardless of whether it’s important on a personal or a social level, I think the question is very interesting. And I do tend to believe that all the available evidence does point toward our behavior being deterministic.

And yes, this is the kind of shit that goes on in my head when I take out the trash. In fact, that’s a little taste of what it’s like to live inside my head all the time. I had a similar long chain of musings and introspections when I walked out to my car and saw it covered with pollen, which I will perhaps save for another post.

Your daily dose of teh ky00t

This is Liam.

Liam is cursed with that same irresistible urge that gave us hairless naked apes the iPod, the steam engine, and nearly complete domination over all the earth: curiosity. If I place a box anywhere in my apartment, even if it’s simply a bottled water box that is set to go out with the trash, Liam will not rest until he has been over, under, around, and through it. He’s compelled, you see. He loves novelty, and he wants to know what it’s all about.

He’ll usually sleep in any box I put on or near the floor, at least for a few days. When it ceases to be novel and interesting, he grows tired of it and returns to sleeping at the foot of the bed with me. Like us naked apes, he’s curious and also fickle in his attentions.


Curiosity is a pretty sophisticated trait for an animal whose brain is smaller than my fist and not very wrinkly. In terms of raw processing power, a dozen Liams put together would compare pretty poorly to an IBM Blue Gene/L supercomputer, a much more computationally powerful, yet singularly uncurious, piece of equipment.

Liam is actually pretty sophisticated in many of his behaviors. A couple weeks ago, he made a face at me.

It happened while I was eating frozen TV dinner apples. Microwave baked apples are tasty and delicious, and I make a point to eat them regularly. Five minutes in the microwave and you can have a small black plastic tray of bliss.

So there I was, sitting by my desk playing World of Warcraft and eating microwave baked apples, and Liam hopped up onto the desk and, brazen as you please, reached into my black plastic tray of bliss with his paw, hooked out a small piece of apple, brought it up to his nose, sniffed it suspiciously, licked it, and made a face at me. He shook the apple off his paw in disgust and wrinkled his nose at me.

Then he watched me eating the apples for several minutes, stole another bit of apple, sniffed at it even more suspiciously, and made another face at me.

There are many ways one might respond to this. One might say “Aww! How cute!” (And really, it was.) One might say “Hey! That’s my food! Don’t put your paw in that!” (And really, I did, though I knew even as I said it that it was pointless–an exercise more for my benefit than for the cat’s. We naked walking monkeys are kind of insecure in our position that way.) One might push the cat off the desk sternly. (And really, I didn’t have the heart to, because I dote on the cat so. A pushover, I am.)

Or, if one’s inclination runs that way, one might sit back and ponder the surprising degree of cognitive prowess the cat possesses.

I mean, seriously, think about it.

The cat recognized that I was eating something. We take that for granted, but there’s a lot of intellectual horsepower being brought to bear on a task of that sort. First, it means that he was able to map a projection of himself onto a projection of me well enough to be able to determine what kind of activity I was engaged in, and to recognize that it’s an activity he also engages in, despite great physical dissimilarities between us. That, at its foundation, means he was able to recognize the difference between himself and the rest of the world, and to recognize that some things in the world are more like him than other things in the world, to recognize those things when he sees them, and to recognize patterns of behavior common to him and me even as he recognized that I am distinct from him. Human babies take rather a long time to sort all this out.

Then, he was able to make an inference–namely, that what I was eating might be something he would like to eat as well. He made this inference in the absence of other cues, such as smell; he is, after all, a carnivore, and he is uninterested in a tray of baked apples just sitting by itself. (I know; I tried. What can I say? I was curious, too. He probably thinks they smell like rotting plant matter.)

When he made this inference, he was able to formulate and then implement a plan of action, which shows at least a very limited ability to plan, even if only in a simple way.

When he obtained a piece of apple and decided it was just as revolting as it smells, he was then faced with a conundrum; this stuff was revolting, but clearly I was eating it (and with great gusto and no small amount of satisfaction, I might add). So he was willing to re-evaluate his original decision, and put it to the test again–something, the cynic in me begs to point out, that appears beyond the cognitive grasp of many people I know.


A couple of weeks ago, in a repeat of the I am not Sir Edmund fucking Hillary debacle that left me stranded on the balcony with a rope in my hand, Shelly went onto the porch to do some tidying up and the door locked behind her, trapping her until I came home for lunch.

Liam, in another example of cognitive dexterity (the only kind he has, I fear, as he is a stunningly clumsy cat), recognized that she was trapped, and became highly distressed and agitated. That shows empathy–the ability to map himself onto her and to respond as if he was the one in the distressing situation. He also knew that the door’s latch was to blame, and pawed and batted at it in a charming but unsuccessful bid to release her. Lack of opposable thumbs, and all that.


A Blue Gene/L system has, at very rough estimation, approximately the same processing power as a human brain. The Blue Gene/P supercomputer, currently in development, will well and truly trounce human beings in terms of processing ability. However, the architecture is very, very different. Modern computers are just really big, really complex von Neumann machines, bound by the fact that the processing and memory are distinct entities which interact with one another in a series of discrete state changes.

A brain cell can roughly be mapped onto a transistor in the sense that it has only two discrete states, “firing” and “not firing,” but the architectural similarities pretty much end there.

Still, they are both finite state machines with memory, handwaving and nattering of Roger Penrose aside. And it is a basic result of automata and formal language theory, which I will leave as an exercise for the reader to explore further, that a universal Turing machine–a finite state machine equipped with unbounded memory–can emulate any other Turing machine.

Which means that, given sufficient cleverness on our parts, it should be possible to take these wonderful brains of ours and emulate them in these crude computers of ours, without loss of fidelity.
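The universality idea is easy to make concrete with a toy: a Python program (itself running on one kind of machine) emulating a Turing machine that is described purely as data. The interpreter and the little increment machine below are my own illustrative sketch, not anything canonical–the machine here adds 1 to a binary number by scanning right, then carrying left.

```python
def run_tm(delta, tape, state="right", blank=" ", halt="done", max_steps=10_000):
    """Emulate a Turing machine given as a transition table `delta`:
    (state, symbol) -> (new_state, symbol_to_write, head_move)."""
    tape = dict(enumerate(tape))  # sparse tape, indexed by cell position
    head = 0
    for _ in range(max_steps):
        if state == halt:
            break
        sym = tape.get(head, blank)
        state, write, move = delta[(state, sym)]
        tape[head] = write
        head += 1 if move == "R" else -1
    cells = [tape.get(i, blank) for i in range(min(tape), max(tape) + 1)]
    return "".join(cells).strip()

# A binary-increment machine: scan to the right end of the number,
# then move left turning trailing 1s into 0s until a 0 (or the edge
# of the number) can be turned into a 1.
INCREMENT = {
    ("right", "0"): ("right", "0", "R"),
    ("right", "1"): ("right", "1", "R"),
    ("right", " "): ("carry", " ", "L"),
    ("carry", "1"): ("carry", "0", "L"),
    ("carry", "0"): ("done",  "1", "L"),
    ("carry", " "): ("done",  "1", "L"),  # overflow grows the number
}

print(run_tm(INCREMENT, "1011"))  # 1100
print(run_tm(INCREMENT, "111"))   # 1000
```

Swap in a different transition table and the same dozen lines of interpreter run a different machine–which is the whole point: one machine’s behavior captured as data that another machine executes.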

Handwaving and nattering of Roger Penrose aside. (“Look! Consciousness is a quantum phenomenon! I don’t know anything about quantum physics, neurophysiology, consciousness, or cognitive science, but consciousness is a quantum phenomenon! I have no proof of this, so watch as I wave my hands!” But I digress.)

And, of course, when you emulate one kind of machine (yes, I said it, brains are machines, deal with it) on another kind of machine, if the host is sufficiently faster than the machine being emulated, the emulation runs faster than the real thing.

Chew on that for a while.


I love Liam. He’s very sweet, and he is a constant little reminder in my life of figment_j. I continue to be impressed by the range of cognitive flexibility we take for granted, even in relatively unsophisticated animals, and I can hardly wait until we start building machines which can exhibit the same kind of cognitive skills.

We’re not there yet, but we will be soon. When IBM makes a supercomputer that has Liam’s level of cognitive prowess, the Singularity will well and truly be nigh.

Why we believe what we believe, and why that makes us gullible

Just how deep do you believe?
Will you bite the hand that feeds?
Will you chew until it bleeds?
Can you get up off your knees?
Are you brave enough to see?
Do you want to change it?

What is the purpose of the human brain? What function does it serve? Be careful; this is a trick question!

If you say “The brain is an organ of thought” or “The brain is an instrument of knowledge” or “The brain is the way we understand the world,” that’s the wrong answer. The correct answer is that the brain is an organ of survival. We have these big brains because they enabled our ancestors to survive; in that sense, they are no different from claws or fur or fangs.

And like all organs of survival, the brain was shaped by natural selection, sculpted by evolutionary pressures that favored the traits that helped our ancestors survive. The big brains we have now were molded and shaped to one purpose: to help small bands of hunter-gatherers survive.


Back in the day, when we rarely lived longer than 20 or 25 years and starvation battled with predation by other large carnivores for the number one spot in “things that killed human beings,” our brains gave us a competitive advantage. They did this in part by acting as engines of belief, allowing us to form models of the world and create beliefs about the world that gave us an advantage.

For example, an early human who observed that if he was upwind of his prey, the prey got away, but if he was downwind of his prey, he could more easily kill it, formed a belief: “Staying downwind from the prey makes it more likely that the prey will not escape.”

Of course, other animals know these things instinctively. But the advantage of our big monkey brains is that we do not have to rely on instinct; we can form beliefs on the fly, as we go along, which means we can function in environments our instincts are not prepared to deal with. The brain as an organ of survival allows us to make observations and draw beliefs from these observations, and these beliefs give us a competitive advantage.


These beliefs can be immediate and concrete, such as “If I stick my hand in the fire, it will hurt.” They can make predictions about the future, such as “The sun will rise tomorrow” or “If the days grow longer and the weather grows colder, then winter is coming, and food is about to become less plentiful.” A belief can be negative, such as “If I leap from the top of this tree, I will not be able to fly.”

Having a brain optimized for forming beliefs is important if forming beliefs is your survival schtick. If you think of the brain as a belief engine, which can either believe something or disbelieve it, and if you think of a particular belief as being true or false, it is easy to construct a game theory matrix describing all the possibilities, with two success modes and two failure modes:

                   Belief is true       Belief is false
  Believe it       Success              Failure (credulity)
  Disbelieve it    Failure (denial)     Success

Ideally, our brains lead us to believe things that are true, such as “A large leopard is a dangerous adversary,” and to disbelieve things that are not true, such as “I can eat rocks.” But there are two failure conditions as well: rejecting beliefs that are true, and accepting beliefs that are not.


The failure conditions have survival implications. Believing untrue things and not believing true things can both lead to disaster.

Of the two, though, believing untrue things will, in a small group of hunter-gatherers, usually cause fewer problems than not believing true things. Believing that dancing in circles three times and carrying a magic stone around with you will increase the chances of a successful hunt doesn’t really hurt anything; not believing that staying downwind from your prey is important has a significant survival penalty attached to it.

There’s a strong survival imperative, in other words, to prefer failure by believing something untrue over failure by not believing something that is true. Believing is less expensive than not believing. If a primitive hunter-gatherer eats an unfamiliar food, then becomes sick, it might not be the food that caused him to get sick–but if he believes the food made him sick, and he’s wrong, the consequences are not too great, whereas if he does not believe the food made him sick, and he’s wrong, the consequences can be deadly. The guy who ate some food, got sick, and believed the food made him sick is the guy who survived; today, his descendants give their kids a measles vaccination, and when coincidentally their kids are diagnosed with autism, believe that the measles vaccination caused the autism.

From a survival standpoint, the consequences of not believing something true are worse than the consequences of believing something that is not true. Natural selection, therefore, tends to select in favor of people whose default state is to believe something rather than in favor of people whose default state is to disbelieve something.
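The asymmetry can be made concrete with a toy expected-cost calculation. The sketch below is entirely hypothetical: the costs and probabilities are invented numbers chosen only to illustrate the logic of the argument, not measurements of anything.

```python
# A toy expected-cost model of the two failure modes. All numbers are
# hypothetical, picked only to illustrate the asymmetry described above.

# Cost of believing something false (e.g. carrying a useless magic stone):
COST_FALSE_POSITIVE = 1
# Cost of rejecting something true (e.g. ignoring the downwind rule):
COST_FALSE_NEGATIVE = 100

def expected_cost(p_believe, p_claim_true=0.5):
    """Expected cost for an agent who accepts new claims with probability
    p_believe, when any given claim is true with probability p_claim_true."""
    missed_truths = (1 - p_believe) * p_claim_true * COST_FALSE_NEGATIVE
    credulity_tax = p_believe * (1 - p_claim_true) * COST_FALSE_POSITIVE
    return missed_truths + credulity_tax

skeptic = expected_cost(p_believe=0.2)   # mostly disbelieves
believer = expected_cost(p_believe=0.9)  # mostly believes
print(skeptic, believer)
```

Under these assumed costs the credulous strategy wins by a wide margin, which is the selection pressure the text describes: whenever disbelieving a truth is far more expensive than believing a falsehood, defaulting to belief is the cheaper strategy.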

And to confound matters further, humans are social animals. In our earliest days, when our social groups tended to number fifty or a hundred people and leopards were a serious and ongoing threat, to live alone was a death sentence. We depended on the support of others to survive.

But that support had a price. Groups, like individuals, form beliefs. To reject the beliefs of your group was to risk ostracism and death. People who questioned and challenged the beliefs of their tribe often did not survive to pass on their genes to future generations; the ones that were most likely to pass along their genes were the ones who learned to believe what the group believed, even if it was contradicted by clear and available evidence.

And those who were adept at manipulating the belief engines of others–shamans, tribal rulers who convinced others of their divine right to rule–tended to be disproportionately successful at mating and tended to control a disproportionate amount of resources, meaning they tended to pass on their genes most successfully.


The greatest invention of the human mind is not fire, or agriculture, or iron, or the steam engine, or even the splitting of the atom. From the perspective of understanding the physical world, the greatest invention of the human mind is the scientific method–the systematic, skeptical approach to claims about the way the world works.

When a scientist has an idea, he does not believe it, and he does not seek to prove it. Instead, he approaches it skeptically, and he seeks to disprove it. The more the idea resists increasingly sophisticated and rigorous attempts to disprove it, the more confidence he begins to place in it. This is why any idea that is not falsifiable is not science.

A corollary of this idea is the notion that physical reality behaves the same way everywhere, for everyone. If a brick falls when it is dropped in Kansas, it also falls when it is dropped in Salt Lake City–and, importantly, it falls no matter who drops it, whether the person who drops it believes that it will fall or not. The physical world does not change itself to conform to human wishes and expectations. A claim that is made about some process that must be believed in order to be seen, such as ESP, is not science.


But skepticism is not innate. It is learned. The human brain has been shaped by natural selection not to be skeptical. It has been shaped by evolutionary pressure into a belief engine that believes things more easily than it disbelieves things. For our ancestors, the penalty for skepticism was very high; those early hominids for whom skepticism came naturally did not live long enough to pass on their genes to us. Our brains evolved to be gullible, not skeptical.


Today, we live in a cognitive and physical environment very different from that of our ancestors. But the machinery of natural selection is slow.

In the modern world, the same four states of our belief engines still apply. We are still predisposed to believe things rather than disbelieve them; and we can still believe things that are true, disbelieve things that are true, believe things that aren’t true, or disbelieve things that aren’t true:

Believing things that are true
  • Eating uncooked pork can make you sick
  • If you do not feed your pet dog, your dog will become unhappy, and eventually will die
  • Provoking a large predator may have serious consequences
  • Falling from a great height may have serious consequences
  • A speeding car can not stop instantly
Believing things that are not true
  • A pill can make your penis grow bigger
  • There is a sea monster living in a small landlocked lake in Scotland
  • Atlantis was a lost continent possessed of fabulous technology
  • Space aliens abduct people and perform experiments on them
  • Republicans favor small government; Democrats favor big government
  • There is an invisible man living in the sky who will spank you if you have sex in the wrong position
Not believing things that are true
  • The Holocaust never happened
  • Vaccination does not protect from disease
  • NASA never went to the moon
  • Evolutionary processes did not create the variety of life we can observe on this planet
  • Viruses and bacteria do not cause disease
  • The world is not more than six thousand years old
  • Americans are not obligated to pay income tax
Not believing things that are untrue
  • The world is not flat
  • You can not fly no matter how fast you flap your arms
  • There is no jolly fat man at the North Pole who hands out gifts
  • Money does not grow on trees
  • Forwarding an email to all your friends will not get Bill Gates to give you money
  • Solar eclipses are not caused by gigantic marauding dragons swallowing the sun

What does this mean in practical terms? Simple. It means that your brain has been hard-wired over hundreds of thousands of years of natural selection to make you credulous. Look at the brain as an instrument of survival, look at natural selection creating pressures to prefer the failure mode of believing that which isn’t true over the failure mode of not believing that which is true, and you end up with people hard-wired from the ground up to be gullible.

Your brain is a tool of survival that works by acting as an engine for creating beliefs. When you form a belief, you get a little squirt of pleasure that lights up the reward circuit of your brain. You’re emotionally rewarded every time you believe something.

At the same time, skepticism and rational, analytical thought do not come naturally. They’re not what your brain was optimized for; because of that, they are skills which must be learned, not innate. In fact, they feel unnatural and uncomfortable to you. Your brain gives you a reward for accepting beliefs, not for challenging them.


There is good news, however. When you introduce sapience into the mix, things change. Biology is not destiny. Your brain is optimized to make you gullible, but you do not need to be. You can train yourself to recognize that little squirt of pleasure you get when you believe something for what it is–a biological holdover from a time when adopting beliefs quickly and without skepticism had survival advantage. You can train yourself to be skeptical, even though it’s not natural for you.

And the rewards for doing so are great. In a modern world, where people want you to believe that they will transfer THE SUM OF $25,000,000 (TWENTY-FIVE MILLION US$) into your bank account from Nigeria if you give them your bank account information, where emails tell you that you need to update your credit card information or PayPal will shut you down, where people tell you that viruses and bacteria don’t cause disease and if you just order magic “balancing powder” ($360 for a 6-month supply) from their Web site you’ll never get sick, credulity is a survival disadvantage, and skepticism an advantage.

But it doesn’t come naturally. You have to work at it.

Quote of the Day

Courtesy of papertygre‘s quotefile:

“The brain is not an organ of thinking but an organ of survival, like claws and fangs. It is made in such a way as to make us accept as truth that which is only advantage. It is an exceptional, almost pathological constitution one has, if one follows thoughts logically through, regardless of consequences. Such people make martyrs, apostles, or scientists, and mostly end on the stake, or in a chair, electric or academic.”

—Albert Szent-Györgyi

Oh. My. God.
That. Is. The. Most. Brilliant. Observation. Ever.