Why We’re All Idiots: Credulity, Framing, and the Entrenchment Effect

The United States is unusual among First World nations in that we have only two political parties.

Well, technically, I suppose we have more, but only two that matter: Democrats and Republicans. They are popularly portrayed in American mass media as “liberals” and “conservatives,” though that’s not really true; in world terms, they’re actually “moderate conservatives” and “reactionaries.” A serious liberal political party doesn’t exist; when you compare the Democratic and Republican parties, you see a lot of across-the-board agreement on things like drug prohibition (both parties largely agree that recreational drug use should be outlawed), the use of American military might abroad, and so on.

A lot of folks mistakenly believe that this means there are no real differences between the two parties. This is nonsense, of course; there are significant differences, primarily in areas like religion (where the Democrats would, on a European scale, be called “conservatives” and the Republicans would be called “radicals”); social issues like sex and relationships (where the Democrats tend to be moderates and the Republicans tend to be far right); and economic policy (where Democrats tend to be center-right and Republicans tend to be so far right they can’t tie their left shoe).

Wherever you find people talking about politics, you find people calling the members of the opposing side “idiots.” Each side believes the other to be made up of morons and fools…and, to be fair, each side is right. We’re all idiots, and there are powerful psychological factors that make us idiots.


The fact that we think of Democrats as “liberal” and Republicans as “conservative” illustrates one area where Republicans are quite different from Democrats: their ability to frame issues.

The American political landscape for the last three years has been dominated by a great deal of shouting and screaming over health care reform.

And the sentence you just read shows how important framing is. Because, you see, we haven’t actually been discussing health care reform at all.

Despite all the screaming, and all the blogging, and all the hysterical foaming on talk radio, and all the arguments online, almost nobody has actually read the legislation that was, after much wailing and gnashing of teeth, signed into law by President Obama.

And if you do read it, there’s one thing about it that may jump to your attention: It isn’t about health care at all. It barely even talks about health care per se. It’s actually about health insurance. It provides a new framework for health insurance regulation, it restricts health insurance companies’ ability to deny coverage on the basis of pre-existing conditions, it seeks to make insurance more portable…in short, it is health insurance reform, not health care reform. The fact that everyone is talking about health care reform is a tribute to the power of framing.


In any discussion, the person who controls how the issue at question is shaped controls the debate. Control the framing and you can control how people think about it.

Talking about health care reform rather than health insurance reform leads to an image in people’s minds of the government going into a hospital operating room or a doctor’s exam room and telling the doctor what to do. Talking about health insurance reform gives rise to mental images of government beancounters arguing with health insurance beancounters about the proper way to notate an exemption to the requirements for filing a release of benefits form–a much less emotionally compelling image.

Simply by re-casting “health insurance reform” as “health care reform,” the Republicans created the emotional landscape on which the war would be fought. Middle-class working Americans would not swarm to the defense of the insurance industry and its über-rich executives. Recast it as government involvement between a doctor and a patient, however, and the tone changed.

Framing matters. Because people, by and large, vote their identity rather than their interests, if you can frame an issue in a way that appeals to a person’s sense of self, you can often get him to agree with you even if by agreeing with you he does harm to himself.

I know a woman who is an atheist, non-monogamous, bisexual single mom who supports gay marriage. In short, she hits just about every ticky-box in the list of things that “family values” Republicans hate. The current crop of Republican political candidates, all of them, have at one point or another voiced their opposition to each one of these things.

Yet she only votes Republican. Why? Because she says she believes, as the Republicans believe, that poor people should just get jobs instead of lazing about watching TV and sucking off hardworking taxpayers’ labor.

That’s the way we frame poverty in this country: poor people are poor because they are just too lazy to get a fucking job already.

That framing is extraordinarily powerful. It doesn’t matter that it has nothing to do with reality. According to the US Census Bureau, as of December 2011, 46,200,000 Americans (or 15.1% of the total population) lived in poverty. According to the US Department of Labor, 11.7% of the total US population had jobs but were still poor. In other words, the vast majority of poor people have jobs–especially when you consider that some of the people included in the Census Bureau’s statistics are children, and therefore not part of the labor force.

Framing the issue of poverty as “lazy people who won’t get a job” helps deflect attention away from the real causes of poverty, and also serves as a technique to manipulate people into supporting positions and policies that act against their own interests.

But framing only works if you do it at the start. Revealing how someone has misleadingly framed a discussion after it has begun is not effective at changing people’s minds, because of a cognitive bias called the entrenchment effect.


A recurring image in US politics is the notion of the “welfare queen”–a hypothetical person, invariably black, who becomes wealthy by living on government subsidies. The popular notion has this black woman driving around the low-rent neighborhood in a Cadillac, which she bought by having dozens and dozens of babies so that she could receive welfare checks for each one.

The notion largely traces back to Ronald Reagan, who during his campaign in 1976 talked over and over (and over and over and over and over) about a woman in Chicago who used various aliases to get rich by scamming huge amounts of welfare payments from the government.

The problem is, this person didn’t exist. She was entirely, 100% fictional. The notion of a “welfare queen” doesn’t even make sense; having a lot of children but subsisting only on welfare doesn’t increase your standard of living, it lowers it. The extra benefits given to families with children do not entirely offset the costs of raising children.

Leaving aside the overt racism in the notion of the “welfare queen” (most welfare recipients are white, not black), a person who thinks of welfare recipients this way probably won’t change his mind no matter what the facts are. We all like to believe ourselves to be rational; we believe we have adopted our ideas because we’ve considered the available information rationally, and that if evidence that contradicts our ideas is presented, we will evaluate it rationally. But nothing could be further from the truth.

In 2006, two researchers at the University of Michigan, Brendan Nyhan and Jason Reifler, did a study in which they showed people phony studies or articles supporting something that the subjects believed. They then told the subjects that the articles were phony, and provided the subjects with evidence that showed that their beliefs were actually false.

The result: The subjects became even more convinced that their beliefs were true. In fact, the stronger the evidence, the more insistently the subjects clung to their false beliefs.

This effect, which is now referred to as the “entrenchment effect” or the “backfire effect,” is very common. A person who holds a belief and is shown hard physical evidence that the belief is false comes away with an even stronger conviction that it is true. The stronger the evidence, the more firmly the person holds on.

The entrenchment effect is a form of “motivated reasoning.” Generally speaking, what happens is that a person who is confronted with a piece of evidence showing that his beliefs are wrong will respond by mentally going through all the reasons he started holding that belief in the first place. The stronger the evidence, the more the person repeats his original line of reasoning. The more the person rehearses the original reasoning that led him to the incorrect belief, the more he believes it to be true.

This is especially true if the belief has some emotional charge. There is a part of the brain called the amygdala which is, among other things, a kind of “emotional memory center.” That’s a bit oversimplified, but essentially true; when you recall a memory that has an emotional charge, the amygdala mediates your recall of the emotion that goes along with the memory; you feel that emotion again. When you rehearse the reasons you first subscribed to your belief, you re-experience those emotions–reinforcing the belief and making it feel more compelling.

This isn’t just a right/left thing, either.

Say, for example, you’re afraid of nuclear power. A lot of people, particularly self-identified liberals, are. If you are presented with evidence that shows that nuclear power, in terms of human deaths per terawatt-hour of power produced, is by far the safest of all forms of power generation, it is unlikely to change your mind about the dangers of nuclear power one bit.

The most dangerous form of power generation is coal. In addition to killing tens of thousands of people a year, mostly because of air pollution, coal also releases quite a lot of radiation into the environment. That radiation comes from trace amounts of radioactive uranium and thorium (and their decay products) naturally present in the coal. Some of it goes up the smokestack; the rest concentrates in the ash. Coal plants consume so much coal–huge freight trains of it–that the fly ash left over from burning those millions of tons of coal carries more radioactivity into the surrounding environment than a nuclear plant producing the same amount of power. So many people die directly or indirectly as a result of coal-fired power generation that if we had a Chernobyl-sized meltdown every four years, it would STILL kill fewer people than coal.

If you’re afraid of nuclear power, that argument didn’t make a dent in your beliefs. You mentally went back over the reasons you’re afraid of nuclear power, and your amygdala reactivated your fear…which in turn prevented you from seriously considering the idea that nuclear might not be as dangerous as you feel it is.

If you’re afraid of socialism, then arguments about health reform won’t affect you. It won’t matter to you that health care reform is actually health insurance reform, or that the supposed “liberal” health care reform law was actually mostly written by Republicans (many of the health insurance reforms in the Federal package are modeled on the Massachusetts law signed by none other than Mitt Romney; the provisions expanding health coverage for children were written by Senator Orrin Hatch (R-Utah); and the expansion of the Medicare drug program was written by Representative Dennis Hastert (R-Illinois)), or that it’s about as Socialist as Goldman Sachs (the law does not nationalize hospitals, make doctors into government employees, or in any other way socialize the health care infrastructure). You will see this information, you will think about the things that originally led you to see the Republican health-insurance reform law as “socialized Obamacare,” and you’ll remember your emotional reaction while you do it.

Same goes for just about any argument with an emotional component–gun control, abortion, you name it.

This is why folks on both sides of the political divide think of one another as “idiots.” That person who opposes nuclear power? Obviously an idiot; only an idiot could so blindly ignore hard, solid evidence about the safety of nuclear power compared to any other form of power generation. Those people who hate Obamacare? Clearly they’re morons; how else could they cling to the nonsense that it was written by Democrats for the purpose of socializing medicine?

Clever framing allows us to be led to beliefs that we would otherwise not hold; once there, the entrenchment effect keeps us there. In that way, we are all idiots. Yes, even me. And you.

Science Literacy: Of Pickles and Probability

STUDY PROVES THAT PLACING A PICKLE ON YOUR NOSE GRANTS PSYCHIC POWERS

For immediate release: Scientists at the Max Planck Institute announced today that placing a pickle on your nose can improve telekinetic ability.

According to the researchers, they performed a study in which a volunteer was asked to place a pickle on her nose and then flip a coin to see whether or not the pickle would help her flip heads. The volunteer flipped the coin, which came up heads.

“This is a crowning achievement for our research,” the study’s authors said. “Our results show that having a pickle on your nose allows you to determine the outcome of a coin-toss.”

Let’s say you’re browsing the Internet one day, and you come across this report. Now, you’d probably think that there was something hinky about this experiment, right? We know intuitively that the odds of a coin toss coming up heads are about 50/50, so if someone puts a pickle on her nose and flips a coin, that doesn’t actually prove a damn thing. But we might not know exactly how that applies to studies that don’t involve flipping coins.


So let’s talk about our friend p. This is p.

p represents the probability that a scientific study’s results are a fluke. Formally, it’s the probability of getting results at least as extreme as the ones observed if the null hypothesis is true. In English, that basically means it tells you how likely results like these are even if whatever the study is trying to show doesn’t actually exist at all, in which case the study’s results don’t mean a damn thing.

Every experiment (or at least every experiment seeking to show a relationship between things) has a p value. In the nose-pickle experiment, the p value is 0.5, which means there is a 50% chance that the subject would flip heads even if there’s no connection between the pickle on her nose and the results of the experiment.
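To make that concrete, here’s a minimal Python sketch (my own illustration, not anything from the imaginary press release) that runs the one-flip pickle “study” over and over under the assumption that the pickle does nothing and the coin is fair:

```python
import random

# Simulate the one-flip pickle "study" many times under the null hypothesis:
# the pickle has no effect, and the coin is fair.
trials = 100_000
successes = sum(random.random() < 0.5 for _ in range(trials))

print(successes / trials)  # roughly 0.5: about half of the studies "succeed" by luck alone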

There’s a p value associated with any experiment. For example, if someone wanted to show that watching Richard Simmons on television caused birth defects, he might take two groups of pregnant ring-tailed lemurs and put them in front of two different TV sets, one of them showing Richard Simmons reruns and one of them showing reruns of Law & Order, to see if any of the lemurs had pups that were missing legs or had eyes in unlikely places or something.

But here’s the thing. There’s always a chance that a lemur pup will be born with a birth defect. It happens randomly.

So if one of the lemurs watching Richard Simmons had a pup with two tails, and the other group of lemurs had normal pups, that wouldn’t necessarily mean that watching Mr. Simmons caused birth defects. The p value of this experiment is related to the probability that one out of however many lemurs you have will randomly have a pup with a birth defect. As the number of lemurs gets bigger, the probability of one of them having a weird pup gets bigger. The experiment needs to account for that, and the researchers who interpret the results need to factor that into the analysis.
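To see how the group size changes things, here’s a small Python sketch. The 2% baseline defect rate is a made-up number, chosen purely for illustration:

```python
# Probability that at least one of n lemur litters shows a birth defect by chance alone,
# assuming a (hypothetical) 2% baseline rate of random defects.
def chance_of_at_least_one_defect(n_lemurs, baseline_rate=0.02):
    return 1 - (1 - baseline_rate) ** n_lemurs

for n in (1, 10, 50, 100):
    print(n, round(chance_of_at_least_one_defect(n), 3))
# 1 -> 0.02, 10 -> 0.183, 50 -> 0.636, 100 -> 0.867
```

With enough lemurs, a “weird pup” somewhere in the group is practically guaranteed by chance alone, which is exactly what the experiment’s p value has to account for.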


If you want to be able to evaluate whether or not some study that supposedly shows something or other is rubbish, you need to think about p. Most of the time, p is expressed as a “less than or equal to” thing, as in “This study’s p value is <= 0.005”. That means “We don’t know exactly what the p value is, but we know it can’t be greater than one half of one percent.”

A p value of 0.005 is pretty good; it means there’s only a 0.5% chance of getting results like these by pure luck. Obviously, the larger the p value, the more skeptical you should be of a study. A p value of 0.5, like with our pickle experiment, shows that the experiment is pretty much worthless.

There are a lot of ways to make an experiment’s p value smaller. With the pickle experiment, we could simply do more than one trial. As the number of coin tosses goes up, the odds of a particular result go down. If our subject flips a coin twice, the odds of getting heads twice in a row are 1 in 4, which gives us a p value of 0.25–still high enough that any reasonable person would call rubbish on a positive trial. More coin tosses give successively smaller p values; the p value of our simple experiment is given by 1/2^n, where n is the number of times we flip the coin.
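If you’d rather let the computer do the arithmetic, here’s a short Python sketch of that formula, plus the more general “at least k heads in n flips” case:

```python
from math import comb

def p_all_heads(n):
    """p value for flipping n heads in n tosses of a fair coin: 1 / 2**n."""
    return 1 / 2 ** n

def p_at_least(heads, flips):
    """More generally: probability of at least `heads` heads in `flips` fair tosses."""
    return sum(comb(flips, k) for k in range(heads, flips + 1)) / 2 ** flips

print(p_all_heads(1), p_all_heads(2), p_all_heads(8))  # 0.5, 0.25, ~0.0039
print(p_at_least(8, 10))                               # ~0.055: 8 heads out of 10 is far less impressive
```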


There’s more than just the p value to consider when evaluating a scientific study, of course. The study still needs to be properly constructed and controlled. Proper control groups are important for eliminating confirmation bias, which is a very powerful tendency for human beings to see what they expect to see and to remember evidence that supports their preconceptions while forgetting evidence which does not. And, naturally, the methodology has to be carefully implemented too. A lot goes into making a good experiment.

And even if the experiment is good, there’s more to deciding whether or not its conclusions are valid than looking at its p value. Most experiments are considered pretty good if they have a p value of .005, which means there’s a 1 in 200 chance that the results could be attributed to pure random chance.

That sounds like it’s a fairly good certainty, but consider this: That’s about the same as the odds of flipping heads on a coin 8 times in a row.

Now, if you were to flip a coin eight times, you’d probably be surprised if it landed on heads every single time.

But, if you were to flip a coin eight thousand times, it would be surprising if you didn’t get eight heads in a row somewhere in there.
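Don’t take my word for it; here’s a quick Python simulation (an illustrative sketch, nothing more) that flips eight thousand coins and looks for a run of eight heads:

```python
import random

def has_run_of_heads(n_flips, run_length=8):
    """Flip a fair coin n_flips times; return True if run_length heads ever occur in a row."""
    streak = 0
    for _ in range(n_flips):
        streak = streak + 1 if random.random() < 0.5 else 0
        if streak >= run_length:
            return True
    return False

trials = 1_000
hits = sum(has_run_of_heads(8_000) for _ in range(trials))
print(hits / trials)  # very close to 1.0: a run of eight heads is almost guaranteed in 8,000 flips
```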


One of the hallmarks of science is replicability. If something is true, it should be true no matter how many people run the experiment. Whenever an experiment is done, it’s never taken as gospel until other people also do it. (Well, to be fair, it’s never taken as gospel period; any scientific observation is only as good as the next data.)

So that means that experiments get repeated a lot. And when you do something a lot, sometimes, statistical anomalies come in. If you flip a coin enough times, you’re going to get eight heads in a row, sooner or later. If you do an experiment enough times, you’re going to get weird results, sooner or later.

So a low p value doesn’t necessarily mean that the results of an experiment are valid. In order to figure out if they’re valid or not, you need to replicate the experiment, and you need to look at ALL the results of ALL the trials. And if you see something weird, you need to be able to answer the question “Is this weird because something weird is actually going on, or is this weird because if you toss a coin enough times you’ll sometimes see weird runs?”

That’s where something called Bayesian analysis comes in handy.

I’m not going to get too much into it, because Bayesian analysis could easily make a post (or a book) of its own. In this context, the purpose of Bayesian analysis is to ask the question “Given the probability of something, and given how many times I’ve seen it, can what I’m seeing be put down to random chance without actually meaning squat?”

For example, if you flip a coin 50 times and you get a mix of 30 heads and 20 tails, Bayesian analysis can answer the question “Is this just a random statistical fluke, or is this coin weighted?”
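Here’s a minimal sketch of that kind of analysis in Python. It compares a “fair coin” hypothesis against a “weighted coin with completely unknown bias” hypothesis (a uniform prior on the bias); the setup is mine, chosen to keep the math simple, not something from a real study:

```python
from math import comb

# 30 heads in 50 flips: fluke, or weighted coin?
heads, flips = 30, 50

# Marginal likelihood of the data under each hypothesis.
likelihood_fair = comb(flips, heads) * 0.5 ** flips   # H0: heads probability is exactly 0.5
likelihood_weighted = 1 / (flips + 1)                 # H1: unknown bias; binomial averaged over a uniform prior

bayes_factor = likelihood_fair / likelihood_weighted
print(round(bayes_factor, 2))  # ~2.1: the data mildly favor "it's just a fair coin"
```

In other words, under this setup 30 heads out of 50 isn’t even weak evidence that the coin is loaded.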

When you evaluate a scientific study or a clinical trial, you can’t just take a single experiment in isolation, look at its p value, and decide that the results must be true. You also have to look at other similar experiments, examine their results, and see whether or not what you’re looking at is just a random artifact.


I ran into a real-world example of how this can fuck you up a while ago, when someone on a forum I belong to posted a link to an experiment that purported to show that feeding genetically modified corn to mice will cause health problems in their offspring. The results were (and still are) all over the Internet; fear of genetically modified food is quite rampant among some folks, especially on the political left.

The experiment had a p value of <= .005, meaning that if the null hypothesis is true (that is, there is no link between genetically modified corn and the health of mice), we could expect to see this result about one time in 200.

So it sounds like the result is pretty trustworthy…until you consider that literally thousands of similar experiments have been done, and they have shown no connection between genetically modified corn and ill health in test mice.

If an experiment’s p value is .005, and you do the experiment a thousand times, it’s not unexpected that you’d get 5 or 6 “positive” results even if the null hypothesis is true. This is part of the reason that replicability is important to science–no matter how low your p value may be, the results of a single experiment can never be conclusive.
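A little arithmetic makes the point. Here’s a Python sketch using that same 0.005 false-positive rate per run:

```python
from math import comb

# Probability of exactly k "positive" results in 1,000 runs of an experiment
# when the null hypothesis is true and each run has a 0.005 chance of a false positive.
def prob_k_positives(k, runs=1000, p=0.005):
    return comb(runs, k) * p ** k * (1 - p) ** (runs - k)

print(1 - prob_k_positives(0))  # ~0.993: at least one false positive is nearly certain
print(prob_k_positives(5))      # ~0.176: exactly five false positives is entirely ordinary
```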

A Taxonomy of Crackpot Ideas

Some time ago, when the anti-science, anti-evolution, religious literalist movie “Expelled” was making the rounds, it occurred to me that a strict 6-day, young-earth creationist idea of the world requires a particular confluence of perceptual filters in order to exist. There has to be an unquestioned acceptance of literalist religious dogma, a profound ignorance of some of the basic tenets of science, and a willingness to believe in a vast, orchestrated conspiracy on the part of all the world’s geologists, biologists, archaeologists, geneticists, and anthropologists in order for this notion to seem reasonable.

I’ve been chewing on that thought for a while, and looking at the perceptual filters that have to be in place to accept any number of implausible ideas, from moon hoaxers to lizard people conspiracy theories to anti-vaccinationism.

And, since making charts is something I do, I plotted some of these ideas in a Venn diagram that shows an overlapping set of prerequisites for a number of different flavors of nuttiness.


How to Tell when Something Isn’t Science

The process of science–the systematic, evidence-based, rigorous, controlled exploration of the processes of the natural world–has produced an explosion of knowledge and understanding. Since the Italian Renaissance and the Abbasid era in the Islamic world, both of which saw enormous gains in scientific thinking and with them huge leaps in technology and understanding, science has been the beacon of light shining in the darkness of superstition and ignorance.

So it’s probably not too surprising that many folks who seek to embrace all sorts of non-scientific ideas try to claim that their ideas are science. Calling these ideas “science” gives them a stamp of validation. If an idea is scientific, that means it has greater legitimacy in many people’s minds.

And the world needs to cut that shit out. Not all ideas are science, yet everything from phrenology to metaphysics to “crystal energy” tries to clamber onto the scientific bandwagon.

Most recently, the cry of the pseudoscientist has become “Quantum mechanics says!” Folks who can’t actually define what quantum mechanics is are nevertheless eager to fill New Age bookstores with books that claim to “prove” that quantum mechanics validates their ideas.

So here’s a handy-dandy, more-than-pocket-sized guide that will help you tell what science actually is and is not. Ready? Here we go!

RULE 1: If it doesn’t make a precisely defined, testable, falsifiable claim, it is not science.

This is the first and most basic premise of this whole “science” business. If someone claims “Science shows us that” or “Quantum mechanics proves that” and the next thing out of their mouth isn’t a testable, falsifiable claim, then what they’re saying is probably bollocks.
