The Apocalypse Is Coming! (…again)

In less than three weeks, the end of the world will happen.

Or, rather, in less than three weeks, a bunch of Mayan-prophecy doomsdayers will wake up and, if they have any grace at all, feel slightly sheepish.

The Mayan Long Count calendar is set to expire on December 21, or so it seems, and a lot of folks think this will signal the end of the world. They really, truly, sincerely believe it; some of them have even written to NASA with their concerns that a mysterious Planet X will smash into Earth on the designated date. (There seems to be some muddling of New Age thought here, as this “planet X,” sometimes called Nibiru, is a fixture amongst certain segments of the New Age population, its existence allegedly described in ancient Sumerian texts.)

It’s easy to dismiss these people as gullible crackpots, uneducated and foolish, unable to see how profoundly stupid their fears are. But I’m not so sure it’s that simple.

Apocalyptic fears are a fixture of the human condition. The Mayan doomsday nonsense is not the first such fearful prediction; it’s not even the first one to grab recent public attention. Harold Camping, an Evangelical Christian, predicted the end of the world on October 21, 2011…and also on May 21, 2011, September 7, 1994, and May 21, 1988. He got enough folks worked up about his 2011 predictions that many of his followers sold their belongings and caravanned across the country warning people of the impending Apocalypse.

These kinds of predictions have existed for, as near as I can tell, as long as human beings have had language. Pat Robertson has been in on the action, predicting the Great Tribulation and the coming of Jesus in 2007. These fears are so common that a number of conservative politicians, including Sarah Palin, believe that the current generation is the last one the world will see.

Given how deeply woven these apocalyptic fears are in the human psyche, it seems to me they speak to something important. I believe that, at least for some people, such fears of impending doomsday actually offer protection against an even deeper fear: the fear of irrelevance.


My readership being what it is, I bet the percentage of you who recognize this picture is higher than the percentage of the population as a whole who recognize it.

This is part of the Standard of Ur, an artifact recovered from archaeological digs at the site of Ur, one of the world’s oldest cities, in present-day Iraq.

Ur was likely first settled somewhere around 3800 BC, or roughly six thousand years ago, give or take. That puts its earliest settlement a few centuries before the start of the Bronze Age: the Agrarian Revolution was already well-established, but metallurgy was still fairly new. When it was built, it was a coastal city; that was so long ago that the land itself has changed, and the ruins of Ur are now well inland.

You’ve probably at least heard of Ur; most public schools mention it in passing in history classes, at least back when I was a schoolkid. Unless you’re a history major, you probably don’t know much about it, and certainly don’t know a whole lot about life there. Unless you’re a history major, you probably don’t think about it a whole lot, either.

Think about that for a minute. Ur was a major center of civilization–arguably, the center of civilization–for centuries. History records it as an independent, powerful city-state in the 26th century BC, more than a thousand years after it was founded. People were born, lived, loved, struggled, rejoiced, plotted, schemed, invented, wrote, sang, prayed, fished, labored, experienced triumph and heartbreak, and died there for longer than many modern countries have even existed, and you and I, for the most part, don’t care. Most of us know more about Luke Skywalker than any of the past rulers of Ur, and that’s okay with us. We have only the vaguest of ideas that this place kinda existed at some vague point a long time ago, even though it was among the most important places in all the world for a total of more than three thousand years, if you consider its history right up to the end of the Babylonians.

And that, I think, can tell us a lot about the amazing persistence of apocalyptic doomsday fears.


When I was a kid, I was fascinated by astronomy. I wanted to grow up to be an astronomer, and even used a little Dymo labelmaker to make a label that said “Franklin Veaux, Astrophysicist” that I stuck on my bedroom door.

Then I found out that some day, the sun would burn out and the earth would become a lifeless lump of rock orbiting a small, cold cinder. And that all the other stars in the sky would burn out. And that all the stars that would come after them would one day burn out, too.

The sense of despair I felt when I learned that permanently changed me.

Think about everything you know. Think about everything you’ve ever said or done, every cause you believe in, every hero and villain you’ve ever encountered, every accomplishment you’ve ever made.

Now think about all of that mattering as much to the world as the life of an apprentice pot-maker in Ur means to you.

It’s one thing to know we are going to die; we all have to deal with that, and we construct all kinds of myths and fables, all sorts of afterlives where we are rewarded with eternal bliss while people we don’t like are forever punished for doing the things we don’t think they should do. But to die, and then to become irrelevant? To die and to know that everything we dreamed of, did, or stood for was completely forgotten, and humanity just went along without us, not even caring that we existed at all? It’s reasonable, I think, for people to experience a sense of despair about that.

But, ah! What if this is the End of Days? What if the world will cease to be in our lifetimes? Now we will never experience that particular fate. Now we no longer have to deal with the idea that everything we know will fade away. There will be no more generations a thousand or ten thousand years hence to have forgotten us; we’re it.


Just think of all the advantages of living in the End Days. We don’t have to face the notion that not only ourselves, but our ideas, our values, our morality, our customs, our traditions, all will fade away and people will get along just fine without us.

And think of the glory! There is a certain reflected glory just in being a person who witnesses an epic thing, even if it’s only from the sidelines. Imagine being in the Afterlife, and having Socrates and Einstein and Buddha saying to us, “Wow, you were there when the Final Seal was broken? That’s so cool! Tell us what it was like?”

Human nature being what it is, there’s also that satisfaction that comes from watching all the world just burn down around you. That will teach them, all those smug bastards who disagreed with us and lived their lives differently from the way we did! As fucked-up as it may be, there’s comfort in that.

Most of us, I suspect, aren’t really equipped to deal with the notion that everything we believe is important will probably turn out not to be. If we were to find ourselves transported a thousand, two thousand, ten thousand years from now, assuming human beings still exist, they would no doubt be very alien to us–as alien as Chicago would be to an ancient Sumerian.

They won’t speak our language, or anything like it; human languages rarely last more than six hundred years or so. Everything we know will be not only gone, but barely even remembered…if there’s anything left of, say, New York City, it will likely not exist much beyond an archaeological dig and some dry scholarly papers full of conjecture and misinformation. For people who live believing in tradition and hierarchy and authority and continuity, the slow and steady evaporation of all those things is worse than the idea of death. Belief in the End Times is a powerful salve to all of that.

Given the transience of all human endeavor, it makes a certain kind of sense. The alternative, after all, is…what? Cynicism? Nihilism? If everything that we see, do, think, feel, believe, fight for, and sacrifice for is going to mean as much to future generations as the lives of the citizens of Ur four thousand years ago mean to us, what’s the point of any of it? Why believe in anything?

Which, I think, misses the point.

We live in a world of seven billion people, and in all that throng, each of us is unique. We have all spent billions of years not existing. We wake up in the light, alive and aware, for a brief time, and then we return to non-existence. But what matters is that we are alive. It’s not important if that matters a thousand years from now, any more than it matters that it wasn’t important a thousand years ago; it does matter to us, right here, right now. It matters because the things we believe and the things we do have the power to shape our happiness, right here, and if we cannot be happy, then what is the point of this brief flicker of existence?

Why should we fight or sacrifice for anything? Because this life is all we have, and these people we share this world with are our only companions. Why should we care about causes like, say, gay rights–causes which in a thousand years will mean as much as campaigns to allow women to appear on stage in Shakespeare’s time? Because these are the moments we have, and this is the only life that we have, and for one group of people to deprive another group of people of the opportunity to live it as best suits them harms all of us. If we are to share this world for this brief instant, if this is all we have, then mutual compassion is required to make this flicker of awareness worthwhile. This, ultimately, is the antidote to the never-ending stream of apocalyptic prophecy.

Some thoughts on parasites, ideology, and Malala Yousafzai

This is Malala Yousafzai. As most folks are by now aware, she is a 14-year-old Pakistani girl who was shot in the head by the Taliban for the crime of saying that girls should get an education. Her shooting prompted an enormous backlash worldwide, including–in no small measure of irony–among American politicians who belong to the same political party as legislators who say that children ought to be executed for disrespecting their parents.

I’ve been reading a lot lately about what seem to be two different and at least theoretically unrelated things: parasitology and ideology, specifically religious ideology. This might seem to have nothing to do with Malala Yousafzai’s shooting, but it does.

When I say I’ve been reading about parasitology, what I mean by that is my Canadian sweetie has been reading to me about parasitology. Specifically, she’s been reading me a book called Parasite Rex, which makes the claim that much of evolutionary biology, including the development of sexual reproduction, is driven by parasites. It’s been a lot of fun; I never knew I’d enjoy being read to so much, even though the subject matter is sometimes kinda yucky.

What’s striking to me is that these things–religious ideology and parasitology–are in some ways the same thing in two different forms.

Parasites make their living by invading a host, then using the host’s resources to spread themselves. To this end, they do some amazing manipulation of the host. Some parasites, for instance, are able to alter a host’s behavior to promote their own spread. Sometimes it’s as crude as irritating the host’s throat to promote coughing, which spreads hundreds of millions of virus particles. Other times, it’s as bizarre and subtle as influencing the host’s mind to change the way the host responds to fear, making it more likely that the host will be eaten by a predator, which then becomes infected with the same parasite. In fact, parasitologists today are discovering that the study of life on Earth IS the study of parasites; parasites, more than any other single factor, may be the most significant determinant in the ratio of predator to prey biomass on this planet.

Religious ideology would seem to be a long way off from parasitism, unless you consider that ideas, like parasites, spread themselves by taking control of a host and modifying the host’s behavior so as to promote the spread of the idea.

This isn’t a new concept; Richard Dawkins coined the term ‘meme’ to describe self-replicating ideas decades ago.

But what’s striking to me is how direct the comparison is. The more I learn about parasites, the more I come to believe that parasites and memes aren’t allegories for each other; parasites ARE memes, and vice versa.

We tend to think of parasites like toxoplasma as being real things, and ideas like the salvation of Jesus Christ as being abstract concepts that don’t really exist the same way that real things do. But I don’t think that’s true.

Ideas exist in physical form. It might be as a series of symbols printed in a book or as a pattern of neural connections stored inside a brain, but no matter how you slice it, ideas have a physical existence. An idea that does not exist in any physical way, even as neuron connections wired into a person’s head, doesn’t exist.

Similarly, parasites are information, just like ideas are. A strand of DNA is nothing but an encoded piece of information, in the same sense that a series of magnetic spots on a hard disk are information. In fact, researchers have made devices that use DNA molecules to store computer information, treating banks of DNA as if they were hard drives.

In a sense, ideas and organisms aren’t different things. They are the same thing written into the world in different ways. An idea that takes control of a host’s brain and modifies the host to promote the spread of the idea is like a parasite that takes control of a host and modifies it to spread the parasite. The fact that the idea exists as configurations of connections of neurons rather than as configurations of nucleotides isn’t as relevant as you might think.

We can treat ideas the same way we treat parasites or diseases. We can use the tools of epidemiology to track how ideas spread. We can map the virulence of ideas in exactly the same way that we map the virulence of diseases.
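As a toy illustration of that claim (my own sketch, not anything from the book or from the epidemiology literature), here is the classic susceptible-infected-recovered model that epidemiologists use for pathogens, pointed at an idea spreading through a population. Every parameter name and value below is invented for demonstration.

```python
# A toy SIR ("susceptible / infected / recovered") model, the standard workhorse
# of epidemiology, applied to an idea instead of a pathogen. All parameters are
# made up for illustration; nothing here comes from real data.
def simulate_idea(beta=0.3, gamma=0.1, days=120, population=10_000):
    s, i, r = population - 1.0, 1.0, 0.0   # one person starts out carrying the idea
    history = []
    for _ in range(days):
        new_converts = beta * s * i / population   # exposure converts the susceptible
        new_dropouts = gamma * i                   # some carriers lose interest
        s -= new_converts
        i += new_converts - new_dropouts
        r += new_dropouts
        history.append((s, i, r))
    return history

peak_believers = max(i for _, i, _ in simulate_idea())
print(f"Peak number of active believers: {peak_believers:,.0f}")
```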

Religion is unquestionably a meme–a complex idea that is specifically designed to spread itself, sometimes at the host’s expense. A believer infected with a religious ideology who kills himself for his belief is no different than a moose infected with a parasite that dies as a result of the infection; the parasite in both cases has hijacked the host, and subverted the host’s own biological existence for its own end.

The more I see the amazing adaptations that parasites have made to help protect themselves and spread themselves, the more I’m struck by how memes, and especially religious memes, have made the same adaptations.

Some parasitic wasps, for example, will create multiple types of larvae in a host caterpillar–larvae that go on to be more wasps, and larvae that act as guardians, protecting the host from infection by other parasites by eating any new parasites that come along. Similarly, religious memes will protect themselves by preventing their host from infection by other memes; many successful religions teach that other religions are created by the devil and are therefore evil, and must be rejected.

We see the same patterns of host resistance to parasites and to memes, too. A host species exposed to the same parasites for many generations will tend to develop a resistance to the parasites, as individuals who are particularly vulnerable to the parasites are selected against and individuals particularly resistant to the parasites are selected for by natural selection. Similarly, a virulent religious meme that causes many of its hosts to die will gradually face resistance in its host population, as particularly susceptible individuals are killed and particularly resistant individuals gain a survival advantage.

Writers like Sam Harris and Michael Shermer talk about how people in a pluralistic society can not really accept and live by the tenets of, say, the Bible, no matter how Bible-believing they consider themselves to be. The Bible advocates slavery, and executing women for not being virgins on their wedding night, and destroying any town where prophets call upon the citizens to turn away from God; these are behaviors which you simply can’t do in an industrialized, pluralistic society. So the members of modern, industrialized societies–even the ones who call themselves “fundamentalists” and who say things like “the Bible is the literal word of God”–don’t really act as though they believe these things are true. They don’t execute their wives or sell their daughters into slavery. The memes are not as effective at modifying the hosts as they used to be; they have become less virulent.

But new or mutated memes, like new parasites, always have the chance of being particularly virulent. Their host populations have not developed resistance. In the Middle East, in places where an emergent strain of fundamentalist Islam leads to things like the Taliban shooting Malala Yousafzai, I think that’s what we’re seeing–a new, virulent meme. Islam itself is not new, of course, but to think that the modern strains of Islam are the same as the original is to think that the modern incarnations of Christianity are akin to the way Jesus actually lived; it’s about as far off the mark as thinking a bird is a dinosaur. They share a common heritage, but that’s all. They have evolved into very different organisms.

And this particular meme, this particular virulent strain of Islam, is canny enough to attack its host immune system directly. The Taliban targeted Malala Yousafzai because she favors education for women. Education, in many ways, provides an immunological response to memes; it is no accident that Tammy Faye Bakker famously said that it’s possible to educate yourself right out of a personal relationship with Jesus Christ. It’s no accident that Fundamentalism in all of its guises tends to be anti-intellectual and anti-education.

I’m not saying that the meme of religion (or any other meme) is inherently bad, of course. Memes have different strains; there are varieties of any large religion that are virulent and destructive to their host population, and other strains that are less virulent and more benign.

But with parasitic ideas as with parasitic biological entities, it is important to remember that the goal of the parasite is not necessarily the same as the goal of its host. Parasites attempt to spread themselves, often at the host’s expense. The parasite’s interests are not the host’s interests. Even a seemingly benign meme, such as a meme that says it is important to be nice to each other in order to gain an everlasting reward in heaven, might harm its host species if it siphons away resources to spread itself through churches that might otherwise have been used to, for example, research new cures for cancer. At the more extreme end, even such a benign meme might cause its adherents to say things like “We as a society don’t need to invest in new biomedical nanotechnology to promote human longevity, because we believe that we will live forever if we abide by the strictures of this meme and help to spread it through our works.”

Virulent memes tend to be anti-intellectual, because education is often a counter to their spread. Malala Yousafzai was targeted because she represents the development of an immune response to a virulent, destructive meme that is prevalent in the environment where she was born.

But Apple is evil! Some thoughts on how economies work

I’m still in the process of writing about my experiences with my Android phone, which will run to at least two more installments (on the OS itself and on T-Mobile). The short version is I started with an iPhone, got rid of it for a 4G Android phone, decided that Android just doesn’t have it going on, and am switching back to the iPhone.

Now, one thing I’ve seen repeated many times since I’ve started talking here and elsewhere about my Android experiences is a common refrain: It’s not about the phone. It’s not about the operating system, or user experience, or call quality, or ease of use. An iPhone is a bad choice because Apple is an evil company.

With no disrespect intended for any of the dozen or so people who’ve said this to me: I find that to be a remarkably silly thing to say, but not for the reasons you might expect. I’ll get back to that in a minute.


First, before I get into that, let me start by destroying a childhood myth that we all learn in school.

When you make a product for sale, you do not determine the price of your product in the marketplace by taking the total cost of making it, adding some percentage to the cost of making it that represents your profit, and then selling it at that price.

An astonishing number of people seem to believe that this is how the price of goods is arrived at, and I am constantly surprised by how many folks believe it’s true. That isn’t the way it works at all.

When you sell a product in the marketplace, you price it at the absolute tippy-top highest price the market can bear. Then, you drive the cost of making it as low as is humanly possible, using whatever means you can. The difference between the highest price the market will bear and the lowest cost at which you can make it is your profit.

Everything is priced this way: cell phones, computers, cars, winter jackets, tea, pencils, small remote-controlled toy helicopters, batteries, electric razors, suitcases, light bulbs, plywood, sofas, dishwashers (and the dishes and detergent you put into them), stereo systems, ice cream, gasoline, you name it.

“But Franklin!” I hear you cry. “What about competition? If I can get my cell phone or my ice cream from many different places, they will compete with each other on price until they have arrived at the lowest profit margin they can accept!”

Which is true, in the same world where unicorns cavort with dragon whelps over fields of cotton candy.

Yes, businesses will sometimes compete with one another on price, to a limited extent, in order to create market share. But let me let you in on a secret: It is better for me to capture only 40% of the market and make a profit of $50 a widget than to capture 90% of the market and make only $3 a widget.

Companies know this. Industries develop a sense of what their expected profit margin ought to be, and then compete on price only so long as it doesn’t erode that. The supply-demand curve they taught you in grade school? It’s rubbish. It doesn’t account for the fact that when consumers expect to pay a certain amount for something, they’ll keep paying that amount even if the cost of production falls. It doesn’t account for the fact that consumers will often rate a product as more desirable if it carries a higher price, even if the quality is exactly the same as a lower-priced item. It doesn’t account for the fact that supply and demand do not exist in a vacuum, nor for the fact that demand is not infinitely elastic, nor for the fact that demand depends on many factors, quite a few of which have nothing to do with supply.

It also doesn’t account for the fact that supply is not always responsive to demand, for reasons that may range from capitalization costs to the fact that low availability can create that air of increased desirability I just mentioned.

Even supposedly “commodity” goods like oil and wheat are not priced according to the strict laws of supply and demand; things like futures and derivatives can change their price even when supply remains exactly the same. (If there is a sudden increase in trading for oil futures, for instance, the price of oil may rise even though the production of oil is completely unchanged and the demand for oil hasn’t budged one bit.)

So when people say things like “You’re stupid to buy an iPhone; if you get a high-end model, you’re paying $100 for $20 worth of additional flash memory,” they’re speaking from a profound ignorance of how any market system works. Sorry, Mr. Savvy Consumer, but you do that same sort of thing all the time, when you buy anything from tennis shoes to lumber.


So back to Apple’s supposed “evil.”

It is deeply silly to say “I’m not going to buy an iPhone because Apple is an evil company.” Not because it’s false, but because it’s trivially true. Well, duh. Of course Apple is an evil company. Apple is ruthless, anticompetitive, and sociopathic. This is not a terribly profound insight. Yes, Apple is an evil company; in other news, the sky is up and water is wet.

Apple is an evil company because every successful multinational corporation is evil.

They have to be. The laws governing and regulating corporations pretty much guarantee that any publicly-traded corporation must be sociopathic in nature. No company, large or small, succeeds by leaving money on the table if it doesn’t have to; public corporations are legally obligated to seek maximum return for their shareholders, by whatever means are available to them. A corporation that has the opportunity to increase revenue or lower costs and fails to do so can be sued by its shareholders.

Let’s look at Google, the company whose motto is “Don’t Be Evil.” They make an operating system that is touted as being “open,” that is supposedly “open source,” and that anyone can use to make a smartphone, right?

Right. And those unicorns in cotton candy land just love it.

The reality is rather different; Android is not really “open” in any meaningful sense of the word, and Google is as big a bully as Apple, but less public about it. Google, for example, recently forced Acer to cancel a smartphone built around a rival operating system, threatening to cut Acer off from source code and revoking Acer’s right to use Android if it didn’t comply.

You know how anyone is free to download and build the Android source code? Well, err, that applies only to older versions, and even then only to some parts of the Android code base, excluding Google’s apps that run atop it. You know how anyone can use Android on their mobile phone? Well, err, the name “Android” is trademarked, so you have to license the use of the name from Google…and how many consumers are going to buy an Android phone that’s advertised as running an “Android-like operating system”?

That gives Google considerable leverage. So much that they can tell a hardware maker “We demand you cancel your phone that uses a rival operating system” and the handset maker will comply so fast that journalists will still show up for the product launch and end up milling around an empty hall.

Yes, Apple is an evil company. Google is an evil company. Microsoft is a company of such breathtakingly creative evil that even the Department of Justice is effectively powerless to rein it in, no matter how egregiously it has broken the law. If you find yourself with warm, fuzzy feelings about any globocorp, it is only because that globocorp has paid good PR money to program you with those feelings. To believe anything else is naivety in the face of overwhelming evidence.

Those underpaid workers making iPads in Foxconn factories? They’re making gizmos for Dell and Cisco and Microsoft and HP and Motorola and Nokia and Samsung and Intel, too…and under working conditions that the folks making sneakers for Nike would give their right arm to enjoy.

Of course, not all evil is created equal. The evil of Google and Apple might reach farther than the evil of Nike, but the evil of Nike is probably a lot more serious for those on the pointy end of it. As evil as Nike is, it’s a whole lot less evil than the Wall Street companies that crashed the economy (and then blamed the wreckage on “poor people buying homes that were too expensive”), or the company you likely bank with if you use a large bank.

Me? I use a small, local credit union. And I’m still buying an iPhone.

Apple v. Samsung: Nickelgeddon and Number Illiteracy

In case you haven’t seen the news that’s been lighting up the tech sector these days, Apple recently sued Samsung for multiple patent violations concerning Samsung’s cell phones allegedly knocking off iPhone design and technology, and won, to the tune of $1 billion in damages.

There’s a rumor going around the Internet that Samsung is planning to pay the fine in nickels, shipping, or so it’s said, 30 trucks to Apple’s headquarters stuffed full of small change.

Now, that sounds wildly implausible to me, on a number of levels. First, it seems like getting one’s hands on a billion dollars’ worth of nickels would be an extraordinarily difficult thing to do. Second, it seems to me that a billion dollars’ worth of nickels would occupy one hell of a lot more than 30 trucks.

One of the things I often complain to zaiah about is something I call ‘number illiteracy’. As soon as anyone starts talking about numbers higher than a thousand or so, people’s eyes glaze over and that little drop of drool forms on the corners of their lips. A million, a hundred million, a billion…these all seem like synonyms for “really big” to a lot of folks. Hence folks complaining about the money spent on the Mars Curiosity rover without realizing that we Americans spend about the same amount on Halloween candy every October…but I digress.

Just for giggles, I did a rough, back-of-the-envelope estimate of what it would take to pay a billion dollar fine in nickels.

A billion dollars in nickels is 20 billion nickels, or roughly 64 nickels for every man, woman, and child in the entire United States. That is almost the entire number of nickels in circulation; the total number of nickels that exists is estimated by the Treasury Department to be around 25 billion or so.

A nickel weighs five grams, call it roughly a sixth of an ounce, so 20 billion nickels weigh in at about 208,333,333 pounds, or 104,167 tons, give or take. In the United States, a tractor trailer rig traveling on public roads is permitted to weigh no more than 80,000 pounds (gross) by law. A typical tractor trailer rig weighs in at roughly 20,000 pounds, leaving no more than 60,000 pounds for cargo. (From a quick Google search, it seems most commercial truckers won’t haul more than 50,000 pounds, but since I know fuck-all about shipping I’ll be generous and go with the 60,000 pound limit.)

At 60,000 pounds per truck, a billion dollars in nickels would require 3,473 trucks. Since a semi trailer is 53 feet long (not including the cab), the trailers, lined up end to end with no cabs, would make a row roughly 35 miles long.

I did a quick Web search to see what the shipping cost would be. From Samsung’s US headquarters to Cupertino, home of Apple, the cheapest rate I could find on my quick-and-dirty search was $503 per half ton, or $104,792,002 for the whole shebang. That’s about $105 million in shipping charges, though I bet a job this size might qualify for a bulk discount.
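For anyone who wants to check the arithmetic, here is a minimal sketch that reproduces the back-of-the-envelope figures above, using the same assumptions I did (a sixth of an ounce per nickel, 60,000 pounds of cargo per truck, 53-foot trailers, and $503 per half ton to ship); change the assumptions and the answers move accordingly.

```python
import math

# Back-of-the-envelope check of the nickel math above, using the post's own
# rough assumptions rather than exact figures.
FINE_DOLLARS = 1_000_000_000
NICKEL_VALUE = 0.05           # dollars per nickel
NICKEL_WEIGHT_OZ = 1 / 6      # assumed weight of one nickel, in ounces
TRUCK_PAYLOAD_LBS = 60_000    # assumed cargo weight per tractor trailer
TRAILER_LENGTH_FT = 53
SHIPPING_PER_HALF_TON = 503   # dollars, from a quick rate search

nickels = FINE_DOLLARS / NICKEL_VALUE               # 20 billion nickels
weight_lbs = nickels * NICKEL_WEIGHT_OZ / 16        # roughly 208 million pounds
weight_tons = weight_lbs / 2000                     # roughly 104,000 tons

trucks = math.ceil(weight_lbs / TRUCK_PAYLOAD_LBS)        # about 3,473 trucks
convoy_miles = trucks * TRAILER_LENGTH_FT / 5280          # about 35 miles of trailers
shipping_cost = weight_tons * 2 * SHIPPING_PER_HALF_TON   # about $105 million

print(f"{nickels:,.0f} nickels weighing {weight_tons:,.0f} tons")
print(f"{trucks:,} trucks, roughly {convoy_miles:.0f} miles of trailers end to end")
print(f"shipping cost on the order of ${shipping_cost:,.0f}")
```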

So now you know.

Edited to add: When zaiah and I first talked about the problem of sending a billion dollars in nickels, we were driving and didn’t have easy access to Google, so we made an even rougher back-of-the-envelope calculation, using guesswork, imagination, and the XKCD “if I can throw it, it weighs about a pound” rule. I can throw four rolls of nickels, so I guessed that four rolls would be about a pound.

The first approximation of an answer we came up with, which we figured might be within half an order of magnitude or so of the right answer, was 4,000 trucks. Later, with Google and a calculator and a lot of legwork, we came up with what you see above. So, go us!

Noted without comment: Cars and Biology

We understand automobiles. There are no homeopathic automobile repair shops, that try to repair your car by putting infinitesimal dilutions of rust in the gas tank. There are no automotive faith healers, who lay their hands on the hood and pray. People reserve such superstitions for things that they don’t understand very well, such as the human body.

–Leslie Lamport, July 2003

A Taxonomy of Fallacies

As anyone who reads this blog regularly knows, I’m a big fan of Venn diagrams. Lately, I’ve been thinking quite a lot about cognitive errors, errors in reasoning, and logical fallacies, for reasons which only coincidentally happen to coincide with the political primary season–far be it from me to suggest that the one might be in any way whatsoever connected to the other.

Anyway, I’ve put together a simple taxonomy of common fallacies. This is not, of course, an exhaustive list of fallacies; compiling such a list would surely try the patience of the most saintly. It is, however, intended to show the overlap of argumentative fallacies (arguments which by their nature and structure are invalid), logical fallacies (errors in logical reasoning), and cognitive biases (errors of human reason and our general cognitive processes).

As usual, you can clicky on the picture to embiggen it.

A quick and dirty overview of the various fallacies on this chart:

Ad Hominem: A personal attack on the person making an argument. “You’re such a moron! Only an idiot would think something like that.”

Loaded Question: An argument which presupposes its own answer, presupposes one of its own premises, or presupposes some unsupported assumption in the way it’s phrased. “Have you stopped beating your wife yet?”

Appeal Tu Quoque: Tu quoque literally means “you also.” It’s an argument that attempts to discredit an argument not on the basis of how valid the argument is, but on the basis of some perceived inconsistency or hypocrisy in the person making it. “You say that a vegetarian diet is more healthy than a diet that is rich in red meats, but I’ve seen you eat steak so you clearly don’t even believe your own argument. Why should I?”

Guilt By Association: Also called the “association fallacy,” this is an argument which asserts that an association exists between two things which means they belong to the same class. It can be made to discredit an argument by attacking the person making it (“Bob says that we should not eat meat; the Radical Animal Liberation Terror Front supports Bob’s argument; therefore, Bob’s argument is invalid”) or to create an association to support an assertion that can not be supported on its own merits (“John is black; I was mugged by a black person; therefore, John can not be trusted”).

Straw Man: An argumentative technique that ignores a person’s actual argument and instead rebuts a much weaker argument that seems related to the original argument in some way (“Bob thinks we should treat animals with respect; the idea that animals are exactly the same as people is clearly nonsense”).

False Analogy: An argumentative technique that creates an analogy between two unrelated things and then uses the analogy to attempt to make an assertion (“The government is like a business. Since the function of a business is to make money, the government should not enact policies that do not generate revenue”).

Cherry Picking: A tactic which presents only the information that supports an argument, omitting information that doesn’t support it, or presenting information out of context so that it appears to support the argument (“Vaccination causes autism. Andrew Wakefield published one paper that shows vaccination causes autism, so it must be so–even though hundreds of other experiments and published papers show no connection, and Wakefield’s paper was determined to be fraudulent and retracted”).

Just World Fallacy: The tendency to believe that the world must be just, so that when bad things happen the people who they happen to must have done something wrong to bring them about, and when good things happen, the person who they happened to must have earned them. It’s both a cognitive bias (we tend to see the world this way on an emotional level even if we consciously know better) and an argumentative tactic (for example, a defense attorney defending a rapist might say that the victim was doing something wrong by being out at night in a short dress, and therefore brought the attack upon herself). Part of what makes this so cognitively powerful is the illusion of control it brings about; when we believe that bad things happen because the people they happened to were doing something wrong, we can reassure ourselves that as long as we don’t do anything wrong, those things won’t happen to us.

Appeal to Probability: An argumentative tactic in which a person argues that because something could happen, that means it will happen. Effective in large part because the human brain is remarkably poor at understanding probability. “I might win the lottery; therefore, I simply need to play often enough and I am sure to win, which will solve all my money problems.”

Fallacy of False Dichotomy: Also called the “fallacy of false choice” or the “fallacy of false dilemma,” this is an argumentative fallacy that sets up the false premise that there are only two possibilities which need to be considered when in fact there are more. “Either we cut spending on education or we rack up a huge budget deficit. We don’t want a deficit, so we have to cut spending on education.”

Fallacy of Exclusive Premises: Also called the “fallacy of illicit negative,” this is a logical and argumentative fallacy that draws a conclusion from two negative premises: “No registered Democrats are registered Independents. No registered Independents vote in a closed primary. Therefore, no registered Democrats vote in a closed primary.”

Appeal to Ignorance: Also called the “argument from ignorance,” this is a rhetorical device which asserts that an argument must be true because it hasn’t been proven to be false, or that it must be false because it hasn’t been proven to be true (“we can’t prove that there is life in the universe other than on our own planet, so it must be true that life exists only on earth”). Many arguments for the existence of a god or of supernatural forces take this form.

Affirming the Consequent: A logical fallacy which asserts that a premise must be true if a consequence of the premise is true. Formally, it takes the form “If P, then Q; Q; therefore P” (for example, “All dogs have fleas; this animal has fleas; therefore, this animal is a dog”).

Denying the Antecedent: A logical fallacy that asserts that some premise is not true because a consequent is not true. Formally, it takes the form “If P, then Q; not P; therefore, not Q.” For example: “If there is a fire in this room, there must be oxygen in the air. There is no fire in this room. Therefore, there is no oxygen in the air.”

Affirming the Disjunct: Sometimes called the “fallacy of false exclusion,” this logical fallacy asserts that if one thing or another thing might be true, and the first one is true, that must mean the second one is false. For example, “Bob could be a police officer or Bob could be a liar. Bob is a police officer; therefore, Bob is not a liar.” The fallacy assumes that exactly one or the other must be true; it ignores the fact that both might be true. (Note that in Boolean logic, there is an operator called “exclusive or” or “XOR” which is true exactly when one of the two things is true and the other is false; this is not related to the logical fallacy of affirming the disjunct.)
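To make the pattern in the last few entries concrete, here is a small sketch of my own (not part of the original chart) that brute-forces the truth tables and prints the rows where every premise is true but the conclusion is false, which is exactly what makes each form invalid.

```python
from itertools import product

def implies(p, q):
    # Material implication: "if P then Q" is false only when P is true and Q is false.
    return (not p) or q

for p, q in product([True, False], repeat=2):
    # Affirming the consequent: "If P then Q; Q; therefore P"
    if implies(p, q) and q and not p:
        print(f"Affirming the consequent fails at P={p}, Q={q}")
    # Denying the antecedent: "If P then Q; not P; therefore not Q"
    if implies(p, q) and not p and q:
        print(f"Denying the antecedent fails at P={p}, Q={q}")
    # Affirming the disjunct: "P or Q; P; therefore not Q"
    if (p or q) and p and q:
        print(f"Affirming the disjunct fails at P={p}, Q={q}")
```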

Fallacy of Illicit Affirmative: This is the flip side of the fallacy of exclusive premises. It draws a negative conclusion from two affirmative premises. “All true Americans are patriots; some patriots are willing to fight for their country; therefore, there must be some true Americans who aren’t willing to fight for their country.”

Fallacy of Undistributed Middle: A logical fallacy that asserts that all X are Y; some thing is a Y; therefore, that thing is an X. For example, “All Southern Baptists are Christians; Bob is a Christian; therefore, Bob is a Southern Baptist.” This fallacy ignores the fact that “all X are Y” does not imply that all Y must be X.

Base Rate Fallacy: A logical fallacy that involves failing to apply general information about some statistical probability (the “base rate” of something being true) to a specific example or case. For example, given information which says that HIV is three times more prevalent among homosexuals than heterosexuals, and given the information that homosexuals make up 10% of the population, most people who are told “Bob has HIV” will erroneously conclude that it is quite likely that Bob is gay, because they will consider only the fact that gays are more likely to have HIV but will not consider the “base rate” that gays make up a relatively small percentage of the population. This fallacy is extremely easy to make because of the fact that the human brain is so poor at understanding statistics and probability.
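As a quick, hedged illustration of why the base rate matters, here is the Bayes calculation for the hypothetical numbers in that example (the figures come from the example itself, not from real epidemiology):

```python
# Hypothetical numbers from the example above, not real epidemiology.
p_gay = 0.10                  # base rate: 10% of the population
p_straight = 1 - p_gay
relative_prevalence = 3       # HIV assumed three times more prevalent among gay men

# Pick an arbitrary prevalence among heterosexuals; it cancels out of the answer.
p_hiv_given_straight = 0.001
p_hiv_given_gay = relative_prevalence * p_hiv_given_straight

# Bayes' theorem: P(gay | HIV) = P(HIV | gay) * P(gay) / P(HIV)
p_hiv = p_hiv_given_gay * p_gay + p_hiv_given_straight * p_straight
p_gay_given_hiv = p_hiv_given_gay * p_gay / p_hiv
print(f"P(gay | HIV) = {p_gay_given_hiv:.0%}")    # 25%, far from "quite likely"
```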

Post Hoc Ergo Propter Hoc: This is Latin for “after this, therefore because of this.” Sometimes called the “fallacy of false cause,” it’s a logical fallacy which asserts that if one thing happens and then something else happens, the first thing caused the second thing to happen (“My child had a measles vaccine; my child was diagnosed with autism; therefore, the vaccine caused the autism”). Our brains are highly tuned to find patterns and to seek causation, to the point where we often see it even when it does not exist.

Regression Bias: This is a fallacy that’s closely related to the post hoc, ergo propter hoc fallacy in that it ascribes a false cause to an event. In this particular case, things which normally fluctuate statistically tend to return to a mean; a person may see cause in that regression to the mean even where none exist. For example, “Bob had an amazing string of successes when he was playing basketball. Then he appeared on the cover of Sports Illustrated. Afterward, his performance was more mediocre. Therefore, appearing on the cover of the magazine must have caused him to perform more poorly.” Since even good athletes will generally return to their baseline after particularly exceptional (or particularly poor) performance, appearing on the cover of the magazine is likely to be unconnected with the athlete’s performance regressing to that athlete’s normal baseline.

Argumentum Ad Nauseam: A rhetorical strategy in which a person continues to repeat something as true over and over again, even after it has been shown to be false. Some radio commentators are particularly prone to doing this: “Sandra Fluke wants the taxpayers to pay for contraception. She argues that it is the responsibility of the taxpayer to pay for her contraception. Sandra Fluke believes that contraception should be paid for by the taxpayer.”

Argument from Scripture: An argument which states that if some element in a source being cited is true, then the entire source must be true. This fallacy does not apply exclusively to holy texts or Biblical scriptures, though it is very often committed in religious arguments.

Begging the Question: Similar to the loaded question fallacy, this is an argument in which some argument assumes its own premise. Formally, it is an argument in which the conclusion which the argument claims to demonstrate is part of the premise of the argument. “We know that God exists because we see in nature examples of God’s design.” The premise of this argument assumes that nature is designed by God, which is the conclusion that the argument claims to support.

Circular Argument: This argumentative tactic is related to begging the question, but slightly different in that it uses one argument to claim to prove another, then uses the truth of the second argument to support the first. A lot of folks consider circular reasoning to be the same thing as begging the question, but they are slightly different in that the fallacy of begging the question contains the conclusion of an argument as one of its premises, whereas circular reasoning uses argument A to prove argument B, and then, having proven argument B to be true, uses argument B to prove argument A.

Appeal to Emotion, Force, or Threat: An argumentative tactic in which, rather than supplying evidence to show that an argument is correct, the person making the argument attempts to manipulate the audience’s emotions (“You must find Bob guilty of this murder. If you do not find him guilty, then you will set a dangerous murderer free to prey on your children”).

False Attribution: An argument in which a person attempts to make a position sound more credible either by attributing it to a well-known or respected source, or using a well-known and respected source’s comments out of context so as to create a false impression that that source supports the argument. As Abraham Lincoln said, more than 90% of the quotes used to support arguments on the Internet can’t be trusted!

Association Fallacy: A generalized form of the fallacy of guilt by association, an association fallacy is any argument that makes any assertion that some irrelevant similarity between two things demonstrates that those two things are related. “Bob is good at crossword puzzles. Bob also likes puns. Therefore, we can expect that Jane, who is also good at crossword puzzles, must like puns too.” Because our brains are efficient at categorizing things into groups, we are often prone to believing that categorizations are valid even when they are not.

Vividness Fallacy: Also called the “fallacy of misleading vividness,” this is the tendency to believe that especially vivid, dramatic, or exceptional events are more relevant or more statistically common than they actually are, and to pay special attention or attach special weight to such vivid, dramatic events when evaluating arguments. A common rhetorical strategy is to use vivid examples to create the impression that something is commonplace when it is not: “In New Jersey, a Viet Nam veteran was assaulted in a bar. In Vermont, an Iraqi vet was mugged at knifepoint. American citizens hate veterans!” It is effective because of a cognitive bias called the “availability heuristic,” which causes us to misjudge the statistical importance of an event if we can think of examples of that event.

Entrenchment effect: Also called the “backfire effect,” this is the tendency of people, when presented with evidence that disproves something they believe to be true, to form an even greater attachment to the idea that it must be true. I’ve written an essay about framing and entrenchment here.

Sunk Cost Fallacy: An error in reasoning or argument which holds that if a certain investment has been made in some course of action, then the proper thing to do is continue on that course of action so as not to waste that investment, even in the face of evidence that shows that course of action to be unlikely to succeed. In rhetoric, people will often make arguments to support a tenuous position on the basis of sunk cost rather than on the merits of the position; “We should continue to invest in this weapons project even though the engineers say it is unlikely to work because we have already spent billions of dollars on it, and you don’t want that money to be wasted, do you?” These arguments often succeed because people form emotional attachments to a position in which they feel they have made some investment that is completely detached from the value of the position itself.

Appeal to Authority: Also known as the argument from authority, this is an argument that claims that something must be true on the basis that a person who is generally respected or revered says it is true, rather than on the strength of the arguments supporting that thing. As social animals, we tend to give disproportionate weight to arguments which come from sources we like, respect, or admire.

Black Swan Effect: Also called the black swan fallacy, this is the tendency to discount or discredit information or evidence which falls outside a person’s particular range of experience or knowledge. It can take the form of “I have never seen an example of X; therefore, X does not exist;” or it can take a more subtle form (called the “confirmation fallacy”) in which a statement is held to be true because no counterexamples have been demonstrated (“I believe that black swans do not exist. Here is a swan. It is white. Here is another swan. It is also white. I have examined millions of swans, and they have all been white; with all these examples that support the idea that black swans do not exist, it must be a very reliable statement!”).

Confirmation Bias: The tendency to notice, remember, and/or give particular weight to things that fit our pre-existing beliefs; and to not notice, not remember, and/or not give weight to anything that contradicts our pre-existing beliefs. The more strongly we believe something, the more we notice and the more clearly we remember things which support that belief, and the less we notice things which contradict that belief. This is one of the most powerful of all cognitive biases.

Attention Bias: A cognitive bias in which we tend to pay particular attention to things which have some sort of emotional or cognitive resonance, and to ignore data which are relevant but which don’t have that resonance. For example, people may make decisions based on information which causes them to feel fear but ignore information that does not provoke an emotional response; a person who believes “Muslims are terrorists” may become hyperaware of perceived threatening behavior from someone he knows to be Muslim, especially when that perception reinforces his belief that Muslims are terrorists, and ignore evidence which indicates that that person is not a threat.

Choice Supportive Bias: The tendency, when remembering a choice or explaining why one has made a choice, to believe that the choice was better than it actually was, or to believe that the other options were worse than they actually were. For example, when choosing one of two job offers, a person may describe the job she chose as being clearly superior to the job she did not accept, even when both job offers were essentially identical.

Expectation Bias: Also sometimes called “experimenter’s bias,” this is the tendency of people to put greater trust or credence in experimental results which confirm their expectations than in results which don’t match the expectations. It also shows in the tendency of people to accept without question evidence which is offered up that tends to support their ideas, but to question, challenge, doubt, or dismiss evidence which contradicts their beliefs or expectations.

Pareidolia: The tendency to see patterns, such as faces or words, in random stimuli. Examples include people who claim to see the face of Jesus in a piece of toast, or who hear Satanic messages in music albums that are played backwards.

Rhyming Effect: The tendency of people to find statements more credible if they rhyme than if they don’t. Yes, this is a real, demonstrated cognitive bias. “If the glove don’t fit, you must acquit!”

Framing effect: The tendency to evaluate evidence or to make choices differently depending on how it is framed. I’ve written an essay about framing and entrenchment here.

Ambiguity Effect: The tendency of people to choose a course of action in which they know the exact probability of a positive outcome over a course of action in which the probability is not known, even when the two probabilities are roughly the same, or when the option with the unknown probability offers a better possible outcome. There’s an interactive demonstration of this effect here.

Fortunetelling: The tendency to make predictions about the outcome of a choice, and then assume that the prediction is true, and use the prediction as a premise in arguments to support that choice.

When we are young

When we are young, we imagine dragons and elves, magic and wizards, heroes swooping down on flying carpets to save the day. As we grow, we long to see these things. We long to catch a glimpse of a dragon soaring over the mountains at sunset, to see with our own eyes the magic of the elves.

We are told that there is this thing called “science,” and science takes away magic. Science says there are no wizards, no elves, no magic carpet rides, no dragons spreading their wings in the last rays of the sun. And it hurts.

For many, the impulse is to reject this thing called “science,” this destroyer of dreams, so that we can live, if even only a little bit, in the world of magic and make-believe.

But for those who do not do this, for those who want to see the world for what it is, science offers us more than our imaginations. Instead of dragons and elves, instead of wizards and magic, we are offered a universe that is ancient and huge and strange beyond our dreams. We are offered a place where galaxies gigantic beyond our comprehension collide in ferocious cataclysms of creation and destruction, where strange objects that can never be seen tear holes through the fabric of space and time, where tiny things flit around and appear in two places at once. We are offered magnificent weirdness far stranger than the paltry ordinariness of wizards and dragons–for what are wizards but men with a litany of parlor tricks, and what are dragons but flying dinosaurs with matches?

Some who reject science still see, however vaguely, the faint glimmers of the wonder that it offers, and so they seek to appropriate its fancy words to fuel their imaginings of dragons and elves. “Quantum!” they cry. “Quantum thus-and-such, which means magic is real! We make the world just by looking at it; we are rightfully the kings of creation!”

And when told that their crude and fuzzy grasp of this hateful thing called “science,” this shatterer of dreams that comes in the light of day to steal their dragons away, says no such things, but actually something else, they react with derision, and scorn, and contempt. “Science,” they say, “is just opinion. It is religion, full of popes and magistrates who declare reality to be what they want, and not what I want.”

For them, I feel sad. In their desire to wrap themselves up in the imaginations of youth, they turn their backs on things far more fantastic than they can dream.

I love science. It does not steal magic away from us; it shows us magic far more awesome than we could ever otherwise know.

Why We’re All Idiots: Credulity, Framing, and the Entrenchment Effect

The United States is unusual among First World nations in the sense that we only have two political parties.

Well, technically, I suppose we have more, but only two that matter: Democrats and Republicans. They are popularly portrayed in American mass media as “liberals” and “conservatives,” though that’s not really true; in world terms, they’re actually “moderate conservatives” and “reactionaries.” A serious liberal political party doesn’t exist; when you compare the Democratic and Republican parties, you see a lot of across-the-board agreement on things like drug prohibition (both parties largely agree that recreational drug use should be outlawed), the use of American military might abroad, and so on.

A lot of folks mistakenly believe that this means there’s no real differences between the two parties. This is nonsense, of course; there are significant differences, primarily in areas like religion (where the Democrats would, on a European scale, be called “conservatives” and the Republicans would be called “radicalists”); social issues like sex and relationships (where the Democrats tend to be moderates and the Republicans tend to be far right); and economic policy (where Democrats tend to be center-right and Republicans tend to be so far right they can’t tie their left shoe).

Wherever you find people talking about politics, you find people calling the members of the opposing side “idiots.” Each side believes the other to be made up of morons and fools…and, to be fair, each side is right. We’re all idiots, and there are powerful psychological factors that make us idiots.


The fact that we think of Democrats as “liberal” and Republicans as “conservative” illustrates one area where Republicans are quite different from Democrats: their ability to frame issues.

The American political landscape for the last three years has been dominated by a great deal of shouting and screaming over health care reform.

And the sentence you just read shows how important framing is. Because, you see, we haven’t actually been discussing health care reform at all.

Despite all the screaming, and all the blogging, and all the hysterical foaming on talk radio, and all the arguments online, almost nobody has actually read the legislation that, after much wailing and gnashing of teeth, was signed into law by President Obama.

And if you do read it, there’s one thing about it that may jump to your attention: It isn’t about health care at all. It barely even talks about health care per se. It’s actually about health insurance. It provides a new framework for health insurance legislation, it restricts health insurance companies’ ability to deny coverage on the basis of pre-existing conditions, it seeks to make insurance more portable…in short, it is health insurance reform, not health care reform. The fact that everyone is talking about health care reform is a tribute to the power of framing.


In any discussion, the person who controls how the issue at question is shaped controls the debate. Control the framing and you can control how people think about it.

Talking about health care reform rather than health insurance reform leads to an image in people’s minds of the government going into a hospital operating room or a doctor’s exam room and telling the doctor what to do. Talking about health insurance reform gives rise to mental images of government beancounters arguing with health insurance beancounters about the proper way to notate an exemption to the requirements for filing a release of benefits form–a much less emotionally compelling image.

Simply by re-casting “health insurance reform” as “health care reform,” the Republicans created the emotional landscape on which the war would be fought. Middle-class working Americans would not swarm to the defense of the insurance industry and its über-rich executives. Recast it as government involvement between a doctor and a patient, however, and the tone changed.

Framing matters. Because people, by and large, vote their identity rather than their interests, if you can frame an issue in a way that appeals to a person’s sense of self, you can often get him to agree with you even if by agreeing with you he does harm to himself.

I know a woman who is an atheist, non-monogamous, bisexual single mom who supports gay marriage. In short, she hits just about every ticky-box in the list of things that “family values” Republicans hate. The current crop of Republican political candidates, all of them, have at one point or another voiced their opposition to each one of these things.

Yet she only votes Republican. Why? Because she says she believes, as the Republicans believe, that poor people should just get jobs instead of lazing about watching TV and sucking off hardworking taxpayers’ labor.

That’s the way we frame poverty in this country: poor people are poor because they are just too lazy to get a fucking job already.

That framing is extraordinarily powerful. It doesn’t matter that it has nothing to do with reality. According to the US Census Bureau, as of December 2011 46,200,000 Americans (or 15.1% of the total population) live in poverty. According to the US Department of Labor, 11.7% of the total US population had employment but were still poor. In other words, the vast majority of poor people have jobs–especially when you consider that some of the people included in the Census Bureau’s statistics are children, and therefore not part of the labor force.

Framing the issue of poverty as “lazy people who won’t get a job” helps deflect attention away from the real causes of poverty, and also serves as a technique to manipulate people into supporting positions and policies that act against their own interests.

But framing only works if you do it at the start. Revealing how someone has misleadingly framed a discussion after it has begun is not effective at changing people’s minds, because of a cognitive bias called the entrenchment effect.


A recurring image in US politics is the notion of the “welfare queen”–a hypothetical person, invariably black, who becomes wealthy by living on government subsidies. The popular notion has this black woman driving around the low-rent neighborhood in a Cadillac, which she bought by having dozens and dozens of babies so that she could receive welfare checks for each one.

The notion largely traces back to Ronald Reagan, who during his campaign in 1976 talked over and over (and over and over and over and over) about a woman in Chicago who used various aliases to get rich by scamming huge amounts of welfare payments from the government.

The problem is, this person didn’t exist. She was entirely, 100% fictional. The notion of a “welfare queen” doesn’t even make sense; having a lot of children but subsisting only on welfare doesn’t increase your standard of living, it lowers it. The extra benefits given to families with children do not entirely offset the costs of raising children.

Leaving aside the overt racism in the notion of the “welfare queen” (most welfare recipients are white, not black), a person who thinks of welfare recipients this way probably won’t change his mind no matter what the facts are. We all like to believe ourselves to be rational; we believe we have adopted our ideas because we’ve considered the available information rationally, and that if evidence that contradicts our ideas is presented, we will evaluate it rationally. But nothing could be further from the truth.

In 2006, two researchers at the University of Michigan, Brendan Nyhan and Jason Reifler, did a study in which they showed people phony studies or articles supporting something that the subjects believed. They then told the subjects that the articles were phony, and provided the subjects with evidence that showed that their beliefs were actually false.

The result: The subjects became even more convinced that their beliefs were true. In fact, the stronger the evidence, the more insistently the subjects clung to their false beliefs.

This effect, which is now referred to as the “entrenchment effect” or the “backfire effect,” is very common. A person who holds a belief, when shown hard physical evidence that the belief is false, comes away believing even more strongly that it is true. The stronger the evidence, the more firmly the person holds on.

The entrenchment effect is a form of “motivated reasoning.” Generally speaking, what happens is that a person who is confronted with a piece of evidence showing that his beliefs are wrong will respond by mentally going through all the reasons he started holding that belief in the first place. The stronger the evidence, the more the person repeats his original line of reasoning. The more the person rehearses the original reasoning that led him to the incorrect belief, the more he believes it to be true.

This is especially true if the belief has some emotional vibrancy. There is a part of the brain called the amygdala which is, among other things, a kind of “emotional memory center.” That’s a bit oversimplified, but essentially true; when you recall a memory that has an emotional charge, the amygdala mediates your recall of the emotion that goes along with the memory; you feel that emotion again. When you rehearse the reasons you first subscribed to your belief, you re-experience the emotions again–reinforcing it and making it feel more compelling.

This isn’t just a right/left thing, either.

Say, for example, you’re afraid of nuclear power. A lot of people, particularly self-identified liberals, are. If you are presented with evidence that shows that nuclear power, in terms of human deaths per terawatt-hour of power produced, is by far the safest of all forms of power generation, it is unlikely to change your mind about the dangers of nuclear power one bit.

The most dangerous form of power generation is coal. In addition to killing tens of thousands of people a year, mostly because of air pollution, coal also releases quite a lot of radiation into the environment. Coal beds contain trace amounts of radioactive uranium and thorium; some of that escapes up the smokestack, and the rest ends up concentrated in the ash left behind when the coal is burned. Coal plants consume so much coal–huge freight trains of it–that the fly ash left over from burning those millions of tons of coal carries more radiation into the surrounding environment than a nuclear plant producing the same amount of power. So many people die directly or indirectly as a result of coal-fired power generation that if we had a Chernobyl-sized meltdown every four years, it would STILL kill fewer people than coal.

If you’re afraid of nuclear power, that argument didn’t make a dent in your beliefs. You mentally went back over the reasons you’re afraid of nuclear power, and your amygdala reactivated your fear…which in turn prevented you from seriously considering the idea that nuclear might not be as dangerous as you feel it is.

If you’re afraid of socialism, then arguments about health reform won’t affect you. It won’t matter to you that health care reform is actually health insurance reform; or that the supposedly “liberal” health care reform law was shaped largely by Republicans (many of the health insurance reforms in the Federal package are modeled on the Massachusetts law signed by none other than Mitt Romney, the provisions expanding health coverage for children were written by Republican Senator Orrin Hatch of Utah, and the expansion of the Medicare drug program was written by Republican Representative Dennis Hastert of Illinois); or that it’s about as Socialist as Goldman Sachs (the law does not nationalize hospitals, make doctors into government employees, or in any other way socialize the health care infrastructure). You will see this information, you will think about the things that originally led you to see the Republican-inspired health-insurance reform law as “socialized Obamacare,” and you’ll remember your emotional reaction while you do it.

Same goes for just about any argument with an emotional component–gun control, abortion, you name it.

This is why folks on both sides of the political divide think of one another as “idiots.” That person who opposes nuclear power? Obviously an idiot; only an idiot could so blindly ignore hard, solid evidence about the safety of nuclear power compared to any other form of power generation. Those people who hate Obamacare? Clearly they’re morons; how else could they so easily hang onto such nonsense as to think it was written by Democrats with the purpose of socializing medicine?

Clever framing allows us to be led to beliefs that we would otherwise not hold; once there, the entrenchment effect keeps us there. In that way, we are all idiots. Yes, even me. And you.

Purity Bear: a creepy talking animal that preaches abstinence

I wish I could say that this is a parody, but it’s not. The folks behind the “Day of Purity” have released an unsettling video in which a creepy bear tells a kid “She may be cuddly, but look at me! I’m cuddly too!” to get him to say “no” to going in the house with his girlfriend.

Will the day ever come when these folks realize that preaching abstinence doesn’t work? How high do the rates of teen pregnancy have to get in the Bible Belt before folks figure this out?

Personally, I’m waiting for the inevitable: a newspaper runs a story involving Purity Bear being caught on videotape doing the nasty with PedoBear in some seedy Detroit motel bathroom.

Science Literacy: Of Pickles and Probability

STUDY PROVES THAT PLACING A PICKLE ON YOUR NOSE GRANTS PSYCHIC POWERS

For immediate release: Scientists at the Max Planck Institute announced today that placing a pickle on your nose can improve telekinetic ability.

According to the researchers, they performed a study in which a volunteer was asked to place a pickle on her nose and then flip a coin to see whether or not the pickle would help her flip heads. The volunteer flipped the coin, which came up heads.

“This is a crowning achievement for our research,” the study’s authors said. “Our results show that having a pickle on your nose allows you to determine the outcome of a coin-toss.”

Let’s say you’re browsing the Internet one day, and you come across this report. Now, you’d probably think that there was something hinkey about this experiment, right? We know intuitively that the odds of a coin toss coming up heads are about 50/50, so if someone puts a pickle on her nose and flips a coin, that doesn’t actually prove a damn thing. But we might not know exactly how that applies to studies that don’t involve flipping coins.


So let’s talk about our friend p. This is p.

p represents, roughly speaking, the probability that a scientific study’s results are total bunk. Formally, it’s the probability of getting results at least as extreme as the ones observed if the null hypothesis is true. In English, that basically means it represents how likely it is to get these results even if whatever the study is trying to show doesn’t actually exist at all–in which case the study’s results don’t mean a damn thing.

Every experiment (or at least every experiment seeking to show a relationship between things) has a p value. In the nose-pickle experiment, the p value is 0.5, which means there is a 50% chance that the subject would flip heads even if there’s no connection between the pickle on her nose and the results of the experiment.

There’s a p value associated with any experiment. For example, if someone wanted to show that watching Richard Simmons on television caused birth defects, he might take two groups of pregnant ring-tailed lemurs and put them in front of two different TV sets, one of them showing Richard Simmons reruns and one of them showing reruns of Law & Order, to see if any of the lemurs had pups that were missing legs or had eyes in unlikely places or something.

But here’s the thing. There’s always a chance that a lemur pup will be born with a birth defect. It happens randomly.

So if one of the lemurs watching Richard Simmons had a pup with two tails, and the other group of lemurs had normal pups, that wouldn’t necessarily mean that watching Mr. Simmons caused birth defects. The p value of this experiment is related to the probability that one out of however many lemurs you have will randomly have a pup with a birth defect. As the number of lemurs gets bigger, the probability of one of them having a weird pup gets bigger. The experiment needs to account for that, and the researchers who interpret the results need to factor that into the analysis.
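To see why the number of lemurs matters, here’s a quick back-of-the-envelope sketch in Python. The 2% baseline defect rate is a made-up number purely for illustration; the point is only that the chance of seeing at least one “weird pup” by dumb luck grows as the group gets bigger.

```python
# Chance of at least one birth defect appearing by luck alone as the
# number of lemurs grows. The 2% baseline rate is made up for illustration.
baseline_rate = 0.02

for n_lemurs in (1, 5, 10, 25, 50, 100):
    p_at_least_one = 1 - (1 - baseline_rate) ** n_lemurs
    print(f"{n_lemurs:3d} lemurs: P(at least one defect by chance) = {p_at_least_one:.2f}")
```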


If you want to be able to evaluate whether or not some study that supposedly shows something or other is rubbish, you need to think about p. Most of the time, p is expressed as a “less than or equal to” thing, as in “This study’s p value is <= 0.005.” That means “We don’t know exactly what the p value is, but we know it can’t be greater than one half of one percent.”

A p value of 0.005 is pretty good; it means there’s only a 0.5% chance of getting results like these by dumb luck if the effect being tested isn’t real. Obviously, the larger the p value, the more skeptical you should be of a study. A p value of 0.5, like with our pickle experiment, means the experiment is pretty much worthless.

There are a lot of ways to make an experiment’s p value smaller. With the pickle experiment, we could simply do more than one trial. As the number of coin tosses goes up, the odds of a particular result go down. If our subject flips a coin twice, the odds of getting a heads twice in a row are 1 in 4, which gives us a p value of 0.25–still high enough that any reasonable person would call rubbish on a positive trial. More coin tosses still give successively smaller p values; the p value of our simple experiment is given (roughly) by (1/2)^n, where n is the number of times we flip the coin.
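Here’s a minimal sketch of that arithmetic in Python, assuming a fair coin, independent flips, and “success” meaning every flip comes up heads:

```python
# p value of the pickle experiment as we add coin flips: the chance of
# getting heads every single time by luck alone is (1/2)**n for a fair coin.
for n_flips in range(1, 11):
    p_value = 0.5 ** n_flips
    print(f"{n_flips:2d} heads in a row: p = {p_value:.4f}")
```

By the time our volunteer has flipped ten heads in a row, the p value is down below 0.001, and the result starts looking a lot less like luck.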


There’s more than just the p value to consider when evaluating a scientific study, of course. The study still needs to be properly constructed and controlled. Proper control groups are important for eliminating confirmation bias, which is a very powerful tendency for human beings to see what they expect to see and to remember evidence that supports their preconceptions while forgetting evidence which does not. And, naturally, the methodology has to be carefully implemented too. A lot goes into making a good experiment.

And even if the experiment is good, there’s more to deciding whether or not its conclusions are valid than looking at its p value. Most experiments are considered pretty good if they have a p value of .005, which means there’s a 1 in 200 chance that the results could be attributed to pure random chance.

That sounds like it’s a fairly good certainty, but consider this: That’s about the same as the odds of flipping heads on a coin 8 times in a row.
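That comparison is just arithmetic, and it’s easy to check with a two-line sketch:

```python
# A p value of .005 is 1 in 200; eight heads in a row is 1 in 256.
print(1 / 200)   # 0.005
print(0.5 ** 8)  # 0.00390625 -- roughly the same ballpark
```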

Now, if you were to flip a coin eight times, you’d probably be surprised if it landed on heads every single time.

But, if you were to flip a coin eight thousand times, it would be surprising if you didn’t get eight heads in a row somewhere in there.
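You don’t have to take my word for that; here’s a quick Monte Carlo sketch that counts how often a run of eight heads shows up (the trial count of 2,000 is an arbitrary choice, just enough to make the percentages stable):

```python
import random

def has_run_of_heads(n_flips, run_length=8):
    """Return True if n_flips fair-coin flips contain a run of run_length heads."""
    streak = 0
    for _ in range(n_flips):
        if random.random() < 0.5:  # heads
            streak += 1
            if streak >= run_length:
                return True
        else:
            streak = 0
    return False

trials = 2000
for n_flips in (8, 8000):
    hits = sum(has_run_of_heads(n_flips) for _ in range(trials))
    print(f"{n_flips:4d} flips: a run of 8 heads appeared in {hits / trials:.1%} of trials")
```

With eight flips, the run shows up well under one percent of the time; with eight thousand, it shows up in essentially every trial.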


One of the hallmarks of science is replicability. If something is true, it should be true no matter how many people run the experiment. Whenever an experiment is done, it’s never taken as gospel until other people also do it. (Well, to be fair, it’s never taken as gospel period; any scientific observation is only as good as the next data.)

So that means that experiments get repeated a lot. And when you do something a lot, sometimes, statistical anomalies come in. If you flip a coin enough times, you’re going to get eight heads in a row, sooner or later. If you do an experiment enough times, you’re going to get weird results, sooner or later.

So a low p value doesn’t necessarily mean that the results of an experiment are valid. In order to figure out if they’re valid or not, you need to replicate the experiment, and you need to look at ALL the results of ALL the trials. And if you see something weird, you need to be able to answer the question “Is this weird because something weird is actually going on, or is this weird because if you toss a coin enough times you’ll sometimes see weird runs?”

That’s where something called Bayesian analysis comes in handy.

I’m not going to get too much into it, because Bayesian analysis could easily make a post (or a book) of its own. In this context, the purpose of Bayesian analysis is to ask the question “Given the probability of something, and given how many times I’ve seen it, could what I’m seeing be put down to random chance without actually meaning squat?”

For example, if you flip a coin 50 times and you get a mix of 30 heads and 20 tails, Bayesian analysis can answer the question “Is this just a random statistical fluke, or is this coin weighted?”
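Here’s a minimal sketch of that kind of reasoning in Python, using a simple Beta-Binomial model; the flat prior is my own assumption for illustration, not something baked into the question:

```python
from scipy.stats import beta

# 30 heads out of 50 flips, with a flat Beta(1, 1) prior over the coin's
# true probability of landing heads.
heads, tails = 30, 20
posterior = beta(1 + heads, 1 + tails)

# Posterior probability that the coin favors heads at all.
print(f"P(coin favors heads) = {1 - posterior.cdf(0.5):.2f}")

# A 95% credible interval for the coin's true heads-probability.
low, high = posterior.ppf(0.025), posterior.ppf(0.975)
print(f"95% credible interval: {low:.2f} to {high:.2f}")
```

The credible interval still comfortably includes 0.5, which is the Bayesian way of saying that a 30–20 split on its own isn’t strong evidence of a weighted coin.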

When you evaluate a scientific study or a clinical trial, you can’t just take a single experiment in isolation, look at its p value, and decide that the results must be true. You also have to look at other similar experiments, examine their results, and see whether or not what you’re looking at is just a random artifact.


I ran into a real-world example of how this can fuck you up a while ago, when someone on a forum I belong to posted a link to an experiment that purports to show that feeding genetically modified corn to mice will cause health problems in their offspring. The results were (and still are) all over the Internet; fear of genetically modified food is quite rampant among some folks, especially on the political left.

The experiment had a p value of <= .005, meaning that if the null hypothesis is true (that is, there is no link between genetically modified corn and the health of mice), we could expect to see this result about one time in 200.

So it sounds like the result is pretty trustworthy…until you consider that literally thousands of similar experiments have been done, and they have shown no connection between genetically modified corn and ill health in test mice.

If an experiment’s p value is .005, and you do the experiment a thousand times, it’s not unexpected that you’d get 5 or 6 “positive” results even if the null hypothesis is true. This is part of the reason that replicability is important to science–no matter how low your p value may be, the results of a single experiment can never be conclusive.
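The arithmetic behind that is straightforward; here’s a short sketch (the 1,000-experiment figure simply mirrors the one in the paragraph above):

```python
from scipy.stats import binom

# If the null hypothesis is true and each experiment has a 0.005 chance of a
# false positive, how many "positives" should 1,000 replications produce?
p_false_positive = 0.005
n_experiments = 1000

print(f"Expected false positives: {n_experiments * p_false_positive:.0f}")

# Chance of at least 5 false positives even though no real effect exists.
print(f"P(5 or more positives by chance) = {binom.sf(4, n_experiments, p_false_positive):.2f}")
```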