The world’s first 3D printed gun: Ho hum.

Today, a landmark in improvised engineering was reached. Plans for an (almost) entirely 3D printable firearm went up on the Internet, able to be freely downloaded by anyone.

The reactions around the Net are predictable. Libertarians and gun nuts are ecstatic, gushing all over themselves about how this will be the “end of gun control” and usher in some kind of “new age of freedom” or something.

Law and order types, gun control advocates, and the government are wetting themselves with the prospect of legions of terrorists printing up virtually undetectable firearms and taking over airplanes or something.

And it’s all completely ridiculous. Neither a new age of freedom nor a new age of terror is in the works; in fact, I’m quite confident in predicting the total impact of this technology will be statistically undetectable. Self-congratulatory (on the one side) and paranoid (on the other) ravings aside, this thing simply does not make any meaningful difference whatsoever.

First, let’s see this harbinger of freedom (or end of civilization, or toy for rich white kids, depending on whom you ask):

It’s printed from ABS plastic on an $8,000 3D printer. Almost everything is plastic, including the barrel; the only non-plastic parts are an ordinary nail (for the firing pin) and the bullet itself (in this case, a .380 caliber).

Now, I’ve owned firearms and shot recreationally for most of my life,1 and the first thing I can say upon seeing this thing is that I wouldn’t want to fire it. My instinct is that it’s probably about as dangerous to whoever’s on the trigger end as to whoever’s on the business end.

The one shown here was test-fired three times. The first time, it misfired. The second time, it successfully fired a .380 round without destroying itself. The third time, when the .380 was replaced with a 5.7×28 cartridge, it exploded.

Could it survive multiple shots with the smaller round? I don’t know. Maybe. I wouldn’t bet my life on it. Doesn’t really matter. Not only is this thing not a game changer, I reckon it’s about as significant in terms of its overall impact on society as whatever toy they choose to put into a box of Cracker Jacks next week.

For starters, what you’re looking at here is not only a shoddy firearm of dubious reliability and ruggedness; it’s an $8,050 (correction: $9,000) shoddy firearm of dubious reliability and ruggedness. This prototype was printed on an $8,000 3D printer with about $50 worth of materials, making it arguably the single most expensive zip gun that’s ever been fabricated. A person looking for cheap, untraceable guns could buy an arsenal on the street for less than the cost of the printer that produced this thing. (Edit: It turns out that this gun actually requires $1,000 worth of plastic to print, making it arguably the most expensive zip gun ever made even if the cost of the 3D printer isn’t factored in.)

Now, I already know what you’re going to say. The cost of 3D printers is dropping quickly. People can rent one or use one at a school. Companies will 3D print parts for you.

All of which is true, but irrelevant; the ability to make crude, cheap firearms for a lot less than just the cost of the plastic alone for this thing has existed…well, for about as long as firearms have existed. Prisoners have been known to build guns from parts available in prisons.

It has never been lack of availability that has kept people from using small single-shot firearms like this. The reason every criminal in town isn’t sticking up convenience stores with zip guns isn’t that they have been languishing in wait for a Libertarian college student to design one that can be 3D printed and put on the Internet; it’s that these things are virtually worthless as weapons. They tend to be used in prisons but few places besides, because they’re unreliable, prone to failure, inaccurate, and dangerous to the operator.

Just like, ahem, the 3D printed version.

Seriously. Even when they work, you have to be at point-blank range (or better yet, in contact with your intended target) for them to be terribly effective.

Which leads to the next hand-wringing objection: OMG this is made of PLASTIC you can take it onto an AIRPLANE through a METAL DETECTOR!

Which is, err, only kind of true. It’s a bit bulky to hide on your person, and there’s still the fact that the firing pin and ammunition are metal. Now, you might be able to get a nail through security on some pretext or other, but I doubt many folks will let you carry ammunition onto a plane.

If they notice it, which is a different matter; I’ve had friends who’ve carried brass knuckles and switchblades onto planes without difficulty. The reality is that few people actually want to, and have the means to, attack an airplane; nearly all of what happens at the airport is security theater, not security.

But let’s assume just for amusement that you can get one of these onto a plane. So what? What of it?

If I wanted to attack an airplane with a weapon I made on a 3D printer, it wouldn’t be this gun. Even if it works, it only works once, and I doubt the other passengers would sit around idle while I reloaded it and prepared to fire again. Assuming that the first shot actually did any good anyway.

The guy who designed this says “You can print a lethal device. It’s kind of scary, but that’s what we’re aiming to show,” as if this is the first time that’s been possible. Sorry, kid, but you’re a ridiculous wanker; a 3D printed knife or spear is actually a lot more lethal than this toy gun. (There’s a reason shivs rather than zip guns are the preferred weapon in places like prisons, and it’s not all down to scarcity of ammunition; given how easily drugs flow into American prisons, ammo isn’t that much of a stretch if there were a demand for it.) The 9/11 hijackers, who were well-funded, used…box cutters.

But I wouldn’t carry a 3D printed knife, or even a cheaper and better ceramic knife, onto a plane with mischief in mind either, because I’m not suicidal. Post 9/11, one thing has actually made air travel safer: the fact that the other passengers aren’t about to sit quietly by and hope for the best if someone tries to take a plane. All the other security changes that have happened since then have paled in effectiveness next to passenger attitude.

So, here’s the million-dollar question. You take a plastic gun onto an airplane, and…what, exactly? What in the name of the seven holy fucks and the twelve lesser fucks do you do then? What’s your plan?

If your goal is to destroy the plane, you can’t do that with this thing. If your goal is to take over the plane, well…good luck with that. You might survive what the other passengers do to you, maybe, if you’re lucky. Everybody is shrieking about how this thing can defeat airline security…and then what?

In fact, that million-dollar question can be extended to just about any possible use for this thing. You’ve bought yourself an eight-grand 3D printer, or somehow got access to it. You download the plans like an eager little hacker and you print this out, and then you…um, what do you do then? Go online and brag to your Maker friends?

You aren’t going to use this for home defense. I mean, seriously. A baseball bat or a tire iron makes a better home defense weapon, and the baseball bat probably has a longer effective range.

You’re not going to use it to outfit your secret militia that’s pining for (sorry, anticipating) the day that the Federal government starts rolling the tanks down Main Street. You aren’t even going to use an AR-15 for that, because, listen, seriously? The government has drones. They can blow your ass to hell and gone and you’ll never even see someone to shoot at.

You aren’t going to take it down to the range and pop off a few rounds in the general direction of paper cutouts of zombies or Trayvon Martin. No gun range is going to let you anywhere near the firing line with this; it’s too dangerous to the other shooters.

And please, please tell me you think you can go hunting with this thing. Bring a video camera and let me know when the video is up on YouTube. You can’t beat that for my entertainment dollar.

So you’re going to print it out, you’re going to put it together, and then…what, exactly? I’m still not clear on that.

Now, if you designed it, what you’ll do is obvious: you’ll get media exposure for congratulating yourself on what a clever Libertarian you are. And as near as I can tell, that’s really this thing’s only usefulness.

1 Full disclosure: I’ve been a private firearm owner on and off since 1988. I like guns, I like target shooting, and I’m neither opposed to nor afraid of guns. All that being said, I still won’t fire one of these.

The Apocalypse Is Coming! (…again)

In less than three weeks, the end of the world will happen.

Or, rather, in less than three weeks, a bunch of Mayan-prophecy doomsdayers will wake up and, if they have any grace at all, feel slightly sheepish.

The Mayan Long Count calendar is set to expire on December 21, or so it seems, and a lot of folks think this will signal the end of the world. They really, truly, sincerely believe it; some of them have even written to NASA with their concerns that a mysterious Planet X will smash into Earth on the designated date. (There seems to be some muddling of New Age thought here, as the existence of this “planet X,” sometimes called Nibiru, is a fixture amongst certain segments of the New Age population, its existence allegedly described in ancient Sumerian texts.)

It’s easy to dismiss these people as gullible crackpots, uneducated and foolish, unable to see how profoundly stupid their fears are. But I’m not so sure it’s that simple.

Apocalyptic fears are a fixture of the human condition. The Mayan doomsday nonsense is not the first such fearful prediction; it’s not even the first one to grab recent public attention. Harold Camping, an Evangelical Christian, predicted the end of the world on October 21, 2011…and also on May 21, 2011, September 7, 1994, and May 21, 1988. He got enough folks worked up about his 2011 predictions that many of his followers sold their belongings and caravanned across the country warning people of the impending Apocalypse.

These kinds of predictions have existed for, as near as I can tell, as long as human beings have had language. Pat Robertson has been in on the action, predicting the Great Tribulation and the coming of Jesus in 2007. These fears are so common that a number of conservative politicians, including Sarah Palin, believe that the current generation is the last one the world will see.

Given how deeply woven these apocalyptic fears are into the human psyche, it seems to me they speak to something important. I believe that, at least for some people, such fears of impending doomsday actually offer protection against an even deeper fear: the fear of irrelevance.

My readership being what it is, I bet the percentage of you who recognize this picture is higher than the percentage of the population as a whole who recognize it.

This is part of the Standard of Ur, an artifact recovered from archaeological digs at the site of Ur, one of the world’s oldest cities, in present-day Iraq.

Ur was likely first settled somewhere around 3800 BC, or roughly six thousand years ago, give or take. That puts its earliest settlement at about the start of the Bronze Age, plus or minus a century or so. The Agrarian Revolution was already well-established, but metallurgy was fairly new. When it was built, it was a coastal city; that was so long ago that the land itself has changed, and the ruins of Ur are now well inland.

You’ve probably at least heard of Ur; most public schools mention it in passing in history classes, at least back when I was a schoolkid. Unless you’re a history major, you probably don’t know much about it, and certainly don’t know a whole lot about life there. Unless you’re a history major, you probably don’t think about it a whole lot, either.

Think about that for a minute. Ur was a major center of civilization–arguably, the center of civilization–for centuries. History records it as an independent, powerful city-state in the 26th century BC, more than a thousand years after it was founded. People were born, lived, loved, struggled, rejoiced, plotted, schemed, invented, wrote, sang, prayed, fished, labored, experienced triumph and heartbreak, and died there for longer than many modern countries have even existed, and you and I, for the most part, don’t care. Most of us know more about Luke Skywalker than any of the past rulers of Ur, and that’s okay with us. We have only the vaguest of ideas that this place kinda existed at some vague point a long time ago, even though it was among the most important places in all the world for a total of more than three thousand years, if you consider its history right up to the end of the Babylonians.

And that, I think, can tell us a lot about the amazing persistence of apocalyptic doomsday fears.

When I was a kid, I was fascinated by astronomy. I wanted to grow up to be an astronomer, and even used a little Dymo labelmaker to make a label that said “Franklin Veaux, Astrophysicist” that I stuck on my bedroom door.

Then I found out that some day, the sun would burn out and the earth would become a lifeless lump of rock orbiting a small, cold cinder. And that all the other stars in the sky would burn out. And that all the stars that would come after them would one day burn out, too.

The sense of despair I felt when I learned that permanently changed me.

Think about everything you know. Think about everything you’ve ever said or done, every cause you believe in, every hero and villain you’ve ever encountered, every accomplishment you’ve ever made.

Now think about all of that mattering as much to the world as the life of an apprentice pot-maker in Ur matters to you.

It’s one thing to know we are going to die; we all have to deal with that, and we construct all kinds of myths and fables, all sorts of afterlives where we are rewarded with eternal bliss while people we don’t like are forever punished for doing the things we don’t think they should do. But to die, and then to become irrelevant? To die and to know that everything we dreamed of, did, or stood for was completely forgotten, and humanity just went along without us, not even caring that we existed at all? It’s reasonable, I think, for people to experience a sense of despair about that.

But, ah! What if this is the End of Days? What if the world will cease to be in our lifetimes? Now we will never experience that particular fate. Now we no longer have to deal with the idea that everything we know will fade away. There will be no more generations a thousand or ten thousand years hence to have forgotten us; we’re it.

Just think of all the advantages of living in the End Days. We don’t have to face the notion that not only ourselves, but our ideas, our values, our morality, our customs, our traditions, all will fade away and people will get along just fine without us.

And think of the glory! There is a certain reflected glory just in being a person who witnesses an epic thing, even if it’s only from the sidelines. Imagine being in the Afterlife, and having Socrates and Einstein and Buddha saying to us, “Wow, you were there when the Final Seal was broken? That’s so cool! Tell us what it was like?”

Human nature being what it is, there’s also that satisfaction that comes from watching all the world just burn down around you. That will teach them, all those smug bastards who disagreed with us and lived their lives differently from the way we did! As fucked-up as it may be, there’s comfort in that.

Most of us, I suspect, aren’t really equipped to deal with the notion that everything we believe is important will probably turn out not to be. If we were to find ourselves transported a thousand, two thousand, ten thousand years from now, assuming human beings still exist, the people we met would no doubt be very alien to us–as alien as Chicago would be to an ancient Sumerian.

They won’t speak our language, or anything like it; human languages rarely last more than six hundred years. Everything we know will be not only gone, but barely even recognized…if there’s anything left of, say, New York City, it will likely not be much beyond an archaeological dig and some dry scholarly papers full of conjecture and misinformation. For people who live believing in tradition and hierarchy and authority and continuity, the slow and steady evaporation of all those things is worse than the idea of death. Belief in the End Times is a powerful salve to all of that.

Given the transience of all human endeavor, it makes a certain kind of sense. The alternative, after all, is…what? Cynicism? Nihilism? If everything that we see, do, think, feel, believe, fight for, and sacrifice for is going to mean as much to future generations as the lives of the citizens of Ur four thousand years ago mean to us, what’s the point of any of it? Why believe in anything?

Which, I think, misses the point.

We live in a world of seven billion people, and in all that throng, each of us is unique. We have all spent billions of years not existing. We wake up in the light, alive and aware, for a brief time, and then we return to non-existence. But what matters is that we are alive. It’s not important if that matters a thousand years from now, any more than it matters that it wasn’t important a thousand years ago; it does matter to us, right here, right now. It matters because the things we believe and the things we do have the power to shape our happiness, right here, and if we cannot be happy, then what is the point of this brief flicker of existence?

Why should we fight or sacrifice for anything? Because this life is all we have, and these people we share this world with are our only companions. Why should we care about causes like, say, gay rights–causes which in a thousand years will mean as much as campaigns to allow women to appear on stage in Shakespeare’s time? Because these are the moments we have, and this is the only life that we have, and for one group of people to deprive another group of people of the opportunity to live it as best suits them harms all of us. If we are to share this world for this brief instant, if this is all we have, then mutual compassion is required to make this flicker of awareness worthwhile. This, ultimately, is the antidote to the never-ending stream of apocalyptic prophecy.

Some (More) Thoughts on Brain Modeling and the Coming Geek Rapture

The notion of “uploading”–analyzing a person’s brain and then modeling it, neuron by neuron, in a computer, thereby forever preserving that person’s knowledge and consciousness–is a fixture of transhumanist thought. In fact, self-described “futurists” like Ray Kurzweil will gladly expound at great length about how uploading and machine consciousness are right around the corner, and Any Day Now we will be able to live forever by copying ourselves into virtual worlds.

I’ve written extensively before about why I think that’s overly optimistic, and why Ray Kurzweil pisses me off. Our understanding of the brain is still remarkably poor–for example, we’re only just now learning how brain cells called “glial cells” are involved in the process of cognition–and even when we do understand the brain on a much deeper level, the tools for being able to map the connections between the cells in the brain are still a long way off.

In that particular post, I wrote that I still think brain modeling will happen; it’s just a long way off.

Now, however, I’m not sure it will ever happen at all.

I love cats.

Many people love cats, but I really love cats. It’s hard for me to see a cat when I’m out for a walk without wanting to make friends with it.

It’s possible that some of my love of cats isn’t an intrinsic part of my personality, in the sense that my personality may have been modified by a parasite commonly found in cats.

This is the parasite, in a color-enhanced scanning electron micrograph. Pretty, isn’t it? It’s called Toxoplasma gondii. It’s a single-celled organism that lives its life in two stages, growing to maturity inside the bodies of rats, and reproducing in the bodies of cats.

When a rat is infected, usually by coming into contact with cat droppings, the parasite grows but doesn’t reproduce. Its reproduction can only happen in a cat, which becomes infected when it eats an infected rat.

To help ensure its own survival, the parasite does something amazing. It controls the rat’s mind, exerting subtle changes to make the rat unafraid of cats. Healthy rats are terrified of cats; if they smell any sign of a cat, even a cat’s urine, they will leave an area and not come back. Infected rats lose that fear, which serves the parasite’s needs by making it more likely the rat will be eaten by a cat.

Humans can be infected by Toxoplasma gondii, but we’re a dead end for the parasite; it can’t reproduce in us.

It can, however, still work its mind-controlling magic. Infected humans show a range of behavioral changes, including becoming more generous and less bound by social mores and customs. They also appear to develop an affinity for cats.

There is a strong likelihood that I am a Toxoplasma gondii carrier. My parents have always owned cats, including outdoor cats quite likely to have been exposed to infected rats. So it is quite likely that my love for cats, and other, more subtle aspects of my personality (bunny ears, anyone?), have been shaped by the parasite.

So, here’s the first question: If some magical technology existed that could read the connections between all of my brain cells and copy them into a computer, would the resulting model act like me? If the model didn’t include the effects of Toxoplasma gondii infection, how different would that model be from who I am? Could you model me without modeling my parasites?

It gets worse.

The brain models we’ve built to date are all constructed from generic building blocks. We model neurons as though they are variations on a common theme, responding pretty much the same way. These models assume that the neurons in Alex’s head behave pretty much the same way as the neurons in Bill’s head.

To some extent, that’s true. But we’re learning that there can be subtle genetic differences in the way that neurons respond to different neurotransmitters, and these subtle differences can have very large effects on personality and behavior.
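To make the “generic building block” point concrete, here is a toy leaky integrate-and-fire neuron, the kind of simplified unit these brain models are typically built from. The `receptor_gain` parameter is my own hypothetical stand-in for a subtle, genetically determined difference in receptor response; all the numbers are arbitrary illustrations, not measured values.

```python
# Toy leaky integrate-and-fire neuron: a "generic building block" model.
# The receptor_gain parameter is a hypothetical stand-in for a genetic
# difference in receptor sensitivity; everything else about the two
# neurons below is identical.

def spike_count(receptor_gain, steps=1000, input_current=1.5,
                threshold=1.0, leak=0.1, dt=0.01):
    """Count spikes produced by a constant input over a fixed time."""
    v = 0.0
    spikes = 0
    for _ in range(steps):
        # Membrane potential rises with (scaled) input, decays with leak.
        v += dt * (receptor_gain * input_current - leak * v)
        if v >= threshold:
            spikes += 1
            v = 0.0  # reset after firing
    return spikes

# Two otherwise-identical neurons that differ only in receptor
# sensitivity produce noticeably different firing rates.
print(spike_count(receptor_gain=1.0))
print(spike_count(receptor_gain=1.2))
```

Even in this crude sketch, a small change in one receptor parameter shifts the cell’s output; a model that assumes every brain uses the same parameter values would get both neurons wrong for anyone whose receptors differ.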

Consider this protein. It’s a model of a protein called AVPR-1a, which is used in brain cells as a receptor for the neurotransmitter called vasopressin.

Vasopressin serves a wide variety of different functions. In the body, it regulates water retention and blood pressure. In the brain, it regulates pair-bonding, stress, aggression, and social interaction.

A growing body of research shows that human beings naturally carry slightly different forms of the gene that produce this particular receptor, and that these tiny genetic differences result in tiny structural differences in the receptor which produce quite significant differences in behavior. For example, one subtle difference in the gene that produces this receptor changes the way that men bond to partners after sex; carriers of this particular genetic variation are less likely to experience intense pair-bonding, less likely to marry, and more likely to divorce if they do marry.

A different variation in this same gene produces a different AVPR-1a receptor that is strongly linked to altruistic behavior; people with that particular variant are far more likely to be generous and altruistic, and the amount of altruism varies directly with the number of copies of a particular nucleotide sequence within the gene.

So let’s say that we model a brain, and the model we use is built around a statistical computation for brain activation based on the most common form of the AVPR-1a gene. If we model the brain of a person with a different form of this gene, will the model really represent her? Will it behave the way she does?

The evidence suggests that, no, it won’t. Because subtle genetic variations can have significant behavioral consequences, it is not sufficient to upload a person using a generic model. We have to extend the model all the way down to the molecular level, modeling tiny variations in a person’s receptor molecules, if we wish to truly upload a person into a computer.

And that gives rise to a whole new layer of thorny moral issues.

There is a growing body of evidence suggesting that autism spectrum disorders are the result of genetic differences in neuron receptors, too. The same PDF I linked to above cites several studies that show a strong connection between various autism-spectrum disorders and differences in receptors for another neurotransmitter, oxytocin.

Vasopressin and oxytocin work together in complex ways to regulate social behavior. Subtle changes in production, uptake, and response to either or both can produce large, high-level changes in behavior, and specifically in interpersonal behavior–arguably a significant part of what we call a person’s “personality.”

So let’s assume a magic brain-scanning device able to read a person’s brain state and a magic computer able to model a person’s brain. Let’s say that we put a person with Asperger’s or full-blown autism under our magic scanner.

What do we do? Do we build the model with “normal” vasopressin and oxytocin receptors, thereby producing a model that doesn’t exhibit autism-spectrum behavior? If we do that, have we actually modeled that person, or have we created an entirely new entity that is some facsimile of what that person might be like without autism? Is that the same person? Do we have a moral imperative to model a person being uploaded as closely as possible, or is it more moral to “cure” the autism in the model?

In the previous essay, I outlined why I think we’re still a very long way away from modeling a person in a computer–we lack the in-depth understanding of how the glial cells in the brain influence behavior and cognition, we lack the tools to be able to analyze and quantify the trillions of interconnections between neurons, and we lack the computational horsepower to be able to run such a simulation even if we could build it.

Those are technical objections. The issue of modeling a person all the way down to the level of genetic variation in neurotransmitter and receptor function, however, is something else.

Assuming we overcome the limitations of the first round of problems, we’re still left with the fact that there’s a lot more going on in the brain than generic, interchangeable neurons behaving in predictable ways. To actually copy a person, we need to be able to account for genetic differences in the structure of receptors in the brain…

…and even if we do that, we still haven’t accounted for the fact that organisms like Toxoplasma gondii can and do change the behavior of the brain to suit their own ends. (I would argue that a model of me that was faithful clear down to the molecular level probably wouldn’t be a very good copy if it didn’t include the effects the parasite has had on my personality–effects that we still have no way to quantify.)

Sorry, Mr. Kurzweil, we’re not there yet, and we’re not likely to be any time soon. Modeling a specific person’s brain in a computer is orders of magnitude harder than you think it is. At this point, I can’t even say with certainty that I think it will ever happen.

Some thoughts on parasites, ideology, and Malala Yousafzai

This is Malala Yousafzai. As most folks are by now aware, she is a 14-year-old Pakistani girl who was shot in the head by the Taliban for the crime of saying that girls should get an education. Her shooting prompted an enormous backlash worldwide, including–in no small measure of irony–among American politicians who belong to the same political party as legislators who say that children ought to be executed for disrespecting their parents.

I’ve been reading a lot lately about what seem to be two different and at least theoretically unrelated things: parasitology and ideology, specifically religious ideology. This might seem to have nothing to do with Malala Yousafzai’s shooting, but it really does.

When I say I’ve been reading about parasitology, what I mean by that is my Canadian sweetie has been reading to me about parasitology. Specifically, she’s been reading me a book called Parasite Rex, which makes the claim that much of evolutionary biology, including the development of sexual reproduction, is driven by parasites. It’s been a lot of fun; I never knew I’d enjoy being read to so much, even though the subject matter is sometimes kinda yucky.

What’s striking to me is that these things–religious ideology and parasitology–are in some ways the same thing in two different forms.

Parasites make their living by invading a host, then using the host’s resources to spread themselves. To this end, they do some amazing manipulation of the host. Some parasites, for instance, are able to alter a host’s behavior to promote their own spread. Sometimes it’s as crude as irritating the host’s throat to promote coughing which spreads hundreds of millions of virus particles. Other times, it’s as bizarre and subtle as influencing the host’s mind to change the way the host responds to fear, in order to make it more likely that the host will be eaten by a predator, which will then infect itself with the same parasite. In fact, parasitologists today are discovering that the study of life on Earth IS the study of parasites; parasites, more than any other single factor, may be the most significant determinant in the ratio of predator to prey biomass on this planet.

Religious ideology would seem to be a long way off from parasitism, unless you consider that ideas, like parasites, spread themselves by taking control of a host and modifying the host’s behavior so as to promote the spread of the idea.

This isn’t a new concept; Richard Dawkins coined the term ‘meme’ to describe self-replicating ideas decades ago.

But what’s striking to me is how direct the comparison is. The more I learn about parasites, the more I come to believe that parasites and memes aren’t allegories for each other; parasites ARE memes, and vice versa.

We tend to think of parasites like toxoplasma as being real things, and ideas like the salvation of Jesus Christ as being abstract concepts that don’t really exist the same way that real things do. But I don’t think that’s true.

Ideas exist in physical form. It might be as a series of symbols printed in a book or as a pattern of neural connections stored inside a brain, but no matter how you slice it, ideas have a physical existence. An idea that does not exist in any physical way, even as neuron connections wired into a person’s head, doesn’t exist.

Similarly, parasites are information, just like ideas are. A strand of DNA is nothing but an encoded piece of information, in the same sense that a series of magnetic spots on a hard disk are information. In fact, researchers have made devices that use DNA molecules to store computer information, treating banks of DNA as if they were hard drives.
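The claim that DNA is “just information” can be made concrete with a toy encoding: each nucleotide can carry two bits, so any sequence of bytes maps to a base sequence and back. (Real DNA-storage schemes add error correction and avoid problematic repeats; this sketch only illustrates the information-equivalence point.)

```python
# Toy illustration: digital data written as DNA bases and recovered,
# using a simple mapping of each 2-bit pair to one nucleotide.

BASES = "ACGT"  # index 0..3 corresponds to two bits

def bytes_to_dna(data: bytes) -> str:
    """Encode bytes as a string of A/C/G/T, four bases per byte."""
    out = []
    for byte in data:
        for shift in (6, 4, 2, 0):  # high-order bit pair first
            out.append(BASES[(byte >> shift) & 0b11])
    return "".join(out)

def dna_to_bytes(seq: str) -> bytes:
    """Decode a base sequence produced by bytes_to_dna."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for base in seq[i:i + 4]:
            byte = (byte << 2) | BASES.index(base)
        out.append(byte)
    return bytes(out)

message = b"idea"
encoded = bytes_to_dna(message)
print(encoded)  # 16 bases for 4 bytes
assert dna_to_bytes(encoded) == message
```

The round trip is lossless in both directions, which is exactly the sense in which a hard drive and a strand of DNA are the same kind of thing: substrates for encoded information.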

In a sense, ideas and organisms aren’t different things. They are the same thing written into the world in different ways. An idea that takes control of a host’s brain and modifies the host to promote the spread of the idea is like a parasite that takes control of a host and modifies it to spread the parasite. The fact that the idea exists as configurations of connections of neurons rather than as configurations of nucleotides isn’t as relevant as you might think.

We can treat ideas the same way we treat parasites or diseases. We can use the tools of epidemiology to track how ideas spread. We can map the virulence of ideas in exactly the same way that we map the virulence of diseases.
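As a toy illustration of that claim, here is the workhorse model of epidemiology, a simple SIR simulation, reinterpreted for idea-spread: “infected” means carrying the meme, “recovered” means having been exposed and become immune to it. The transmission and recovery rates are made-up numbers for illustration, not data.

```python
# Minimal SIR (susceptible / infected / recovered) simulation, read as
# idea-spread. The rates below are invented illustrative values.

def sir(population=10_000, infected=10, transmission=0.3, recovery=0.1,
        days=200):
    """Return (peak number infected, total ever infected) after `days`."""
    s, i, r = population - infected, float(infected), 0.0
    peak = i
    for _ in range(days):
        new_infections = transmission * s * i / population
        new_recoveries = recovery * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        peak = max(peak, i)
    return round(peak), round(r)

# A more "virulent" idea (higher transmission rate) peaks higher and
# ultimately reaches more of the population.
print(sir(transmission=0.3))
print(sir(transmission=0.5))
```

The same machinery epidemiologists use to ask “will this pathogen burn out or become an epidemic?” can, at least in principle, be pointed at a rumor, a religion, or a viral video.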

Religion is unquestionably a meme–a complex idea that is specifically designed to spread itself, sometimes at the host’s expense. A believer infected with a religious ideology who kills himself for his belief is no different than a moose infected with a parasite that dies as a result of the infection; the parasite in both cases has hijacked the host, and subverted the host’s own biological existence for its own end.

The more I see the amazing adaptations that parasites have made to help protect themselves and spread themselves, the more I’m struck by how memes, and especially religious memes, have made the same adaptations.

Some parasitic wasps, for example, will create multiple types of larvae in a host caterpillar–larvae that go on to become more wasps, and larvae that act as guardians, protecting the host from infection by other parasites by eating any new parasites that come along. Similarly, religious memes will protect themselves by preventing their host from infection by other memes; many successful religions teach that other religions are created by the devil and are therefore evil, and must be rejected.

We see the same patterns of host resistance to parasites and to memes, too. A host species exposed to the same parasites for many generations will tend to develop a resistance to the parasites, as individuals who are particularly vulnerable to the parasites are selected against and individuals particularly resistant to the parasites are selected for by natural selection. Similarly, a virulent religious meme that causes many of its hosts to die will gradually face resistance in its host population, as particularly susceptible individuals are killed and particularly resistant individuals gain a survival advantage.

Writers like Sam Harris and Michael Shermer talk about how people in a pluralistic society can not really accept and live by the tenets of, say, the Bible, no matter how Bible-believing they consider themselves to be. The Bible advocates slavery, and executing women for not being virgins on their wedding night, and destroying any town where prophets call upon the citizens to turn away from God; these are behaviors which you simply can’t do in an industrialized, pluralistic society. So the members of modern, industrialized societies–even the ones who call themselves “fundamentalists” and who say things like “the Bible is the literal word of God”–don’t really act as though they believe these things are true. They don’t execute their wives or sell their daughters into slavery. The memes are not as effective at modifying the hosts as they used to be; they have become less virulent.

But new or mutated memes, like new parasites, always have the chance of being particularly virulent. Their host populations have not developed resistance. In the Middle East, in places where an emergent strain of fundamentalist Islam leads to things like the Taliban shooting Malala Yousafzai, I think that’s what we’re seeing–a new, virulent meme. Islam itself is not new, of course, but to think that the modern strains of Islam are the same as the original is to think that the modern incarnations of Christianity are akin to the way Jesus actually lived; it’s about as far off the mark as thinking a bird is a dinosaur. They share a common heritage, but that’s all. They have evolved into very different organisms.

And this particular meme, this particular virulent strain of Islam, is canny enough to attack its host immune system directly. The Taliban targeted Malala Yousafzai because she favors education for women. Education, in many ways, provides an immunological response to memes; it is no accident that Tammy Faye Bakker famously said that it’s possible to educate yourself right out of a personal relationship with Jesus Christ. It’s no accident that Fundamentalism in all of its guises tends to be anti-intellectual and anti-education.

I’m not saying that the meme of religion (or any other meme) is inherently bad, of course. Memes have different strains; there are varieties of any large religion that are virulent and destructive to their host population, and other strains that are less virulent and more benign.

But with parasitic ideas as with parasitic biological entities, it is important to remember that the goal of the parasite is not necessarily the same as the goal of its host. Parasites attempt to spread themselves, often at the host’s expense. The parasite’s interests are not the host’s interests. Even a seemingly benign meme, such as a meme that says it is important to be nice to each other in order to gain an everlasting reward in heaven, might harm its host species if it siphons away resources to spread itself through churches that might otherwise have been used to, for example, research new cures for cancer. At the more extreme end, even such a benign meme might cause its adherents to say things like “We as a society don’t need to invest in new biomedical nanotechnology to promote human longevity, because we believe that we will live forever if we abide by the strictures of this meme and help to spread it through our works.”

Virulent memes tend to be anti-intellectual, because education is often a counter to their spread. Malala Yousafzai was targeted because she represents the development of an immune response to a virulent, destructive meme that is prevalent in the environment where she was born.

Skeptics and Misogyny and Privilege, Oh My

Since my blog post about the discussion about polyamory on the JREF forums, I’ve been poking around on the forums some more. Somehow, I managed to stumble across a thread relating to accusations of misogyny in the skeptical community, stemming from an episode at TAM last year.

TAM is an annual convention of skeptics and rationalists hosted every year by the James Randi Educational Foundation. It’s one of the largest such conventions in the country.

Apparently, a prominent blogger named Rebecca Watson was harassed at TAM last year. And the fallout from her complaint about it, which I somehow managed to miss almost entirely, is still going on.

I don’t read many skeptic or freethought blogs, which is probably how I missed the first go-round. A bit of scouting on Google, and a perusal of the JREF forum, shows an astonishing amount of anger, most of it of the “how dare this emotional woman tell us we’re misogynists!” variety. Which is more than a bit disappointing, when it isn’t downright rage-inducing.

In the interests of fairness, I have to say that I totally get why folks who identify as skeptics and rationalists might be especially resistant to suggestions that they are behaving inappropriately, especially with regards to sexism. A significant number of folks in the skeptics community identify as atheist. It takes quite a lot of effort for many people, especially people raised in a religious family, to break away from religious faith and embrace the ideas of rationalism and skepticism.

Once you do, there is a temptation to think of yourself as being more enlightened because of it. Things like racism and misogyny? Those are relics of patriarchal religious orthodoxy. I’m not a misogynist! I’m not a racist! I left that behind when I let go of religion. I don’t think that women are placed below men by some sort of divine pronouncement. I’m not the one trying to make women into second-class citizens. How can I be sexist?

I can remember going through a thought process something like this myself, back when I was a teenager in the process of giving up on the idea of religion.

Years later, when I was first introduced to the notion of invisible privilege and the ways that society creates a bubble of special advantages around men, it felt quite weird to grapple with the notion that I might be the beneficiary of misogyny, or even be guilty of misogynistic behavior myself, without even being aware of it.

So the reaction of folks in the skeptics community when confronted with inappropriate behavior at a conference might be understandable, though it’s still disappointing. And maybe I’m naive, but the level of vitriol coming from some parts of the skeptics community against Ms. Watson and her supporters is completely over the top…and appalling.

All that is kind of beside the point, though. Yes, it can be tough to recognize the invisible sea of advantages that we swim in, just as it might be hard for a fish to recognize that it’s wet.

But here’s the thing. It seems to me that anyone, regardless of whether or not he recognizes the many ways that society provides him with an invisible set of advantages that other people don’t have, who hears someone say “I feel threatened” or “I don’t feel safe here,” should start by listening.

I do believe that most of the folks in the skeptical community–indeed, most people in general–sincerely don’t want to be misogynistic (or racist or otherwise guilty of bias or oppression). And if someone claims to be a rationalist, it seems to me that if he is approached by someone else who says “I feel marginalized in this environment,” the desire to find out whether or not a problem actually exists, and to fix it if it does, should logically outweigh that little emotional voice that says “But that can’t possibly be true; I’m not like that!”

So at this point, I’d like to talk to all the guys reading my blog. Especially white guys, and most especially white guys who think that they aren’t sexist or racist. The rest of you can…I don’t know, cover your ears or something. Ready? Okay.

Listen. Guys. If you are at a conference or a sci-fi convention or something, and someone comes up to you and says “I don’t feel safe here,” you listen. And then you say “I’m sorry to hear that. This isn’t the sort of environment I want to create. What can I do to help fix the situation? What would it look like if this space were more welcoming to you? Have I participated in any way in making this space feel hostile to you, and if I have, what can I do to make it right?”

This is really, really simple. It’s called “being a decent human fucking being.”

Now, I know what you’re thinking. It’s probably some little thing that’s gotten way blown out of proportion, right? There’s not really a problem; this person is just being oversensitive. Right?

And that is one possibility, sure.

But seriously? Given the history of treatment of women and minorities in this society, and given how goddamn hard it is to be aware of the advantages you have over folks who aren’t as white or aren’t as male as you are, that probability is pretty goddamn remote. A lot more remote than you think it is.

Doesn’t matter, though. You aren’t going to find out if there’s merit or not if you don’t (a) listen and (b) consider the possibility that there’s some validity to the complaint.

And while we’re at it, let me tell you what you don’t do.

You don’t say “Well, I don’t see a problem here.” That just makes you look like an ass. If there’s a problem with sexism or racism and you’re a white dude, of course you’re not going to see the problem. Duh.

And you don’t say “That doesn’t sound like that big a deal to me.” That just makes you sound like an even bigger ass. If you haven’t had the experience of what it’s like facing constant systematic exclusion–and believe me, as a white dude, you probably haven’t, any more than I have–you’re not really in a position to tell whether or not it’s a big deal.

And seriously, if you say anything, and I do mean anything, along the lines of “All these feminists are just out to get men” or “You’re just being hypersensitive” or, God help you, “you must be on the rag,” you don’t sound like an ass, you ARE an ass. You’re part of the problem. Whether you think of yourself as biased or not, the simple fact that you can think along those lines kinda proves the point. That setting isn’t welcoming because you’re one of the people who is making it that way.

Look, I know it can be hard to acknowledge that you have been given advantages simply by virtue of who you are; I felt the same way. It’s a bit like trying to look at your own back.

But you’re a rationalist, right? C’mon, you can figure this out. Treat it like an intellectual puzzle; that is exactly what it is.

And in the meantime, put aside the emotional response–because that’s what it is, an emotional response, and listen.

Yes, it can be a little tricky to navigate this stuff. So in the interests of helping to promote better understanding for everyone, I’ve created a handy clip-and-fold guidebook that you can print out and carry in your wallet. Clicky on the picture for a PDF version!

Why We’re All Idiots: Credulity, Framing, and the Entrenchment Effect

The United States is unusual among First World nations in the sense that we only have two political parties.

Well, technically, I suppose we have more, but only two that matter: Democrats and Republicans. They are popularly portrayed in American mass media as “liberals” and “conservatives,” though that’s not really true; in world terms, they’re actually “moderate conservatives” and “reactionaries.” A serious liberal political party doesn’t exist; when you compare the Democratic and Republican parties, you see a lot of across-the-board agreement on things like drug prohibition (both parties largely agree that recreational drug use should be outlawed), the use of American military might abroad, and so on.

A lot of folks mistakenly believe that this means there’s no real differences between the two parties. This is nonsense, of course; there are significant differences, primarily in areas like religion (where the Democrats would, on a European scale, be called “conservatives” and the Republicans would be called “radicalists”); social issues like sex and relationships (where the Democrats tend to be moderates and the Republicans tend to be far right); and economic policy (where Democrats tend to be center-right and Republicans tend to be so far right they can’t tie their left shoe).

Wherever you find people talking about politics, you find people calling the members of the opposing side “idiots.” Each side believes the other to be made up of morons and fools…and, to be fair, each side is right. We’re all idiots, and there are powerful psychological factors that make us idiots.

The fact that we think of Democrats as “liberal” and Republicans as “conservative” illustrates one area where Republicans are quite different from Democrats: their ability to frame issues.

The American political landscape for the last three years has been dominated by a great deal of shouting and screaming over health care reform.

And the sentence you just read shows how important framing is. Because, you see, we haven’t actually been discussing health care reform at all.

Despite all the screaming, and all the blogging, and all the hysterical foaming on talk radio, and all the arguments online, almost nobody has actually read the legislation signed after much wailing and gnashing into law by President Obama.

And if you do read it, there’s one thing about it that may jump to your attention: It isn’t about health care at all. It barely even talks about health care per se. It’s actually about health insurance. It provides a new framework for health insurance legislation, it restricts health insurance companies’ ability to deny coverage on the basis of pre-existing conditions, it seeks to make insurance more affordable–in short, it is health insurance reform, not health care reform. The fact that everyone is talking about health care reform is a tribute to the power of framing.

In any discussion, the person who controls how the issue at question is shaped controls the debate. Control the framing and you can control how people think about it.

Talking about health care reform rather than health insurance reform leads to an image in people’s minds of the government going into a hospital operatory or a doctor’s exam room and telling the doctor what to do. Talking about health insurance reform gives rise to mental images of government beancounters arguing with health insurance beancounters about the proper way to notate an exemption to the requirements for filing a release of benefits form–a much less emotionally compelling image.

Simply by re-casting “health insurance reform” as “health care reform,” the Republicans created the emotional landscape on which the war would be fought. Middle-class working Americans would not swarm to the defense of the insurance industry and its über-rich executives. Recast it as government involvement between a doctor and a patient, however, and the tone changed.

Framing matters. Because people, by and large, vote their identity rather than their interests, if you can frame an issue in a way that appeals to a person’s sense of self, you can often get him to agree with you even if by agreeing with you he does harm to himself.

I know a woman who is an atheist, non-monogamous, bisexual single mom who supports gay marriage. In short, she hits just about every ticky-box in the list of things that “family values” Republicans hate. The current crop of Republican political candidates, all of them, have at one point or another voiced their opposition to each one of these things.

Yet she only votes Republican. Why? Because she says she believes, as the Republicans believe, that poor people should just get jobs instead of lazing about watching TV and sucking off hardworking taxpayers’ labor.

That’s the way we frame poverty in this country: poor people are poor because they are just too lazy to get a fucking job already.

That framing is extraordinarily powerful. It doesn’t matter that it has nothing to do with reality. According to the US Census Bureau, as of December 2011, 46,200,000 Americans (or 15.1% of the total population) lived in poverty. According to the US Department of Labor, 11.7% of the total US population had employment but were still poor. In other words, the vast majority of poor people have jobs–especially when you consider that some of the people included in the Census Bureau’s statistics are children, and therefore not part of the labor force.
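For what it’s worth, the arithmetic behind that claim takes only a couple of lines; the percentages are the ones quoted above.

```python
# Figures quoted above: poverty rate (US Census Bureau) and working-poor
# share of the total population (US Department of Labor), December 2011.
poverty_rate = 0.151   # share of the total US population below the poverty line
working_poor = 0.117   # share of the total US population employed but still poor

share_of_poor_with_jobs = working_poor / poverty_rate
print(f"roughly {share_of_poor_with_jobs:.0%} of people in poverty are employed")
```

Even before you account for the children and retirees who can’t be in the labor force at all, roughly three-quarters of the people under the poverty line show up as employed.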

Framing the issue of poverty as “lazy people who won’t get a job” helps deflect attention away from the real causes of poverty, and also serves as a technique to manipulate people into supporting positions and policies that act against their own interests.

But framing only works if you do it at the start. Revealing how someone has misleadingly framed a discussion after it has begun is not effective at changing people’s minds, because of a cognitive bias called the entrenchment effect.

A recurring image in US politics is the notion of the “welfare queen”–a hypothetical person, invariably black, who becomes wealthy by living on government subsidies. The popular notion has this black woman driving around the low-rent neighborhood in a Cadillac, which she bought by having dozens and dozens of babies so that she could receive welfare checks for each one.

The notion largely traces back to Ronald Reagan, who during his campaign in 1976 talked over and over (and over and over and over and over) about a woman in Chicago who used various aliases to get rich by scamming huge amounts of welfare payments from the government.

The problem is, this person didn’t exist. She was entirely, 100% fictional. The notion of a “welfare queen” doesn’t even make sense; having a lot of children but subsisting only on welfare doesn’t increase your standard of living, it lowers it. The extra benefits given to families with children do not entirely offset the costs of raising children.

Leaving aside the overt racism in the notion of the “welfare queen” (most welfare recipients are white, not black), a person who thinks of welfare recipients this way probably won’t change his mind no matter what the facts are. We all like to believe ourselves to be rational; we believe we have adopted our ideas because we’ve considered the available information rationally, and that if evidence that contradicts our ideas is presented, we will evaluate it rationally. But nothing could be further from the truth.

In 2006, two researchers at the University of Michigan, Brendan Nyhan and Jason Reifler, did a study in which they showed people phony studies or articles supporting something that the subjects believed. They then told the subjects that the articles were phony, and provided the subjects with evidence that showed that their beliefs were actually false.

The result: The subjects became even more convinced that their beliefs were true. In fact, the stronger the evidence, the more insistently the subjects clung to their false beliefs.

This effect, which is now referred to as the “entrenchment effect” or the “backfire effect,” is very common among people in general. A person who holds a belief who is shown hard physical evidence that the belief is false comes away with an even stronger belief that it is true. The stronger the evidence, the more firmly the person holds on.

The entrenchment effect is a form of “motivated reasoning.” Generally speaking, what happens is that a person who is confronted with a piece of evidence showing that his beliefs are wrong will respond by mentally going through all the reasons he started holding that belief in the first place. The stronger the evidence, the more the person repeats his original line of reasoning. The more the person rehearses the original reasoning that led him to the incorrect belief, the more he believes it to be true.

This is especially true if the belief has some emotional vibrancy. There is a part of the brain called the amygdala which is, among other things, a kind of “emotional memory center.” That’s a bit oversimplified, but essentially true; when you recall a memory that has an emotional charge, the amygdala mediates your recall of the emotion that goes along with the memory; you feel that emotion again. When you rehearse the reasons you first subscribed to your belief, you re-experience the emotions again–reinforcing it and making it feel more compelling.

This isn’t just a right/left thing, either.

Say, for example, you’re afraid of nuclear power. A lot of people, particularly self-identified liberals, are. If you are presented with evidence that shows that nuclear power, in terms of human deaths per terawatt-hour of power produced, is by far the safest of all forms of power generation, it is unlikely to change your mind about the dangers of nuclear power one bit.

The most dangerous form of power generation is coal. In addition to killing tens of thousands of people a year, mostly because of air pollution, coal also releases quite a lot of radiation into the environment. Coal beds contain trace amounts of radioactive uranium and thorium; coal plants consume so much coal–huge freight trains of it–that some of this material goes out the smokestack, and the rest stays concentrated in the millions of tons of fly ash left over from burning. By some estimates, that fly ash carries more radioactivity into the surrounding environment than a normally operating nuclear plant does. So many people die directly or indirectly as a result of coal-fired power generation that if we had a Chernobyl-sized meltdown every four years, it would STILL kill fewer people than coal.

If you’re afraid of nuclear power, that argument didn’t make a dent in your beliefs. You mentally went back over the reasons you’re afraid of nuclear power, and your amygdala reactivated your fear…which in turn prevented you from seriously considering the idea that nuclear might not be as dangerous as you feel it is.

If you’re afraid of socialism, then arguments about health reform won’t affect you. It won’t matter to you that health care reform is actually health insurance reform, or that the supposed “liberal” health care reform law was actually mostly written by Republicans (many of the health insurance reforms in the Federal package are modeled on similar laws signed by none other than Mitt Romney; the provisions expanding health coverage for children were written by Republican senator Orrin Hatch (R-Utah); and the expansion of the Medicare drug program was written by Republican Representative Dennis Hastert (R-Illinois)), or that it’s about as Socialist as Goldman Sachs (the law does not nationalize hospitals, make doctors into government employees, or in any other way socialize the health care infrastructure). You will see this information, you will think about the things that originally led you to see the Republican health-insurance reform law as “socialized Obamacare,” and you’ll remember your emotional reaction while you do it.

Same goes for just about any argument with an emotional component–gun control, abortion, you name it.

This is why folks on both sides of the political divide think of one another as “idiots.” That person who opposes nuclear power? Obviously an idiot; only an idiot could so blindly ignore hard, solid evidence about the safety of nuclear power compared to any other form of power generation. Those people who hate Obamacare? Clearly they’re morons; how else could they so easily hang onto such nonsense as to think it was written by Democrats with the purpose of socializing medicine?

Clever framing allows us to be led to beliefs that we would otherwise not hold; once there, the entrenchment effect keeps us there. In that way, we are all idiots. Yes, even me. And you.

Science Literacy: Of Pickles and Probability


For immediate release: Scientists at the Min Planck Institute announced today that placing a pickle on your nose can improve telekinetic ability.

According to the researchers, they performed a study in which a volunteer was asked to place a pickle on her nose and then flip a coin to see whether or not the pickle would help her flip heads. The volunteer flipped the coin, which came up heads.

“This is a crowning achievement for our research,” the study’s authors said. “Our results show that having a pickle on your nose allows you to determine the outcome of a coin-toss.”

Let’s say you’re browsing the Internet one day, and you come across this report. Now, you’d probably think that there was something hinky about this experiment, right? We know intuitively that the odds of a coin toss coming up heads are about 50/50, so if someone puts a pickle on her nose and flips a coin, that doesn’t actually prove a damn thing. But we might not know exactly how that applies to studies that don’t involve flipping coins.

So let’s talk about our friend p. This is p.

p represents the probability that a scientific study’s results are a fluke. Formally, it’s the probability that results at least as extreme as the ones observed could occur even if the null hypothesis is true. In English, that basically means it represents how likely it is to get results like these even if whatever the study is trying to show doesn’t actually exist at all–in which case the study’s results don’t mean a damn thing.

Every experiment (or at least every experiment seeking to show a relationship between things) has a p value. In the nose-pickle experiment, the p value is 0.5, which means there is a 50% chance that the subject would flip heads even if there’s no connection between the pickle on her nose and the results of the experiment.

There’s a p value associated with any experiment. For example, if someone wanted to show that watching Richard Simmons on television caused birth defects, he might take two groups of pregnant ring-tailed lemurs and put them in front of two different TV sets, one of them showing Richard Simmons reruns and one of them showing reruns of Law & Order, to see if any of the lemurs had pups that were missing legs or had eyes in unlikely places or something.

But here’s the thing. There’s always a chance that a lemur pup will be born with a birth defect. It happens randomly.

So if one of the lemurs watching Richard Simmons had a pup with two tails, and the other group of lemurs had normal pups, that wouldn’t necessarily mean that watching Mr. Simmons caused birth defects. The p value of this experiment is related to the probability that one out of however many lemurs you have will randomly have a pup with a birth defect. As the number of lemurs gets bigger, the probability of one of them having a weird pup gets bigger. The experiment needs to account for that, and the researchers who interpret the results need to factor that into the analysis.

If you want to be able to evaluate whether or not some study that supposedly shows something or other is rubbish, you need to think about p. Most of the time, p is expressed as a “less than or equal to” thing, as in “This study’s p value is <= 0.005”. That means “We don’t know exactly what the p value is, but we know it can’t be greater than one half of one percent.”

A p value of 0.005 is pretty good; it means there’s only a 0.5% chance of getting results like these through dumb luck if nothing real is going on. Obviously, the larger the p value, the more skeptical you should be of a study. A p value of 0.5, like with our pickle experiment, shows that the experiment is pretty much worthless.

There are a lot of ways to make an experiment’s p value smaller. With the pickle experiment, we could simply do more than one trial. As the number of coin tosses goes up, the odds of a particular result go down. If our subject flips a coin twice, the odds of getting heads twice in a row are 1 in 4, which gives us a p value of 0.25–still high enough that any reasonable person would call rubbish on a positive trial. More coin tosses give successively smaller p values; the p value of our simple experiment is given (roughly) by 1/2^n, where n is the number of times we flip the coin.
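The 1/2^n arithmetic is easy to play with; a couple of lines show how fast p shrinks as the flips pile up (this is nothing more than the formula from the paragraph above):

```python
# p value for the pickle experiment with n coin flips: the chance of
# flipping n heads in a row if the pickle does nothing at all.
def pickle_p_value(n):
    return 0.5 ** n

for n in (1, 2, 8):
    print(f"n = {n}: p = {pickle_p_value(n)}")
# n = 1: p = 0.5        (the original, worthless experiment)
# n = 2: p = 0.25
# n = 8: p = 0.00390625
```

Eight straight heads gets you down to about 0.004–the neighborhood of the p values that real published studies report.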

There’s more than just the p value to consider when evaluating a scientific study, of course. The study still needs to be properly constructed and controlled. Proper control groups are important for eliminating confirmation bias, which is a very powerful tendency for human beings to see what they expect to see and to remember evidence that supports their preconceptions while forgetting evidence which does not. And, naturally, the methodology has to be carefully implemented too. A lot goes into making a good experiment.

And even if the experiment is good, there’s more to deciding whether or not its conclusions are valid than looking at its p value. Most experiments are considered pretty good if they have a p value of .005, which means there’s a 1 in 200 chance that the results could be attributed to pure random chance.

That sounds like it’s a fairly good certainty, but consider this: That’s about the same as the odds of flipping heads on a coin 8 times in a row.

Now, if you were to flip a coin eight times, you’d probably be surprised if it landed on heads every single time.

But, if you were to flip a coin eight thousand times, it would be surprising if you didn’t get eight heads in a row somewhere in there.
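That intuition is easy to check with a quick simulation. The 8,000 flips and the run of 8 are the numbers from the text; the number of simulated sequences is arbitrary.

```python
import random

def has_run_of_heads(flips, run=8):
    # Scan the sequence for `run` consecutive heads.
    streak = 0
    for heads in flips:
        streak = streak + 1 if heads else 0
        if streak >= run:
            return True
    return False

random.seed(1)
trials = 200
hits = sum(
    has_run_of_heads([random.random() < 0.5 for _ in range(8000)])
    for _ in range(trials)
)
print(f"a run of 8 heads appeared in {hits} of {trials} sequences of 8000 flips")
```

The chance of a sequence of 8,000 fair flips containing no run of eight heads is astronomically small, so essentially every simulated sequence contains one.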

One of the hallmarks of science is replicability. If something is true, it should be true no matter how many people run the experiment. Whenever an experiment is done, it’s never taken as gospel until other people also do it. (Well, to be fair, it’s never taken as gospel period; any scientific observation is only as good as the next data.)

So that means that experiments get repeated a lot. And when you do something a lot, sometimes, statistical anomalies come in. If you flip a coin enough times, you’re going to get eight heads in a row, sooner or later. If you do an experiment enough times, you’re going to get weird results, sooner or later.

So a low p value doesn’t necessarily mean that the results of an experiment are valid. In order to figure out if they’re valid or not, you need to replicate the experiment, and you need to look at ALL the results of ALL the trials. And if you see something weird, you need to be able to answer the question “Is this weird because something weird is actually going on, or is this weird because if you toss a coin enough times you’ll sometimes see weird runs?”

That’s where something called Bayesian analysis comes in handy.

I’m not going to get too much into it, because Bayesian analysis could easily make a post (or a book) of its own. In this context, the purpose of Bayesian analysis is to ask the question “Given the probability of something, and given how many times I’ve seen it, can what I’m seeing be put down to random chance without actually meaning squat?”

For example, if you flip a coin 50 times and you get a mix of 30 heads and 20 tails, Bayesian analysis can answer the question “Is this just a random statistical fluke, or is this coin weighted?”
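Here’s a minimal sketch of that kind of reasoning in Python. The specific numbers are assumptions of mine: I compare a fair coin (heads probability 0.5) against one hypothetical weighted coin (heads probability 0.65), treat both hypotheses as equally likely up front, and apply Bayes’ theorem to the observed 30 heads in 50 flips:

```python
from math import comb

heads, flips = 30, 50

def likelihood(p):
    """Binomial probability of seeing exactly `heads` heads in
    `flips` flips if the coin's heads probability is p."""
    return comb(flips, heads) * p**heads * (1 - p)**(flips - heads)

# Assumed prior: before seeing any flips, fair and weighted are 50/50.
prior_fair = prior_weighted = 0.5

l_fair = likelihood(0.5)       # how well "fair" explains the data
l_weighted = likelihood(0.65)  # how well "weighted" explains the data

# Bayes' theorem: posterior is prior times likelihood, normalized.
posterior_weighted = (l_weighted * prior_weighted) / (
    l_fair * prior_fair + l_weighted * prior_weighted)

print(f"P(weighted | 30 heads in 50 flips) = {posterior_weighted:.2f}")
```

With these assumptions the posterior lands only modestly above 50% — 30 heads out of 50 is suggestive, but nowhere near proof that the coin is rigged, which is the whole point: the analysis weighs the evidence against how surprising it would be under each hypothesis.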

When you evaluate a scientific study or a clinical trial, you can’t just take a single experiment in isolation, look at its p value, and decide that the results must be true. You also have to look at other similar experiments, examine their results, and see whether or not what you’re looking at is just a random artifact.

I ran into a real-world example of how this can fuck you up a while ago, when someone on a forum I belong to posted a link to an experiment purporting to show that feeding genetically modified corn to mice will cause health problems in their offspring. The results were (and still are) all over the Internet; fear of genetically modified food is quite rampant among some folks, especially on the political left.

The experiment had a p value of .005 or less, meaning that if the null hypothesis is true (that is, there is no link between genetically modified corn and the health of mice), we could expect to see this result about one time in 200.

So it sounds like the result is pretty trustworthy…until you consider that literally thousands of similar experiments have been done, and they have shown no connection between genetically modified corn and ill health in test mice.

If an experiment’s p value is .005, and you do the experiment a thousand times, it’s not unexpected that you’d get 5 or 6 “positive” results even if the null hypothesis is true. This is part of the reason that replicability is important to science–no matter how low your p value may be, the results of a single experiment can never be conclusive.
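That arithmetic is easy to demonstrate. When the null hypothesis is true, p values are uniformly distributed between 0 and 1, so a quick simulation (a sketch, using the thousand-experiment figure from the example above) shows the expected handful of false positives:

```python
import random

random.seed(1)

ALPHA = 0.005          # the "1 in 200" significance threshold
NUM_EXPERIMENTS = 1000

# Simulate 1,000 experiments in a world where the null hypothesis is TRUE:
# each experiment's p value is a uniform random draw, which is exactly how
# p values behave when there is no real effect to find.
false_positives = sum(
    1 for _ in range(NUM_EXPERIMENTS) if random.random() < ALPHA)

print(f"{false_positives} 'significant' results out of {NUM_EXPERIMENTS} "
      f"null experiments")
```

On average you get about five “significant” results out of a thousand experiments in which nothing real is happening — and those five are the ones that make headlines.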

A Taxonomy of Crackpot Ideas

Some time ago, when the anti-science, anti-evolution, religious literalist movie “Expelled” was making the rounds, it occurred to me that a strict 6-day, young-earth creationist idea of the world requires a particular confluence of perceptual filters in order to exist. There has to be an unquestioned acceptance of literalist religious dogma, a profound ignorance of some of the basic tenets of science, and a willingness to believe in a vast, orchestrated conspiracy on the part of all the world’s geologists, biologists, archaeologists, geneticists, and anthropologists in order for this notion to seem reasonable.

I’ve been chewing on that thought for a while, and looking at the perceptive filters that have to be in place to accept any number of implausible ideas, from moon hoaxers to lizard people conspiracy theories to anti-vaccinationism.

And, since making charts is something I do, I plotted some of these ideas in a Venn diagram that shows an overlapping set of prerequisites for a number of different flavors of nuttiness.


How to Tell when Something Isn’t Science

The process of science–the systematic, evidence-based, rigorous, controlled exploration of the processes of the natural world–has produced an explosion of knowledge and understanding. Since the Abbasid-era Islamic Golden Age and the Italian Renaissance, both of which saw enormous gains in scientific thinking and with them huge leaps in technology and understanding, science has been the beacon of light shining in the darkness of superstition and ignorance.

So it’s probably not too surprising that many folks who seek to embrace all sorts of non-scientific ideas try to claim that their ideas are science. Calling these ideas “science” gives them a stamp of validation. If an idea is scientific, that means it has greater legitimacy in many people’s minds.

And the world needs to cut that shit out. Not all ideas are science, yet everything from phrenology to metaphysics to “crystal energy” tries to clamber onto the scientific bandwagon.

Most recently, the cry of the pseudoscientist has become “Quantum mechanics says!” Folks who can’t actually define what quantum mechanics is are nevertheless eager to fill New Age bookstores with books that claim to “prove” that quantum mechanics validates their ideas.

So here’s a handy-dandy, more-than-pocket-sized guide that will help you tell what science actually is and is not. Ready? Here we go!

RULE 1: If it doesn’t make a precisely defined, testable, falsifiable claim, it is not science.

This is the first and most basic premise of this whole “science” business. If someone claims “Science shows us that” or “Quantum mechanics proves that” and the next thing out of their mouth isn’t a testable, falsifiable claim, then what they’re saying is probably bollocks.
