Movie Review: The Hunger Ga^w^w^w Prometheus

Traditionally, Friday is Date Night between zaiah and me. There are certain rituals and traditions we have associated with Date Night, some of which I shan’t go into here, as I fear they may upset those of you with more…delicate sensibilities. One of those traditions which I feel it is safe to discuss is the tradition of watching a movie on Friday.

Usually, this means Netflix, as watching a first-run movie every week would require taking out another mortgage on the house. Occasionally, this means going to the cheap theater for a second-run movie and pizza, which a couple of weeks back is where we saw the Americanized version of The Girl with the Dragon Tattoo, a remake of a Swedish film that was less than half as abominable as I had any right to expect. (As a matter of fact, it was quite good, and even improved on the original in a couple of minor ways…and the original is one of my favorite movies, and a movie I have seen many times.)

This week, we decided to see a first-run movie; namely, The Hunger Games. We made this decision based on two criteria: first, it looks for all the world to be a perfect film to enjoy after an afternoon of extraordinarily kinky sex; and second, zaiah’s daughter has been bugging us to see it, on the grounds that the book version is her favorite story of the moment and she wanted to gush enthusiastically about it without worrying about spoilers.

When we went into the theaters, gentle readers, I will confess I had no idea what to expect. I’d vaguely heard of the film, in the sense that I knew its title, but nothing else about it at all.

And then it happened.

They showed us trailers in front of the movie.

For Prometheus.

Which is Ridley Scott’s prequel to Alien.

Which has had a larger impact on my life than any other movie ever committed to film.

Because I saw it one month and two days after turning ten years old.


The movie Alien has been a fixture in my life from a very young age. What I mean by that is that the movie Alien has given me nightmares for approximately two-thirds of my entire life.

I am not quite sure what my parents were thinking, to be honest. In most other regards, they raised me pretty well, and I will thank you in the back there to stop that snickering. However, what on earth would possess otherwise fine, decent, upstanding, tax-paying, non-serial-killer-being grown adults to take a ten-year-old boy to see the movie Alien is quite beyond your humble chronicler. I say without the slightest trace of exaggeration or hyperbole that it gave me nightmares for more than thirty years, and yes, that does date me.

Seriously. No shit. That movie gave me nightmares for Thirty. Fucking. Years. I can recall one particular occasion, back when I was still working pre-press in Tampa, when a buddy of mine and I were alone in the building and I was tasked with running some film through the automated processor. This basically means carrying a large canister into a room that is pitch black save for the softly glowing readouts on the displays of the automated film processing equipment. And on this particular night, a wandering opossum, I shit you not, fell through the ceiling with quite a loud crash.

It took my coworker and me a couple of hours to catch it. Most of the pursuit was very Keystone Kops, truth be told–the two of us running around through the film strippers’ territory with a big plastic trash can…you don’t want to know. But the bit when it fell through the ceiling? The nightmares had been going into remission then. After that, they came back with redoubled vigor.

Where was I? Oh, yes. Prometheus.

I had planned to write a review of The Hunger Games. Instead, I am going to write a review of Prometheus.

Now, I can hear your questions already. “The movie comes out in June,” you say. “This is only April. You clearly haven’t seen it. How can you write a review of it?”

To that I say, “pish-posh.” It makes no difference if I write the review after I’ve seen it, for I will be just as qualified then as I am now, considering that I am likely to have my hands in front of my face for the entire thing. And yes, Gentle Readers, I am going to see it when it comes out.

So, without further ado…

On with the review!

Cute Female Scientist: Look! We’ve discovered something interesting in these abandoned ruins! Many ancient civilizations on Earth have drawn the same pictograph, even though they had no contact with each other. And look, it’s a star map!

The Weyland-Yutani Corporation: We would be happy to sponsor an expedition to see what’s up with that.

Audience: Oh, fuuuuuuuck. This isn’t going to end well.

Ridley Scott: It’s a motherfucking Alien prequel. What, you expected My Little Pony?

Sinister Weyland-Yutani dude: I work for the company. But don’t let that fool you. I’m really an okay guy.

Cute Female Scientist: Let’s go!

Captain: We’re here!

They find some REALLY CREEPY STUFF.

Other scientist dude: Man, this is some really creepy stuff.

Yet another scientist dude: I’m getting life signs down there.

Some guy who’s totally insane: Let’s go investigate!

Something REALLY BAD HAPPENS.

Crew of the Prometheus: Something really bad has happened. We need medical attention here.

Something REALLY REALLY REALLY BAD HAPPENS.

The movie GOES BLACK, as I put my HANDS in front of my EYES and curl up into a FETAL POSITION.

Someone on the CREW starts SCREAMING HORRIBLY and DIES.

One of the scientists: Oh, fuuuuu–

Something REALLY REALLY REALLY REALLY BAD HAPPENS.

One of the crew: Wait! I have an idea that could keep this from turning any worse than it already has, and might even save some of us!

The SINISTER WEYLAND-YUTANI DUDE does something UNSPEAKABLE.

Surviving crewmembers: Oh fuuuuuu—

Something EVEN WORSE happens.

People do HEROIC THINGS. It DOESN’T HELP.

Me: Oh fuuuuu–

I have NIGHTMARES for THIRTY MORE YEARS.

Ridley Scott: Pwn3d j00!

By the way, The Hunger Games rocks. Go see it.

Personhood Theory: A Primer

Quite some time ago, I wrote a blog post about the notion of inalienable rights, in which I mentioned the concept of personhood theory, an ethical structure that provides a framework for deciding what is and is not a “person.”

The idea of inalienable rights isn’t necessarily the same as the idea of personhood, though in most moral systems they’re certainly related. Most of us at least recognize the term “human rights,” and tend to think of them as being good things, and something separate from, say, animal rights.

Now, I will grant that the notion of human rights, if history is any example, is more of a pretty sound-bite than anything we as a species actually take seriously.

To quote from one of my favorite George Carlin skits: “Now, if you think you do have rights, one last assignment for you. Next time you’re at the computer, get on the Internet, go to Wikipedia. When you get to Wikipedia, in the search field for Wikipedia, I want you to type in ‘Japanese Americans 1942,’ and you’ll find out all about your precious fuckin’ rights, okay? …Just when these American citizens needed their rights the most, their government took ’em away. And rights aren’t rights if someone can take ’em away. They’re privileges. That’s all we’ve ever had in this country, a bill of temporary privileges.”

So it is with some skepticism, leavened with a dash of cynicism, that I talk about the notion of “rights” at all.

However, the fact that we tend not to be very good at respecting things like “human rights” doesn’t mean the idea has no value. In fact, quite the opposite; I think that the notion there are certain things which one simply should not be permitted to do to others, and certain things which all of us ought to be able to expect that we can do, is not only valuable but also absolutely essential–not just in an ethical sense, but in a practical sense too. I believe quite strongly that respecting the idea of “human rights” is not just a moral imperative; it has immediate, utilitarian benefits to the societies which respect them, and the more a society respects these ideas, the better (in many tangible ways) that society becomes.

But that’s a bit off the point. What I actually want to talk about is personhood theory specifically, rather than the idea of rights in general.


In the US these days, the idea of “personhood” has become conflated with the abortion debate. The Religious Right has been advocating the notion of “personhood” as a way to promote an anti-abortion agenda, so when I’ve talked about “personhood theory” in the last few months, a lot of folks have assumed that what I’m talking about is abortion.

Personhood theory as an ethical framework isn’t (directly) related to abortion at all. As an ethical principle, the idea behind personhood theory is pretty straightforward: “Personhood,” and with it all the rights that we now call “human rights,” belongs to any sapient entity.

Put most simply, that means that a hypothetical intelligent alien organism, a hypothetical “strong” AI, a person whose consciousness has been transferred into a computer, or an animal that has been modified to be sapient would all qualify as “people” and would be entitled to the rights and responsibilities of people, just like you or I.

Now, there is one potential pickle in this definition, of course, and that’s in the notion of sapience.

It’s impossible to prove that a computer, or an uploaded person, or even your neighbor down the street is sapient. We can apply the Turing test to a computer to see if it can converse fluently and flexibly enough to be indistinguishable from a human being, but that presupposes that artificial intelligence would be similar to natural intelligence, which isn’t necessarily so. We can test generalized problem-solving capability, though it’s possible to imagine that what looks to be intelligent problem-solving is actually brute-force, blind pattern matching done very quickly, of the kind that a computer chess-playing program does.

But ultimately, it may not really matter that we can’t ever come up with a way to step into the subjective experience of an alien or an uplifted animal or a computer and say that it is sapient, because we can’t do that with a person, either.

I can’t be absolutely, 100% certain that I am not the only person in the world with self-awareness and a rich subjective internal experience. It might be that my neighbor and the clerk at the convenience store down the street and the cute blond lesbian with facial piercings who used to work in the sandwich shop near me are actually “philosophical zombies,” utterly devoid of any internal experience, repeating words and phrases, paying taxes, doing their jobs only through some kind of incredibly complex clockwork. It doesn’t matter because when I make ethical decisions, the negative effects of assuming everyone else to be an empty clockwork shell, should I be wrong, are much more profound than the ethical consequences if I assume that they are aware, living people and I am wrong. The ethical principle of least harm demands that if they seem to be people, I treat them as people. The alternative is sociopathy.

The same moral logic applies to uploaded people and smart computers. No, I can not objectively prove that they are self-aware entities instead of fabulous automatons, so basic ethics demand that if they appear to be self-aware entities, I treat them as I would treat self-aware entities.

All this is, I believe, a pretty straightforward idea. But the concept of personhood theory often runs off the rails when people, particularly socially or religiously conservative people, talk about it, for reasons that I find very, very interesting.


The arch-conservative, Creation “Science” Discovery Institute says of personhood theory, “In this new view on life, each human being doesn’t have moral worth simply and merely because he or she is human, but rather, we each have to earn our rights by possessing sufficient mental capacities to be considered a person. Personhood theory provides moral justification to oppress and exploit the most vulnerable human beings.”

An article in SFGate that takes a similar approach says, “Relying on personhood instead of humanhood as the fundamental basis for determining moral worth threatens the lives and well-being of the most defenseless and vulnerable humans among us. Here’s why: In personhood theory, taking life is only wrong if the being killed was a “person” who wanted to remain alive. […] Basing public policy on such theories leads to very dark places. Some bioethicists justify the killing of Alzheimer’s patients and infants born with disabilities. Others suggest that people in comas can be killed and their organs harvested if their families consent, or used in medical experiments in place of animals.”

Self-described ethicist Wesley J. Smith, who has worked with the Discovery Institute, claims that personhood theory is nothing more than an attempt to legalize infanticide: “‘After-Birth Abortion’ is merely the latest example of bioethical argument wielded as the sharp point of the spear in an all-out philosophical war waged among the intelligentsia against Judeo/Christian morality based in human exceptionalism and adherence to universal human rights. In place of intrinsic human dignity as the foundation for our culture and laws, advocates of the new bioethical order want moral value to be measured individual-by-individual — whether animal or human — and moment-by-moment. Under this view, we each must earn full moral status by currently possessing capacities sufficient to be deemed a ‘person.'”

Now, I will admit that when I first heard of some of these objections to personhood theory, I was absolutely gobsmacked. It seemed beyond all reason to misinterpret and misrepresent what, to me, seemed like such a simple idea in such a profound way.

But the more I thought about it, the more it made sense that people would interpret personhood theory in such a bizarre, backwards way…because the principle simply does not fit into their conceptual worldview. They interpret the idea incorrectly because their frame of reference doesn’t permit them to view it as it was intended.


The gist of personhood theory is expansive. It expands the conventional definition of “person” beyond “human,” to include a number of hypothetical non-human entities, should they ever exist. Personhood theory says “It’s not just human beings who are persons; anything which is sapient is a person, too.”

The objections to personhood theory see it as a constrictive or limiting framework. This suggests to me that these objections betray a worldview in which human beings are the only things which are persons, so any definition of the word “person” that is not “a human being” must necessarily limit personhood to only a subset of human beings.

It is trivially demonstrable (even if we can not objectively state with absolute certainty that something is sapient) that all of us at some time or another are not sapient. A human being who is under general anesthesia would fail any test for sapience, or indeed awareness of any sort. A sleeping person is less sentient than an awake dog. I myself am rarely sapient before 9 AM under the best of circumstances. (It is beyond the scope of this discussion to ponder whether a person who is in an irreversible coma or whose mind has been destroyed by Alzheimer’s still has the same rights as any other person; whether or not things like euthanasia are ethical is irrelevant to the concept of personhood theory as I am discussing it.)

Personhood theory, at least in its original formulation, clearly applies only to classes of entities, not to individuals within a class. So for example, human beings are sapient, regardless of the fact that each of us experiences transient non-sapience from time to time; ergo, human beings are people. Strong AIs, if they ever exist, would (by definition) be sapient, even if individual AIs themselves were to be disabled or shut down or whatever; therefore, strong AIs are people.

Personhood theory as a construct works on a general, not an individual, level. No transhumanist or bioethicist who talks about personhood theory proposes that it can be used to justify shooting sleeping people on the basis that they aren’t sapient and are therefore not really people; such an interpretation is, on the face of it, absurd. (I will leave it as an exercise to the reader as to whether or not it’s more absurd than the notion that dinosaurs lived in the Garden of Eden and were present on Noah’s ark.)

Rather, transhumanists and bioethicists who talk about personhood theory–at least in my experience–use it as a way to construct some sort of system for deciding who else gets “human” rights in addition to human beings, with the obvious candidates being the ones I’ve mentioned.

There is, though I hate to say this, particular irony in Wesley Smith’s talk of “Judeo/Christian morality based in human exceptionalism and adherence to universal human rights,” considering the Judeo/Christian track record on such issues as slavery. “Universal human rights,” in the Judeo/Christian literature, are anything but universal. The cynic in me is reluctant to place the application of universal rights to anyone, much less non-human entities, in the care of conservative guardians of Judeo/Christian morality.

It took quite a long time for people to figure out that human beings with a different color of skin were people; the Southern Baptist Convention was doctrinally white supremacist until after WWII, and the Mormon church was doctrinally white supremacist until 1978. To this very day, the Discovery Institute seeks to deny “universal human rights” to gays and lesbians, using one of the most bizarre chains of logic I’ve ever witnessed outside of questions about how we know dinosaurs and human beings shared the same space at the same time.

I frankly do not envy the first uploaded person or the first true AI. Any non-human sapience will, if history is any guide, have a rough time being treated as anything other than property. The people who object to personhood theory because they see it as a constriction rather than an expansion of the idea of personhood are, I think, quite literally incapable of recognizing the personhood of something like an AI; it exists so far outside their worldview that the argument doesn’t even seem to make sense to them.

And in a world where strong AI exists, I fear for what that means for us, and what that says about our abilities as moral entities.

A Taxonomy of Fallacies

As anyone who reads this blog regularly knows, I’m a big fan of Venn diagrams. Lately, I’ve been thinking quite a lot about cognitive errors, errors in reasoning, and logical fallacies, for reasons which only coincidentally happen to coincide with the political primary season–far be it from me to suggest that the one might be in any way whatsoever connected to the other.

Anyway, I’ve put together a simple taxonomy of common fallacies. This is not, of course, an exhaustive list of fallacies; compiling such a list would surely try the patience of the most saintly. It is, however, intended to show the overlap of argumentative fallacies (arguments which by their nature and structure are invalid), logical fallacies (errors in logical reasoning), and cognitive biases (errors of human reason and our general cognitive processes).

As usual, you can clicky on the picture to embiggen it.

A quick and dirty overview of the various fallacies on this chart:

Ad Hominem: A personal attack on the person making an argument. “You’re such a moron! Only an idiot would think something like that.”

Loaded Question: An argument which presupposes its own answer, presupposes one of its own premises, or presupposes some unsupported assumption in the way it’s phrased. “Have you stopped beating your wife yet?”

Appeal Tu Quoque: Tu quoque literally means “you also.” It’s an argument that attempts to discredit an argument not on the basis of how valid the argument is, but on the basis of some perceived inconsistency or hypocrisy in the person making it. “You say that a vegetarian diet is more healthy than a diet that is rich in red meats, but I’ve seen you eat steak so you clearly don’t even believe your own argument. Why should I?”

Guilt By Association: Also called the “association fallacy,” this is an argument which asserts that an association exists between two things which means they belong to the same class. It can be made to discredit an argument by attacking the person making it (“Bob says that we should not eat meat; the Radical Animal Liberation Terror Front supports Bob’s argument; therefore, Bob’s argument is invalid”) or to create an association to support an assertion that can not be supported on its own merits (“John is black; I was mugged by a black person; therefore, John can not be trusted”).

Straw Man: An argumentative technique that ignores a person’s actual argument and instead rebuts a much weaker argument that seems related to the original argument in some way (“Bob thinks we should treat animals with respect; the idea that animals are exactly the same as people is clearly nonsense”).

False Analogy: An argumentative technique that creates an analogy between two unrelated things and then uses the analogy to attempt to make an assertion (“The government is like a business. Since the function of a business is to make money, the government should not enact policies that do not generate revenue”).

Cherry Picking: A tactic which presents only information that supports an argument, even if other information doesn’t support it, or even if the information which is presented is shown out of context to make it appear to support the argument (“Vaccination causes autism. Andrew Wakefield published one paper that shows vaccination causes autism, so it must be so–even though hundreds of other experiments and published papers show no connection, and Wakefield’s paper was determined to be fraudulent and retracted”).

Just World Fallacy: The tendency to believe that the world must be just, so that when bad things happen the people who they happen to must have done something wrong to bring them about, and when good things happen, the person who they happened to must have earned them. It’s both a cognitive bias (we tend to see the world this way on an emotional level even if we consciously know better) and an argumentative tactic (for example, a defense attorney defending a rapist might say that the victim was doing something wrong by being out at night in a short dress, and therefore brought the attack upon herself). Part of what makes this so cognitively powerful is the illusion of control it brings about; when we believe that bad things happen because the people they happened to were doing something wrong, we can reassure ourselves that as long as we don’t do anything wrong, those things won’t happen to us.

Appeal to Probability: An argumentative tactic in which a person argues that because something could happen, that means it will happen. Effective in large part because the human brain is remarkably poor at understanding probability. “I might win the lottery; therefore, I simply need to play often enough and I am sure to win, which will solve all my money problems.”

Fallacy of False Dichotomy: Also called the “fallacy of false choice” or the “fallacy of false dilemma,” this is an argumentative fallacy that sets up the false premise that there are only two possibilities which need to be considered when in fact there are more. “Either we cut spending on education or we rack up a huge budget deficit. We don’t want a deficit, so we have to cut spending on education.”

Fallacy of Exclusive Premises: Also called the “fallacy of illicit negative,” this is a logical and argumentative fallacy that starts with two negative premises and attempts to draw an affirmative conclusion: “No registered Democrats are registered Independents. No registered Independents vote in a closed primary. Therefore, no registered Democrats vote in a closed primary.”

Appeal to Ignorance: Also called the “argument from ignorance,” this is a rhetorical device which asserts that an argument must be true because it hasn’t been proven to be false, or that it must be false because it hasn’t been proven to be true (“we can’t prove that there is life in the universe other than on our own planet, so it must be true that life exists only on earth”). Many arguments for the existence of a god or of supernatural forces take this form.

Affirming the Consequent: A logical fallacy which asserts that a premise must be true if a consequence of the premise is true. Formally, it takes the form “If P, then Q; Q; therefore P” (for example, “All dogs have fleas; this animal has fleas; therefore, this animal is a dog”).

Denying the Antecedent: A logical fallacy that asserts that a consequent must be false because its antecedent is false. Formally, it takes the form “If P, then Q; not P; therefore, not Q.” For example: “If there is a fire in this room, there must be oxygen in the air. There is no fire in this room. Therefore, there is no oxygen in the air.”
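For readers who enjoy seeing the machinery, both of these conditional fallacies can be shown invalid by brute force: an argument form is valid only if the conclusion is true in every row of the truth table where all the premises are true. Here’s a minimal Python sketch (the helper names are my own invention, not any standard library):

```python
from itertools import product

def valid(premises, conclusion):
    # A form is valid if the conclusion holds in every truth assignment
    # where all the premises hold.
    return all(conclusion(p, q)
               for p, q in product([True, False], repeat=2)
               if all(prem(p, q) for prem in premises))

implies = lambda a, b: (not a) or b  # "If P, then Q"

# Affirming the consequent: If P, then Q; Q; therefore P.
print(valid([implies, lambda p, q: q], lambda p, q: p))         # False
# Denying the antecedent: If P, then Q; not P; therefore not Q.
print(valid([implies, lambda p, q: not p], lambda p, q: not q)) # False
# For comparison, modus ponens (a valid form): If P, then Q; P; therefore Q.
print(valid([implies, lambda p, q: p], lambda p, q: q))         # True
```

The counterexample the checker finds for both fallacies is the same row: P false, Q true, which satisfies “If P, then Q” just fine.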

Affirming the Disjunct: Sometimes called the “fallacy of false exclusion,” this logical fallacy asserts that if one thing or another thing might be true, and the first one is true, that must mean the second one is false. For example, “Bob could be a police officer or Bob could be a liar. Bob is a police officer; therefore, Bob is not a liar.” The fallacy asserts that exactly one or the other must be true; it ignores the fact that they might both be true or they might both be false. (Note that in Boolean logic, there is an operator called “exclusive or” or “XOR” which does mean that either one thing or the other, but not both, could be true; this is not related to the logical fallacy of affirming the disjunct.)
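Since Boolean logic came up anyway, here’s a tiny sketch of the inclusive/exclusive distinction, using the hypothetical Bob from the example above:

```python
# Inclusive "or": the whole statement is true even when BOTH disjuncts are true.
police_officer = True
liar = True  # nothing stops both from being true at once

inclusive_or = police_officer or liar   # True
exclusive_or = police_officer != liar   # Python's boolean XOR: False when both hold

# The fallacy treats the inclusive "or" as if it were exclusive,
# inferring "not a liar" from "is a police officer."
print(inclusive_or, exclusive_or)  # True False
```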

Fallacy of Illicit Affirmative: This is the flip side of the fallacy of exclusive premises. It draws a negative conclusion from two affirmative premises. “All true Americans are patriots; some patriots are willing to fight for their country; therefore, there must be some true Americans who aren’t willing to fight for their country.”

Fallacy of Undistributed Middle: A logical fallacy that asserts that all X are Y; something is a Y; therefore, that something is an X. For example, “All Southern Baptists are Christians; Bob is a Christian; therefore, Bob is a Southern Baptist.” This fallacy ignores the fact that “all X are Y” does not imply that all Y must be X.
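A one-line counterexample makes the invalidity concrete. Sketched in Python sets, with membership entirely made up for illustration:

```python
# "All X are Y" means X is a subset of Y -- it says nothing about the rest of Y.
southern_baptists = {"Alice"}
christians = {"Alice", "Bob"}  # all Southern Baptists are Christians

assert southern_baptists <= christians        # the premise holds
bob_is_christian = "Bob" in christians        # True
bob_is_southern_baptist = "Bob" in southern_baptists  # False: the inference fails
print(bob_is_christian, bob_is_southern_baptist)  # True False
```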

Base Rate Fallacy: A logical fallacy that involves failing to apply general information about some statistical probability (the “base rate” of something being true) to a specific example or case. For example, given information which says that HIV is three times more prevalent among homosexuals than heterosexuals, and given the information that homosexuals make up 10% of the population, most people who are told “Bob has HIV” will erroneously conclude that it is quite likely that Bob is gay, because they will consider only the fact that gays are more likely to have HIV but will not consider the “base rate” that gays make up a relatively small percentage of the population. This fallacy is extremely easy to make because of the fact that the human brain is so poor at understanding statistics and probability.
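The arithmetic behind that example is just Bayes’ theorem, and working it through shows how badly intuition misfires. A sketch using the numbers from the paragraph above (the absolute prevalence p is a made-up placeholder; it cancels out of the final ratio):

```python
# Bayes' theorem applied to the base-rate example above.
p = 0.01                       # hypothetical HIV prevalence among heterosexuals
p_gay = 0.10                   # base rate: fraction of the population that is gay
p_hiv_given_gay = 3 * p        # "three times more prevalent"
p_hiv_given_straight = p

# P(gay | HIV) = P(HIV | gay) * P(gay) / P(HIV)
numerator = p_hiv_given_gay * p_gay
denominator = numerator + p_hiv_given_straight * (1 - p_gay)
p_gay_given_hiv = numerator / denominator

print(round(p_gay_given_hiv, 2))  # 0.25
```

So even with a threefold difference in prevalence, Bob is three times more likely to be straight than gay, because the 90% base rate dominates.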

Post Hoc Ergo Propter Hoc: This is Latin for “after this, therefore because of this.” Sometimes called the “fallacy of false cause,” it’s a logical fallacy which asserts that if one thing happens and then something else happens, the first thing caused the second thing to happen (“My child had a measles vaccine; my child was diagnosed with autism; therefore, the vaccine caused the autism”). Our brains are highly tuned to find patterns and to seek causation, to the point where we often see it even when it does not exist.

Regression Bias: This is a fallacy that’s closely related to the post hoc, ergo propter hoc fallacy in that it ascribes a false cause to an event. In this particular case, things which normally fluctuate statistically tend to return to a mean; a person may see cause in that regression to the mean even where none exists. For example, “Bob had an amazing string of successes when he was playing basketball. Then he appeared on the cover of Sports Illustrated. Afterward, his performance was more mediocre. Therefore, appearing on the cover of the magazine must have caused him to perform more poorly.” Since even good athletes will generally return to their baseline after particularly exceptional (or particularly poor) performance, appearing on the cover of the magazine is likely to be unconnected with the athlete’s performance regressing to that athlete’s normal baseline.
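This one is easy to demonstrate with a toy simulation. A hedged sketch, assuming performance is nothing but a fixed skill level plus random noise (every number here is invented):

```python
import random

random.seed(42)
skill = 50.0  # hypothetical baseline ability

def game():
    # One game's score: baseline skill plus normally distributed noise.
    return skill + random.gauss(0, 10)

# 200 simulated 20-game seasons.
seasons = [[game() for _ in range(20)] for _ in range(200)]

streak_peaks, later_averages = [], []
for season in seasons:
    first_half, second_half = season[:10], season[10:]
    if max(first_half) > 70:  # an "amazing streak" earns a magazine cover
        streak_peaks.append(max(first_half))
        later_averages.append(sum(second_half) / len(second_half))

# The peaks that earned the cover sit well above the later averages -- no jinx
# involved, just performance regressing to the 50-point baseline.
print(sum(streak_peaks) / len(streak_peaks) >
      sum(later_averages) / len(later_averages))
```

Every “cover-worthy” peak was selected precisely because it was an outlier, so the subsequent games almost inevitably look worse by comparison.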

Argumentum Ad Nauseam: A rhetorical strategy in which a person continues to repeat something as true over and over again, even after it has been shown to be false. Some radio commentators are particularly prone to doing this: “Sandra Fluke wants the taxpayers to pay for contraception. She argues that it is the responsibility of the taxpayer to pay for her contraception. Sandra Fluke believes that contraception should be paid for by the taxpayer.”

Argument from Scripture: An argument which states that if some element in a source being cited is true, then the entire source must be true. This fallacy does not apply exclusively to holy texts or Biblical scriptures, though it is very often committed in religious arguments.

Begging the Question: Similar to the loaded question fallacy, this is an argument in which some argument assumes its own premise. Formally, it is an argument in which the conclusion which the argument claims to demonstrate is part of the premise of the argument. “We know that God exists because we see in nature examples of God’s design.” The premise of this argument assumes that nature is designed by God, which is the conclusion that the argument claims to support.

Circular Argument: This argumentative tactic is related to begging the question, and a lot of folks consider the two to be the same thing, but they differ slightly: the fallacy of begging the question contains the conclusion of an argument as one of its premises, whereas circular reasoning uses argument A to prove argument B, and then, having “proven” argument B to be true, uses argument B to prove argument A.

Appeal to Emotion, Force, or Threat: An argumentative tactic in which, rather than supplying evidence to show that an argument is correct, the person making the argument attempts to manipulate the audience’s emotions (“You must find Bob guilty of this murder. If you do not find him guilty, then you will set a dangerous murderer free to prey on your children”).

False Attribution: An argument in which a person attempts to make a position sound more credible either by attributing it to a well-known or respected source, or using a well-known and respected source’s comments out of context so as to create a false impression that that source supports the argument. As Abraham Lincoln said, more than 90% of the quotes used to support arguments on the Internet can’t be trusted!

Association Fallacy: A generalized form of the fallacy of guilt by association, an association fallacy is any argument that makes any assertion that some irrelevant similarity between two things demonstrates that those two things are related. “Bob is good at crossword puzzles. Bob also likes puns. Therefore, we can expect that Jane, who is also good at crossword puzzles, must like puns too.” Because our brains are efficient at categorizing things into groups, we are often prone to believing that categorizations are valid even when they are not.

Vividness Fallacy: Also called the “fallacy of misleading vividness,” this is the tendency to believe that especially vivid, dramatic, or exceptional events are more relevant or more statistically common than they actually are, and to pay special attention or attach special weight to such vivid, dramatic events when evaluating arguments. A common rhetorical strategy is to use vivid examples to create the impression that something is commonplace when it is not: “In New Jersey, a Viet Nam veteran was assaulted in a bar. In Vermont, an Iraqi vet was mugged at knifepoint. American citizens hate veterans!” It is effective because of a cognitive bias called the “availability heuristic,” which causes us to misjudge the statistical importance of an event if we can think of examples of that event.

Entrenchment effect: Also called the “backfire effect,” this is the tendency of people, when presented with evidence that disproves something they think is true, to form an even greater attachment to the idea that it must be true. I’ve written an essay about framing and entrenchment here.

Sunk Cost Fallacy: An error in reasoning or argument which holds that if a certain investment has been made in some course of action, then the proper thing to do is continue on that course of action so as not to waste that investment, even in the face of evidence that shows that course of action to be unlikely to succeed. In rhetoric, people will often make arguments to support a tenuous position on the basis of sunk cost rather than on the merits of the position; “We should continue to invest in this weapons project even though the engineers say it is unlikely to work because we have already spent billions of dollars on it, and you don’t want that money to be wasted, do you?” These arguments often succeed because people form emotional attachments to a position in which they feel they have made some investment that is completely detached from the value of the position itself.

Appeal to Authority: Also known as the argument from authority, this is an argument that claims that something must be true on the basis that a person who is generally respected or revered says it is true, rather than on the strength of the arguments supporting that thing. As social animals, we tend to give disproportionate weight to arguments which come from sources we like, respect, or admire.

Black Swan Effect: Also called the black swan fallacy, this is the tendency to discount or discredit information or evidence which falls outside a person’s particular range of experience or knowledge. It can take the form of “I have never seen an example of X; therefore, X does not exist;” or it can take a more subtle form (called the “confirmation fallacy”) in which a statement is held to be true because no counterexamples have been demonstrated (“I believe that black swans do not exist. Here is a swan. It is white. Here is another swan. It is also white. I have examined millions of swans, and they have all been white; with all these examples that support the idea that black swans do not exist, it must be a very reliable statement!”).

Confirmation Bias: The tendency to notice, remember, and/or give particular weight to things that fit our pre-existing beliefs; and to not notice, not remember, and/or not give weight to anything that contradicts our pre-existing beliefs. The more strongly we believe something, the more we notice and the more clearly we remember things which support that belief, and the less we notice things which contradict that belief. This is one of the most powerful of all cognitive biases.

Attention Bias: A cognitive bias in which we tend to pay particular attention to things which have some sort of emotional or cognitive resonance, and to ignore data which are relevant but which don’t have that resonance. For example, people may make decisions based on information which causes them to feel fear but ignore information that does not provoke an emotional response; a person who believes “Muslims are terrorists” may become hyperaware of perceived threatening behavior from someone he knows to be Muslim, especially when that perception reinforces his belief that Muslims are terrorists, and ignore evidence which indicates that that person is not a threat.

Choice Supportive Bias: The tendency, when remembering a choice or explaining why one has made a choice, to believe that the choice is better than it actually was, or to believe that the other options are worse than they actually were. For example, when choosing one of two job offers, a person may describe the job she chose as being clearly superior to the job she did not accept, even when both job offers were essentially identical.

Expectation Bias: Also sometimes called “experimenter’s bias,” this is the tendency of people to put greater trust or credence in experimental results which confirm their expectations than in results which don’t match the expectations. It also shows in the tendency of people to accept without question evidence which is offered up that tends to support their ideas, but to question, challenge, doubt, or dismiss evidence which contradicts their beliefs or expectations.

Pareidolia: The tendency to see patterns, such as faces or words, in random stimuli. Examples include people who claim to see the face of Jesus in a piece of toast, or who hear Satanic messages in music albums that are played backwards.

Rhyming Effect: The tendency of people to find statements more credible if they rhyme than if they don’t. Yes, this is a real, demonstrated cognitive bias. “If the glove don’t fit, you must acquit!”

Framing effect: The tendency to evaluate evidence or to make choices differently depending on how it is framed. I’ve written an essay about framing and entrenchment here.

Ambiguity Effect: The tendency of people to choose a course of action in which they know the exact probability of a positive outcome over a course of action in which the exact probability is not known, even if the unknown probability is likely to be about the same, or if the possible positive outcome is better. There’s an interactive demonstration of this effect here.

Fortunetelling: The tendency to make predictions about the outcome of a choice, and then assume that the prediction is true, and use the prediction as a premise in arguments to support that choice.

Things that make you go “hmm”: Drink French

Last time zaiah and I went downtown to visit her girlfriend, we came upon this billboard, which I had to take a picture of.

I’m not entirely sure what the marketing angle of this advertising campaign is. It seems like they’re comparing the decadence of their French champagne to the decadence of French women, who (at least here in the States) have a reputation for casual hedonism. So if you drink their champagne you’ll, I don’t know, experience casual hedonism too, or show how uninhibited you are, or something. I’m not quite sure.

But, see, here’s the problem: In the US at least, the combination of alcohol and passive-looking women creates an unfortunate, date-rapey subtext. I don’t think that subtext is intentional in this billboard–at least I hope it’s not–but it’s also a bit hard to believe that whoever designed it was quite so tone-deaf as to miss it.

Am I missing something here?

I Love Sex and I Vote

A short while ago, I blogged that I was dusting off my “I Love Sex and I Vote” bumper sticker in honor of Rush Limbaugh and the Religious Right’s appalling, savage attack on women.

I made the bumper sticker back in the early days of the George W. Bush Presidency, convinced that the right-wing’s attack on women generally and sex specifically was about as bad as it could get. I’m sorry to report that I was very, very wrong on that count.

Since I re-blogged the bumper sticker and the “I Love Sex and I Vote” icons that go with it, I’ve had a request to make the icons available in different sizes. I’ve also had several readers email or message me to ask if I would be willing to make more than just a bumper sticker with the logo on it.

So I’m pleased to announce I’ve set up a CafePress shop with all sorts of things featuring this logo, in addition to the bumper sticker. I’ve also put up the original (100 by 100 pixel and 80 by 80 pixel) icons, as well as two new icons with a blue border, in a lot more sizes.

I think it’s important this election cycle to take a visible stand that slut-shaming is not an acceptable part of our society. It’s time to push back against people who believe that it’s okay to shame people, especially women, for the “crime” of enjoying sex.

Feel free to use the icons below for whatever you’d like.


350 x 350 pixels


250 x 250 pixels


200 x 200 pixels


150 x 150 pixels

          
100 x 100 pixels

          
80 x 80 pixels

Boston Chapter 12: Pachyderms Gone Wild

I didn’t expect the elephants.

The day with figmentj was quite lovely, featuring strolls through Boston Common and antique naval mines and Christian hair metal and frightening clocks with plastic women in chains and all manner of other parts of the general Boston experience. We ended it by watching the sun set over the river by Harvard Square, the peaceful tranquility of the flame-touched sky broken only by the screams of the boating-slaves rowing along the river under the gentle exhortation of scourge and lash:

But all such things must come to an end. Eventually, it was time to head back to Claire’s to help her get settled in to her new digs with all her various and sundry possessions, including her rather fetching Viking helmet, before sailing through the air in a magic metal tube back to Portland.

I am sad to say, Gentle Readers, that once again I failed to seize the opportunity to photograph Viking kazoo porn. I did, however, assist in the moving of many heavy mass-bearing objects up many flights of stairs, which, while undoubtedly not as great an accomplishment as photographing Viking kazoo porn, is an accomplishment nonetheless.

The next morning, Claire had to be on campus early for a meeting of some sort or another, where her indoctrination into the ways of life at Tufts University would begin. I had resolved to seize the morning and make the best possible use of it by remaining asleep while she got up to head to campus. I had resolved to head there myself, at a much more reasonable hour, as I am generally a much more reasonable man than those who profane the early hours with wakefulness, and to meet Claire there when her indoctrination session had concluded.

I didn’t, as I have mentioned, expect the elephants.

Tufts University is not, by the strictest definition, a clown college. Indeed, it excels in the pursuit of many intellectual endeavors unrelated to clowns. Its history, however, is tightly woven with that of the clown, as one of its earliest benefactors (and a source of much of its early funding) was none other than P. T. Barnum of “There’s a sucker born every minute” fame. He made his considerable living, though presumably not his endowments, on the back of that maxim.

In 1889, Barnum, when asked by the university’s trustees what he could do to help support the institution in a time of great need, donated the stuffed and mounted hide of one of his most beloved circus animals, Jumbo the Elephant, to the school. There is a parallel to this tale in the Islamic story of the archangel Gabriel, who gifted the patriarch Abraham with an enormous stone called the Caaba which later became the object toward which Muslims now bow when they pray, when (as Ambrose Bierce observed) the patriarch had perhaps asked for a loaf of bread.

Just as the Caaba has exerted a magnetic influence on the Islamic tradition ever since, so has Jumbo the elephant had a similar effect on Tufts University, which might have taken its name from the small tuft of coarse fur that adorns the end of an elephant’s tail. The stuffed and mounted elephant was subsequently destroyed in a fire, or so the legend goes, and every bit of it was lost except for the tuft at the end of the tail. The symbolic meaning of this will be left as an exercise to the reader.

There are elephants everywhere on campus. Not actual elephants, you understand, but their representations, in paintings and sculptures and mixed media and all manner of other forms too frightful to mention. The pachyderm permeates the collective subconscious at Tufts like gay porn permeates that of an Evangelical minister; and, as with that selfsame Evangelical minister, it leaks out everywhere.

I discovered a strange building during my exploration of the university grounds: a vast building, filled with miles and miles of shelves, along which were stored bound volumes of printed pages. It was kind of like an iPad, if you took all the books in it and, for some reason, printed them out. Prominent in that building was this picture of the Elephant Rampant, trumpeting its victory cry after seeing its enemies scattered before it like leaves in the wind, their broken bodies staining the ground red with their blood and the tears of the widows:

I also witnessed this man, perhaps convicted of some terrible crime, fleeing the Hall of Justice with its symbolic sculpture of an elephant triumphant as its keystone, ahead of the swift and brutal Justicars who will in a few minutes’ time come from these very same doors to run him down, for the education and amusement of the rest of the campus:

The position of the Justicars in the annals of Tufts history is long and interesting. The university has a long-standing rivalry with its ancient enemy, Bowdoin College in Maine. This rivalry in times long past was fought at a low level, with skirmishers from each college occasionally leaving the walls and fences of their respective universities to stage raids on their adversary’s stronghold under cover of night.

However, in the black days of May in 1897, a large and powerful force ventured from Bowdoin on the path of war. They arrived in the afternoon of May 23rd, and the three days that followed are too grim for the telling. In the aftermath, the Tufts ruling council created an elite group of warriors, the Justicars, whose elephant-shaped masks of cold iron and moon-forged silver soon struck fear into the hearts of all who stood against the university.

Today, the battles between Tufts and Bowdoin are fought in ritual combat on the fields of the gladiators, and the Justicars serve a different function.

Down the hill a bit, one can find these sculptures, erected as reminders of the ever-watchful vigilance and benevolent care of the Great Pachyderm:

And further down still, near the university’s main temple, the idol of the Great Jumbo stands proudly. Offerings of the people adorn his side, written in the secret ancient script.

My journey to the Tufts campus was not without some small difficulties. Claire’s new home is located a short walk from the grounds. I am not, as those of you who have read some of my previous adventures know, gifted with a good sense of direction, or even a sense of direction at all. It is only through conscientious memorization of routes and landmarks that I can find my way from the kitchen to the bedroom.

So I will confess a measure of consternation with the prospect of finding my way unaided from her new home to the university. Fortunately, through an accident of geography, Tufts University is located at the pinnacle of the only hill in the neighborhood. “Keep going up,” she said. “As long as you’re going up, you can’t miss it.”

Her advice was outstanding. With the mountain itself as my guide, I am pleased to report, Gentle Readers, that I became lost only twice, and did not walk more than half a mile out of my way.

Close to the edge of the campus, one finds this sprawling mansion, the home of the President of the University.

The low hedge you see ringing the house’s grounds is all that remains of the high, impregnable walls that once stood on that very spot, erected in haste after the attempted coup of 1979.

In that year, the grad students, who spend long and dangerous days in the eternal darkness of the mines beneath the campus, working day and night with pickaxe and shovel mining nuggets of knowledge in dangerous conditions for little pay, and whose labors made the university’s High Council and its president rich beyond dreams of avarice, rebelled. They seized the entrance to the mines and laid siege to the residence of the President himself before the rebellion was crushed.

Today, the relationship between the ruling class and the university’s laborers is much less tense. The laborers have negotiated contracts with the ruling elite that grant them some small measure of hope of upward social mobility, while the elite have largely been able to retain their fabulous wealth and their odalisques, who calm the fears of the aristocracy with the gentle applications of the feminine arts.

It was in bemused reflection of those arts that Claire later found me, wandering the campus without purpose or goal. We dined upon local delicacies called “burritos” and “quesadillas” that afternoon, before returning to her new home to engage in the rearrangement of things and stuff into a more suitable configuration.

The next day found us once more in her car, the vehicle whose stalwart service had seen us across an entire continent, through deep canyons and soaring mountains, past strange landscapes teeming with Mormons and the Guatemalans who’d carried off one of our own, this time on a shorter journey to the airport.

I have flown, through strange happenstance of fate and circumstance, many times in the past couple of years. Each time, I have flown on Delta Air Lines, and each time, my luggage has come out somewhat the worse for wear. On my trip back from Europe, documented many posts ago, the handle was broken–somewhere, if I recall correctly, between Amsterdam and London. On this trip, I arrived in Portland to find a large gash ripped in the front of the suitcase.

It was not until my next journey, to London and King’s Lynn for a debauched celebration of birthday hedonism, that my faithful suitcase would receive a terminal blow. That is a story for another time.

Some thoughts on post-scarcity societies

One of my favorite writers at the moment is Iain M. Banks. Under that name, he writes science fiction set in a post-scarcity society called the Culture, where he deals with political intrigue and moral issues and technology and society on a scale that almost nobody else has ever tried. (In fact, his novel Use of Weapons is my all-time favorite book, and I’ve written about it at great length here.) Under the name Iain Banks, he writes grim and often depressing novels not related to science fiction, and wins lots of awards.

The Culture novels are interesting to me because they are imagination writ large. Conventional science fiction, whether it’s the cyberpunk dystopia of William Gibson or the bland, banal sterility of (God help us) Star Trek, imagines a world that’s quite recognizable to us….or at least to those of us who are white 20th-century Westerners. (It’s always bugged me that the alien races in Star Trek are not really very alien at all; they are more like conventional middle-class white Americans than even, say, Japanese society is, and way less alien than the Serra do Sol tribe of the Amazon basin.) They imagine a future that’s pretty much the same as the present, only more so; “Bones” McCoy, a physician, talks about how death at the ripe old age of 80 is part of Nature’s plan, as he rides around in a spaceship made by welding plates of steel together.


Image from Wikimedia Commons by Hill – Giuseppe Gerbino

In the Culture, by way of contrast, everything is made by atomic-level nanotech assembly processes. Macroengineering exists on a huge scale, so huge that the vast majority of the Culture’s citizens live on orbitals–artificially constructed habitats encircling a star. (One could live on a planet, of course, in much the way that a modern person could live in a cave if she wanted to; but why?) The largest spacecraft, General Systems Vehicles, have populations that range from the tens of millions to six billion or more. Virtually limitless sources of energy (something I’m planning to blog about later) and virtually unlimited technical ability to make just about anything from raw atoms mean that there is no such thing as scarcity; whatever any person needs, that person can have, immediately and for free. And the definition of “person” goes much further, too; whereas in the Star Trek universe, people are still struggling with the idea that a sentient android might be a person, in the Culture, personhood theory (something else about which I plan to write) is the bedrock upon which all other moral and ethical systems are built. Many of the Culture’s citizens are drones or Minds–non-biological computers, of a sort, that range from about as smart as a human to millions of times smarter. Calling them “computers” really is an injustice; it’s about on par with calling a modern supercomputer a string of counting beads. Spacecraft and orbitals are controlled by vast Minds far in advance of unaugmented human intellect.

I had a dream, a while ago, about the Enterprise from Star Trek encountering a General Systems Vehicle, and the hilarity that ensued when they spoke to each other: “Why, hello, Captain Kirk of the Enterprise! I am the GSV Total Internal Reflection of the Culture. You came here in that? How…remarkably courageous of you!”

And speaking of humans…

The biological people in the Culture are the products of advanced technology just as much as the Minds are. They have been altered in many ways; their immune systems are far more resilient, they have much greater conscious control over their bodies; they have almost unlimited life expectancies; they are almost entirely free of disease and aging. Against this backdrop, the stories of the Culture take place.

Banks has written a quick overview of the Culture, and its technological and moral roots, here. A lot of the Culture novels are, in a sense, morality plays; Banks uses the idea of a post-scarcity society to examine everything from bioethics to social structures to moral values.


In the Culture novels, much of the society is depicted as pretty Utopian. Why wouldn’t it be? There’s no scarcity, no starvation, no lack of resources or space. Because of that, there’s little need for conflict; there’s neither land nor resources to fight over. There’s very little need for struggle of any kind; anyone who wants nothing but idle luxury can have it.

For that reason, most of the Culture novels concern themselves with Contact, that part of the Culture which is involved with alien, non-Culture civilizations; and especially with Special Circumstances, that part of Contact whose dealings with other civilizations extends into the realm of covert manipulation, subterfuge, and dirty tricks.

Of which there are many, as the Culture isn’t the only technologically sophisticated player on the scene.

But I wonder…would a post-scarcity society necessarily be Utopian?

Banks makes a case, and I think a good one, for the notion that a society’s moral values depend to a great extent on its wealth and the difficulty, or lack thereof, of its existence. Certainly, there are parallels in human history. I have heard it argued, for example, that societies from harsh desert climates produce harsh moral codes, which is why we see commandments in Leviticus detailing at great length and with an almost maniacal glee whom to stone, when to stone them, and where to splash their blood after you’ve stoned them. As societies become more civilized and more wealthy, as every day becomes less of a struggle to survive, those moral values soften. Today, even the most die-hard of evangelical “execute all the gays” Biblical literalists rarely speaks out in favor of stoning women who are not virgins on their wedding night, or executing people for picking up a bundle of sticks on the Sabbath, or dealing with the crime of rape by putting to death both the rapist and the victim.

I’ve even seen it argued that as civilizations become more prosperous, their moral values must become less harsh. In a small nomadic desert tribe, someone who isn’t a team player threatens the lives of the entire tribe. In a large, complex, pluralistic society, someone who is too xenophobic, too zealous in his desire to kill anyone not like himself, threatens the peace, prosperity, and economic competitiveness of the society. The United States might be something of an aberration in this regard, as we are both the wealthiest and also the most totalitarian of the Western countries, but in the overall scope of human history we’re still remarkably progressive. (We are becoming less so, turning more xenophobic and rabidly religious as our economic and military power wanes; I’m not sure that the one is directly the cause of the other, but those two things definitely seem to be related.)

In the Culture novels, Banks imagines this trend as a straight line going onward; as societies become post-scarcity, they tend to become tolerant, peaceful, and Utopian to an extreme that we would find almost incomprehensible, Special Circumstances aside. There are tiny microsocieties within the Culture that are harsh and murderously intolerant, such as the Eaters in the novel Consider Phlebas, but they are also not post-scarcity; the Eaters have created a tiny society in which they have very little and every day is a struggle for survival.


We don’t have any models of post-scarcity societies to look at, so it’s hard to do anything beyond conjecture. But we do have examples of societies that had little in the way of competition, that had rich resources and no aggressive neighbors to contend with, and had very high standards of living for the time in which they existed that included lots of leisure time and few immediate threats to their survival.

One such society might be the Aztec empire, which spread through the central parts of modern-day Mexico during the 14th and 15th centuries. The Aztecs were technologically sophisticated and built a sprawling empire based on a combination of trade, military might, and tribute.

Because they required conquered people to pay vast sums of tribute, the Aztecs themselves were wealthy and comfortable. Though they were not industrialized, they lacked for little. Even commoners had what was for the time a high standard of living.

And yet, they were about the furthest thing from Utopian it’s possible to imagine.

The religious traditions of the Aztecs were bloodthirsty in the extreme. So voracious was their appetite for human sacrifices that they would sometimes conquer neighbors just to capture a steady stream of sacrificial victims. Commoners could make money by selling their daughters for sacrifice. Aztec records document tens of thousands of sacrifices just for the dedication of a single temple.

So they wanted for little, had no external threats, had a safe and secure civilization with a stable, thriving economy…and they turned monstrous, with a contempt for human life and a complete disregard for human value that would have made Pol Pot blush. Clearly, complex, secure, stable societies don’t always move toward moral systems that value human life, tolerate diversity, and promote individual dignity and autonomy. In fact, the Aztecs, as they became stronger, more secure, and more stable, seemed to become more bloodthirsty, not less. So why is that? What does that say about hypothetical societies that really are post-scarcity?

One possibility is that where there is no conflict, people feel a need to create it. The Aztecs fought ritual wars, called “flower wars,” with some of their neighbors–wars not over resources or land, but whose purpose was to supply humans for sacrifice.

Now, flower wars might have had a prosaic function not directly connected with religious human sacrifice, of course. Many societies use warfare as a means of disposing of populations of surplus men, who can otherwise lead to social and political unrest. In a civilization that has virtually unlimited space, that’s not a problem; in societies which are geographically bounded, it is. (Even for modern, industrialized nations.)

Still, religion unquestionably played a part. The Aztecs were bloodthirsty at least to some degree because they practiced a bloodthirsty religion, and vice versa. This, I think, indicates that a society’s moral values don’t spring entirely from what is most conducive to that society’s survival. While the things that a society must do in order to survive, and the factors that are most valuable to a society’s functioning at whatever level it finds itself, will affect that society’s religious beliefs (and those beliefs will change to some extent as the needs of the society change), there would seem to be at least some corner of a society’s moral structures that are entirely irrational and completely divorced from what would best serve that society. The Aztecs may be an extreme example of this.

So what does that mean to a post-scarcity society?

It means that a post-scarcity society, even though it has no need of war or conflict, may still have both war and conflict, despite the fact that they serve no rational role. There is no guarantee that a post-scarcity society necessarily must be a rationalist society; while reaching the point of post-scarcity does require rationality, at least in the scientific and technological arts, there’s not necessarily any compelling reason to assume that a society that has reached that point must stay rational.

And a post-scarcity society that enshrines irrational beliefs, and has contempt for the value of human life, would be a very scary thing indeed. Imagine a society of limitless wealth and technological prowess that has a morality based on a literalistic interpretation of Leviticus, for instance, in which women really are stoned to death if they aren’t virgins on their wedding night. There wouldn’t necessarily be any compelling reason for a post-scarcity society not to adopt such beliefs; after all, human beings are a renewable resource too, so it would cost the society little to treat its members with indifference.

As much as I love the Culture (and the idea of post-scarcity society in general), I don’t think it’s a given that they would be Utopian.

Perhaps as we continue to advance technologically, we will continue to domesticate ourselves, so that the idea of being pointlessly cruel and warlike would seem quite horrifying to our descendants who reach that point. But if I were asked to make a bet on it, I’m not entirely sure which way I’d bet.

Women’s rights and GLBT rights are human rights

Note: I’ve started posting most of my writings about sex, culture, and society over to the Promiscuity Keepers Web site. The most recent post is an essay about why I, as a cisgendered straight man, care about the political assault on women’s rights and GLBT rights. Here’s a teaser:

Before I get started, though, let me say this: I am a white, cisgendered heterosexual man. That puts me in a uniquely privileged position; since I will never be pregnant, the assault on women’s right to choose doesn’t affect me directly. Since I am straight, the assault on the rights of gays and lesbians doesn’t affect me directly. Since I am a man, I am almost never the target of slut-shaming. I am, in other words, not the target of the campaign against women and gays that’s playing out on the airwaves and in the ballot boxes all over the United States right now.

But in a way, that’s kind of the point, because even though I am not the target of the attacks on women and gays, they still very much affect me. The thing is, these are not assaults on women’s rights or gay and lesbian rights; they are assaults on human rights. I am not gay and I am not a woman, but I am a human being. It would be a mistake for me to think that these things don’t affect me directly.

Let’s look at contraception. The debate over whether or not women should have easy access to contraception has turned into one of the defining issues in the current political discussion. Last October, presidential candidate Rick Santorum said “One of the things I will talk about, that no president has talked about before, is I think the dangers of contraception in this country. Many of the Christian faith have said, well, that’s OK; contraception is OK. It’s not OK. It’s a license to do things in a sexual realm that is counter to how things are supposed to be.” …

Want to see more? Read the whole post here!