Some thoughts on post-scarcity societies

One of my favorite writers at the moment is Iain M. Banks. Under that name, he writes science fiction set in a post-scarcity society called the Culture, where he deals with political intrigue and moral issues and technology and society on a scale that almost nobody else has ever tried. (In fact, his novel Use of Weapons is my all-time favorite book, and I’ve written about it at great length here.) Under the name Iain Banks, he writes grim and often depressing novels not related to science fiction, and wins lots of awards.

The Culture novels are interesting to me because they are imagination writ large. Conventional science fiction, whether it’s the cyberpunk dystopia of William Gibson or the bland, banal sterility of (God help us) Star Trek, imagines a world that’s quite recognizable to us….or at least to those of us who are white 20th-century Westerners. (It’s always bugged me that the alien races in Star Trek are not really very alien at all; they are more like conventional middle-class white Americans than even, say, Japanese society is, and way less alien than the Serra do Sol tribe of the Amazon basin.) They imagine a future that’s pretty much the same as the present, only more so; “Bones” McCoy, a physician, talks about how death at the ripe old age of 80 is part of Nature’s plan, as he rides around in a spaceship made by welding plates of steel together.


Image from Wikimedia Commons by Hill – Giuseppe Gerbino

In the Culture, by way of contrast, everything is made by atomic-level nanotech assembly processes. Macroengineering exists on a huge scale, so huge that the vast majority of the Culture’s citizens live on orbitals–artificially constructed habitats encircling a star. (One could live on a planet, of course, in much the way that a modern person could live in a cave if she wanted to; but why?) The largest spacecraft, General Systems Vehicles, have populations that range from the tens of millions to six billion or more. Virtually limitless sources of energy (something I’m planning to blog about later) and virtually unlimited technical ability to make just about anything from raw atoms mean that there is no such thing as scarcity; whatever any person needs, that person can have, immediately and for free. And the definition of “person” goes much further, too; whereas in the Star Trek universe, people are still struggling with the idea that a sentient android might be a person, in the Culture, personhood theory (something else about which I plan to write) is the bedrock upon which all other moral and ethical systems are built. Many of the Culture’s citizens are drones or Minds–non-biological computers, of a sort, that range from about as smart as a human to millions of times smarter. Calling them “computers” really is an injustice; it’s about on par with calling a modern supercomputer a string of counting beads. Spacecraft and orbitals are controlled by vast Minds far in advance of unaugmented human intellect.

I had a dream, a while ago, about the Enterprise from Star Trek encountering a General Systems Vehicle, and the hilarity that ensued when they spoke to each other: “Why, hello, Captain Kirk of the Enterprise! I am the GSV Total Internal Reflection of the Culture. You came here in that? How…remarkably courageous of you!”

And speaking of humans…

The biological people in the Culture are the products of advanced technology just as much as the Minds are. They have been altered in many ways; their immune systems are far more resilient, they have much greater conscious control over their bodies; they have almost unlimited life expectancies; they are almost entirely free of disease and aging. Against this backdrop, the stories of the Culture take place.

Banks has written a quick overview of the Culture, and its technological and moral roots, here. A lot of the Culture novels are, in a sense, morality plays; Banks uses the idea of a post-scarcity society to examine everything from bioethics to social structures to moral values.


In the Culture novels, much of the society is depicted as pretty Utopian. Why wouldn’t it be? There’s no scarcity, no starvation, no lack of resources or space. Because of that, there’s little need for conflict; there’s neither land nor resources to fight over. There’s very little need for struggle of any kind; anyone who wants nothing but idle luxury can have it.

For that reason, most of the Culture novels concern themselves with Contact, that part of the Culture which is involved with alien, non-Culture civilizations; and especially with Special Circumstances, that part of Contact whose dealings with other civilizations extends into the realm of covert manipulation, subterfuge, and dirty tricks.

Of which there are many, as the Culture isn’t the only technologically sophisticated player on the scene.

But I wonder…would a post-scarcity society necessarily be Utopian?

Banks makes a case, and I think a good one, for the notion that a society’s moral values depend to a great extent on its wealth and the difficulty, or lack thereof, of its existence. Certainly, there are parallels in human history. I have heard it argued, for example, that societies from harsh desert climates produce harsh moral codes, which is why we see commandments in Leviticus detailing at great length and with an almost maniacal glee whom to stone, when to stone them, and where to splash their blood after you’ve stoned them. As societies become more civil and more wealthy, as every day becomes less of a struggle to survive, those moral values soften. Today, even the most die-hard of evangelical “execute all the gays” Biblical literalists rarely speaks out in favor of stoning women who are not virgins on their wedding night, or executing people for picking up a bundle of sticks on the Sabbath, or dealing with the crime of rape by putting to death both the rapist and the victim.

I’ve even seen it argued that as civilizations become more prosperous, their moral values must become less harsh. In a small nomadic desert tribe, someone who isn’t a team player threatens the lives of the entire tribe. In a large, complex, pluralistic society, someone who is too xenophobic, too zealous in his desire to kill anyone not like himself, threatens the peace, prosperity, and economic competitiveness of the society. The United States might be something of an aberration in this regard, as we are both the wealthiest and also the most totalitarian of the Western countries, but in the overall scope of human history we’re still remarkably progressive. (We are becoming less so, turning more xenophobic and rabidly religious as our economic and military power wanes; I’m not sure that the one is directly the cause of the other, but those two things definitely seem to be related.)

In the Culture novels, Banks imagines this trend as a straight line going onward; as societies become post-scarcity, they tend to become tolerant, peaceful, and Utopian to an extreme that we would find almost incomprehensible, Special Circumstances aside. There are tiny microsocieties within the Culture that are harsh and murderously intolerant, such as the Eaters in the novel Consider Phlebas, but they are also not post-scarcity; the Eaters have created a tiny society in which they have very little and every day is a struggle for survival.


We don’t have any models of post-scarcity societies to look at, so it’s hard to do anything beyond conjecture. But we do have examples of societies that had little in the way of competition, that had rich resources and no aggressive neighbors to contend with, and that enjoyed very high standards of living for their time, including lots of leisure and few immediate threats to their survival.

One such society might be the Aztec empire, which spread through the central parts of modern-day Mexico during the 14th and 15th centuries. The Aztecs were technologically sophisticated and built a sprawling empire based on a combination of trade, military might, and tribute.

Because they required conquered people to pay vast sums of tribute, the Aztecs themselves were wealthy and comfortable. Though they were not industrialized, they lacked for little. Even commoners had what was for the time a high standard of living.

And yet, they were about the furthest thing from Utopian it’s possible to imagine.

The religious traditions of the Aztecs were bloodthirsty in the extreme. So voracious was their appetite for human sacrifices that they would sometimes conquer neighbors just to capture a steady stream of sacrificial victims. Commoners could make money by selling their daughters for sacrifice. Aztec records document tens of thousands of sacrifices just for the dedication of a single temple.

So they wanted for little, had no external threats, had a safe and secure civilization with a stable, thriving economy…and they turned monstrous, with a contempt for human life and a complete disregard for human value that would have made Pol Pot blush. Clearly, complex, secure, stable societies don’t always move toward moral systems that value human life, tolerate diversity, and promote individual dignity and autonomy. In fact, the Aztecs, as they became stronger, more secure, and more stable, seemed to become more bloodthirsty, not less. So why is that? What does that say about hypothetical societies that really are post-scarcity?

One possibility is that where there is no conflict, people feel a need to create it. The Aztecs fought ritual wars, called “flower wars,” with some of their neighbors–wars not over resources or land, but whose purpose was to supply humans for sacrifice.

Now, flower wars might have had a prosaic function not directly connected with religious human sacrifice, of course. Many societies use warfare as a means of disposing of populations of surplus men, who can otherwise lead to social and political unrest. In a civilization that has virtually unlimited space, that’s not a problem; in societies which are geographically bounded, it is. (Even for modern, industrialized nations.)

Still, religion unquestionably played a part. The Aztecs were bloodthirsty at least to some degree because they practiced a bloodthirsty religion, and vice versa. This, I think, indicates that a society’s moral values don’t spring entirely from what is most conducive to that society’s survival. While the things that a society must do in order to survive, and the factors that are most valuable to a society’s functioning at whatever level it finds itself, will affect that society’s religious beliefs (and those beliefs will change to some extent as the needs of the society change), there would seem to be at least some corner of a society’s moral structures that are entirely irrational and completely divorced from what would best serve that society. The Aztecs may be an extreme example of this.

So what does that mean to a post-scarcity society?

It means that a post-scarcity society, even though it has no need of war or conflict, may still have both, despite the fact that they serve no rational role. There is no guarantee that a post-scarcity society must be a rationalist society; while reaching the point of post-scarcity does require rationality, at least in the scientific and technological arts, there’s no compelling reason to assume that a society that has reached that point must stay rational.

And a post-scarcity society that enshrines irrational beliefs, and has contempt for the value of human life, would be a very scary thing indeed. Imagine a society of limitless wealth and technological prowess that has a morality based on a literalistic interpretation of Leviticus, for instance, in which women really are stoned to death if they aren’t virgins on their wedding night. There wouldn’t necessarily be any compelling reason for a post-scarcity society not to adopt such beliefs; after all, human beings are a renewable resource too, so it would cost the society little to treat its members with indifference.

As much as I love the Culture (and the idea of post-scarcity society in general), I don’t think it’s a given that they would be Utopian.

Perhaps as we continue to advance technologically, we will continue to domesticate ourselves, so that the idea of being pointlessly cruel and warlike would seem quite horrifying to our descendants who reach that point. But if I were asked to make a bet on it, I’m not entirely sure which way I’d bet.

Science: Not perfect, but just a bit better than most other systems

On another forum I read, someone made the claim that in science, politics and general human fallibility get in the way of learning the truth just as they do in all other areas of philosophical endeavor, and ended with “Science is little more or less immune to this effect.”

Which is, when it comes right down to it, totally wrong.

The entire point of using the scientific method as a means to understand the physical world is that science is, at least slightly, more immune than most other human endeavors. There are three reasons for science’s resilience when compared to other human institutions: skepticism, replicability, and peer review.

Skepticism means deliberately mistrusting your data, even if it says something you really really really really want it to say. Science works very hard to get rid of things like confirmation bias. It’s not always perfect, but at the end of the day it’s pretty damn good.

Replicability says that if something is true, it’s true for everyone, regardless of belief or political persuasion. If I measure the gravitational constant, and some guy in Iran measures the gravitational constant, if our measurements are correct they will be the same. No matter what philosophical, political, or religious differences we have.

Peer review means nothing is taken on faith. There are no holy fathers in science, no infallible popes. No matter how renowned, popular, or revered a scientist is, if he’s wrong, he’s wrong. Einstein got some things wrong. So did Newton. Everyone’s work is checked. Nobody’s work is taken at face value. Everyone’s data is analyzed. Everyone’s results are scrutinized. From time to time, a scientist might try to bully his way into acceptance, sure; scientists are, after all, only human. But peer review has a way, eventually, of correcting their errors.

No human endeavor is perfect, but those built-in checks do mean that science tends to be self-correcting to a degree that most other human endeavors are not.

It is this fundamental attribute of the scientific method–its self-correcting process–that is the single most valuable thing about it. The scientific method does not guarantee happiness or justice or peace or validation. It does not guarantee that the results it offers will be what we expect them to be, or even that they will be comprehensible to us; the more we learn about the laws of nature on a very small and a very large scale, the stranger they seem to our intuition. It offers only one thing: the ability to model the physical world in a way that is consistent with observable reality.

But that one thing it does, it does very, very well indeed.

When we are young

When we are young, we imagine dragons and elves, magic and wizards, heroes swooping down on flying carpets to save the day. As we grow, we long to see these things. We long to catch a glimpse of a dragon soaring over the mountains at sunset, to see with our own eyes the magic of the elves.

We are told that there is this thing called “science,” and science takes away magic. Science says there are no wizards, no elves, no magic carpet rides, no dragons spreading their wings in the last rays of the sun. And it hurts.

For many, the impulse is to reject this thing called “science,” this destroyer of dreams, so that we can live, if even only a little bit, in the world of magic and make-believe.

But for those who do not do this, for those who want to see the world for what it is, science offers us more than our imaginations. Instead of dragons and elves, instead of wizards and magic, we are offered a universe that is ancient and huge and strange beyond our dreams. We are offered a place where galaxies gigantic beyond our comprehension collide in ferocious cataclysms of creation and destruction, where strange objects that can never be seen tear holes through the fabric of space and time, where tiny things flit around and appear in two places at once. We are offered magnificent weirdness far stranger than the paltry ordinariness of wizards and dragons–for what are wizards but men with a litany of parlor tricks, and what are dragons but flying dinosaurs with matches?

Some who reject science still see, however vaguely, the faint glimmers of the wonder that it offers, and so they seek to appropriate its fancy words to fuel their imaginings of dragons and elves. “Quantum!” they cry. “Quantum thus-and-such, which means magic is real! We make the world just by looking at it; we are rightfully the kings of creation!”

And when told that their crude and fuzzy grasp of this hateful thing called “science,” this shatterer of dreams that comes in the light of day to steal their dragons away, says no such things, but actually something else, they react with derision, and scorn, and contempt. “Science,” they say, “is just opinion. It is religion, full of popes and magistrates who declare reality to be what they want, and not what I want.”

For them, I feel sad. In their desire to wrap themselves up in the imaginations of youth, they turn their backs on things far more fantastic than they can dream.

I love science. It does not steal magic away from us; it shows us magic far more awesome than we could ever otherwise know.

Science Literacy: Of Pickles and Probability

STUDY PROVES THAT PLACING A PICKLE ON YOUR NOSE GRANTS PSYCHIC POWERS

For immediate release: Scientists at the Min Planck Institute announced today that placing a pickle on your nose can improve telekinetic ability.

According to the researchers, they performed a study in which a volunteer was asked to place a pickle on her nose and then flip a coin to see whether or not the pickle would help her flip heads. The volunteer flipped the coin, which came up heads.

“This is a crowning achievement for our research,” the study’s authors said. “Our results show that having a pickle on your nose allows you to determine the outcome of a coin-toss.”

Let’s say you’re browsing the Internet one day, and you come across this report. Now, you’d probably think that there was something hinky about this experiment, right? We know intuitively that the odds of a coin toss coming up heads are about 50/50, so if someone puts a pickle on her nose and flips a coin, that doesn’t actually prove a damn thing. But we might not know exactly how that applies to studies that don’t involve flipping coins.


So let’s talk about our friend p. This is p.

p represents how likely it is that a study’s results are nothing but luck. Formally, it’s the probability that results like the ones observed could occur even if the null hypothesis is true. In English, that basically means it represents how likely it is to get these results even if whatever the study is trying to show doesn’t actually exist at all, and so the study’s results don’t mean a damn thing.

Every experiment (or at least every experiment seeking to show a relationship between things) has a p value. In the nose-pickle experiment, the p value is 0.5, which means there is a 50% chance that the subject would flip heads even if there’s no connection between the pickle on her nose and the results of the experiment.

There’s a p value associated with any experiment. For example, if someone wanted to show that watching Richard Simmons on television caused birth defects, he might take two groups of pregnant ring-tailed lemurs and put them in front of two different TV sets, one of them showing Richard Simmons reruns and one of them showing reruns of Law & Order, to see if any of the lemurs had pups that were missing legs or had eyes in unlikely places or something.

But here’s the thing. There’s always a chance that a lemur pup will be born with a birth defect. It happens randomly.

So if one of the lemurs watching Richard Simmons had a pup with two tails, and the other group of lemurs had normal pups, that wouldn’t necessarily mean that watching Mr. Simmons caused birth defects. The p value of this experiment is related to the probability that one out of however many lemurs you have will randomly have a pup with a birth defect. As the number of lemurs gets bigger, the probability of one of them having a weird pup gets bigger. The experiment needs to account for that, and the researchers who interpret the results need to factor that into the analysis.
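To put a rough number on that intuition, here’s a minimal Python sketch. The 1-in-500 defect rate is made up purely for illustration (as is the function name); the point is only how fast the chance of at least one random defect grows as the number of pups goes up.

```python
# Sketch: why more lemurs means more random "positives". The 1-in-500
# base rate below is an invented number, used only for illustration.
# The chance that at least one of N pups has a random defect is
# 1 - (1 - rate)^N, which climbs quickly as N grows.

def p_at_least_one_defect(n_pups: int, base_rate: float = 1 / 500) -> float:
    return 1 - (1 - base_rate) ** n_pups

for n in (1, 10, 100, 500):
    print(f"{n:>3} pups: {p_at_least_one_defect(n):6.1%}")
```

With 500 pups, the chance of at least one random defect is already better than three in five, which is why an experiment like this has to account for the background rate.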


If you want to be able to evaluate whether or not some study that supposedly shows something or other is rubbish, you need to think about p. Most of the time, p is expressed as a “less than or equal to” thing, as in “This study’s p value is <= 0.005”. That means “We don’t know exactly what the p value is, but we know it can’t be greater than one half of one percent.”

A p value of 0.005 is pretty good; it means there’s only a 0.5% chance of getting results like these by pure luck. Obviously, the larger the p value, the more skeptical you should be of a study. A p value of 0.5, like with our pickle experiment, shows that the experiment is pretty much worthless.

There are a lot of ways to make an experiment’s p value smaller. With the pickle experiment, we could simply do more than one trial. As the number of coin tosses goes up, the odds of a particular result go down. If our subject flips a coin twice, the odds of getting heads twice in a row are 1 in 4, which gives us a p value of 0.25–still high enough that any reasonable person would call rubbish on a positive trial. More coin tosses give successively smaller p values; the p value of our simple experiment is given (roughly) by 1/2^n, where n is the number of times we flip the coin.
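The way the pickle experiment’s p value shrinks with each extra toss is easy to check directly. A quick sketch (the function name is mine, not from any real study):

```python
# Sketch: the nose-pickle experiment's p value as we add coin tosses.
# With no pickle effect at all, the chance of flipping all heads by
# luck alone halves with every additional toss.

def pickle_p_value(n_tosses: int) -> float:
    """Probability of n heads in a row from a fair coin."""
    return 0.5 ** n_tosses

for n in (1, 2, 5, 10):
    print(f"{n:>2} tosses: p = {pickle_p_value(n)}")
```

By ten tosses in a row, p is already below one in a thousand–which is why repeated trials, not fancier pickles, are what make an experiment persuasive.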


There’s more than just the p value to consider when evaluating a scientific study, of course. The study still needs to be properly constructed and controlled. Proper control groups are important for eliminating confirmation bias, which is a very powerful tendency for human beings to see what they expect to see and to remember evidence that supports their preconceptions while forgetting evidence which does not. And, naturally, the methodology has to be carefully implemented too. A lot goes into making a good experiment.

And even if the experiment is good, there’s more to deciding whether or not its conclusions are valid than looking at its p value. Most experiments are considered pretty good if they have a p value of .005, which means there’s a 1 in 200 chance that the results could be attributed to pure random chance.

That sounds like it’s a fairly good certainty, but consider this: That’s about the same as the odds of flipping heads on a coin 8 times in a row.

Now, if you were to flip a coin eight times, you’d probably be surprised if it landed on heads every single time.

But, if you were to flip a coin eight thousand times, it would be surprising if you didn’t get eight heads in a row somewhere in there.
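That claim is easy to check with a quick Monte Carlo simulation. This is just a sketch, seeded so the run is repeatable:

```python
import random

# Sketch: how often does a run of 8 heads appear somewhere in 8000 fair
# coin flips? We repeat the 8000-flip experiment many times and count
# how many repetitions contain at least one such run.

def has_run_of_heads(n_flips: int, run_length: int, rng: random.Random) -> bool:
    streak = 0
    for _ in range(n_flips):
        if rng.random() < 0.5:  # heads
            streak += 1
            if streak >= run_length:
                return True
        else:
            streak = 0
    return False

rng = random.Random(42)  # fixed seed for a repeatable result
trials = 200
hits = sum(has_run_of_heads(8000, 8, rng) for _ in range(trials))
print(f"{hits} of {trials} trials contained a run of 8 heads")
```

Essentially every trial contains such a run; a streak that would look miraculous in eight flips is nearly guaranteed in eight thousand.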


One of the hallmarks of science is replicability. If something is true, it should be true no matter how many people run the experiment. Whenever an experiment is done, it’s never taken as gospel until other people also do it. (Well, to be fair, it’s never taken as gospel period; any scientific observation is only as good as the next data.)

So that means that experiments get repeated a lot. And when you do something a lot, sometimes, statistical anomalies come in. If you flip a coin enough times, you’re going to get eight heads in a row, sooner or later. If you do an experiment enough times, you’re going to get weird results, sooner or later.

So a low p value doesn’t necessarily mean that the results of an experiment are valid. In order to figure out if they’re valid or not, you need to replicate the experiment, and you need to look at ALL the results of ALL the trials. And if you see something weird, you need to be able to answer the question “Is this weird because something weird is actually going on, or is this weird because if you toss a coin enough times you’ll sometimes see weird runs?”

That’s where something called Bayesian analysis comes in handy.

I’m not going to get too much into it, because Bayesian analysis could easily make a post (or a book) of its own. In this context, the purpose of Bayesian analysis is to ask the question “Given the probability of something, and given how many times I’ve seen it, can what I’m seeing be put down to random chance without actually meaning squat?”

For example, if you flip a coin 50 times and you get a mix of 30 heads and 20 tails, Bayesian analysis can answer the question “Is this just a random statistical fluke, or is this coin weighted?”
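Here’s a minimal sketch of that kind of comparison, under a deliberately simple model: either the coin is fair, or it is weighted with some unknown bias, which I model with a uniform prior over all possible biases. (The uniform prior is my illustrative choice, not the only way to do this.)

```python
from math import comb

# Sketch: a Bayesian comparison for 30 heads in 50 flips.
# Hypothesis 1: the coin is fair (bias exactly 0.5).
# Hypothesis 2: the coin is weighted, with an unknown bias modeled by
# a uniform prior on [0, 1] (an illustrative modeling choice).

heads, flips = 30, 50

# Likelihood of exactly this outcome if the coin is fair.
p_data_fair = comb(flips, heads) * 0.5 ** flips

# Marginal likelihood under the uniform prior: integrating the binomial
# likelihood over all possible biases works out to exactly 1 / (flips + 1).
p_data_weighted = 1 / (flips + 1)

bayes_factor = p_data_fair / p_data_weighted
print(f"Bayes factor (fair vs. weighted): {bayes_factor:.2f}")
```

Perhaps surprisingly, the Bayes factor comes out a bit above 2 in favor of the fair coin: 30 heads in 50 flips actually fits “fair” somewhat better than “some unknown bias,” even though 60% heads looks lopsided at a glance. That’s exactly the kind of question this analysis is built to answer.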

When you evaluate a scientific study or a clinical trial, you can’t just take a single experiment in isolation, look at its p value, and decide that the results must be true. You also have to look at other similar experiments, examine their results, and see whether or not what you’re looking at is just a random artifact.


I ran into a real-world example of how this can fuck you up a bit ago, where someone on a forum I belong to posted a link to an experiment that purports to show that feeding genetically modified corn to mice will cause health problems in their offspring. The results were (and still are) all over the Internet; fear of genetically modified food is quite rampant among some folks, especially on the political left.

The experiment had a p value of <= .005, meaning that if the null hypothesis is true (that is, there is no link between genetically modified corn and the health of mice), we could expect to see this result about one time in 200.

So it sounds like the result is pretty trustworthy…until you consider that literally thousands of similar experiments have been done, and they have shown no connection between genetically modified corn and ill health in test mice.

If an experiment’s p value is .005, and you do the experiment a thousand times, it’s not unexpected that you’d get 5 or 6 “positive” results even if the null hypothesis is true. This is part of the reason that replicability is important to science–no matter how low your p value may be, the results of a single experiment can never be conclusive.
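The arithmetic behind that “5 or 6 positives” figure is a one-liner, sketched here:

```python
# Sketch: if each experiment has a 0.005 false-positive rate and the
# null hypothesis is true, what should a thousand replications produce?

experiments, alpha = 1000, 0.005

expected_false_positives = experiments * alpha  # on average, 5 of them

# Chance that at least one of the thousand runs comes up "positive":
p_at_least_one = 1 - (1 - alpha) ** experiments

print(f"expected false positives: {expected_false_positives}")
print(f"chance of at least one:   {p_at_least_one:.3f}")
```

With a thousand replications, at least one false positive is all but certain, which is why a single eye-catching result buried in a sea of null results tells you very little.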

Liberals and Conservatives: Living Together in Fear

In April of this year, a report appeared in the scientific journal Current Biology which claimed that there are significant quantifiable neurological differences in the brains of liberals and conservatives.

Specifically, the report shows that political conservatives have larger amygdalas, which mediate emotional reactions such as fear and aggression.

This report got picked up all over the mainstream press, as with this article in The Atlantic headlined Are Liberals and Conservatives Hard-Wired to Disagree? and another article over on Raw Story titled Brain structure differs in liberals, conservatives: study, which says “Liberals have more gray matter in a part of the brain associated with understanding complexity, while the conservative brain is bigger in the section related to processing fear, said the study on Thursday in Current Biology.”

From a purely sociological standpoint, this may have some element of truth, at least in the sense that repeated sociological studies have shown conservatives to be motivated by fears of collapsing social order, loss of social hierarchy, and social disorder.

But qualifying conservatives as fearful and liberals as optimistic is really kind of silly, seems to me. Liberals, in my experience, are just as likely to be driven by irrational fears, and to make decisions based on poor evaluation of those fears, as conservatives are.

Take, for example, the nearly universal fear among those on the political left of nuclear power. Despite the fact that nuclear power is by far the safest form of large-scale electrical generation yet invented (coal power kills more human beings every year, primarily from air pollution but also from coal mining accidents, than nuclear power has killed in the entire history of its use combined–including Chernobyl), liberals are nearly universal in their stark raving terror of all things “nuclear.”

Liberals like to mock conservatives as ignorant, uninformed, and anti-intellectual, but the reality is that across the United States, anti-intellectualism is extraordinarily popular; its cause is championed by people of all political stripes. It manifests differently, sure; conservatives tend to oppose pure science, particularly biological and geological science (but even physics is not immune; there are some highly vocal nutjobs on the right who claim that Einstein’s theory of relativity is a sinful attempt to undermine public morality by embracing moral relativism), though quixotically they tend to embrace technology.

Liberals, on the other hand, claim to champion science, at least when they can be arsed to learn enough to be able to separate it from pseudoscience; but they reject technology, in forms ranging from vaccination to food processing. Liberals are particularly frightened of life sciences; their terror of genetically modified food is second only to their terror of nuclear power as a common source of fear.

I’ve been chewing on this for a while, and as I often do, I’ve made a chart.

The things that will actually kill you tend, by and large, not to be the things you’re afraid of. Conservatives fear terrorism, which is stunningly unlikely to kill you; the number of Americans who lose their lives to terrorists every year is roughly on par with the number killed by sharks and bears, and is dwarfed by the number of people killed by falling off stepladders. On the other hand, as small as this number is, it’s still mountains bigger than the number of Americans killed by nuclear power every year, which tends to hover year after year at somewhere around zero.

How ironic, then, that billions are spent fighting these fears, and so much advocacy is done on both sides to stoke them, when it’s actually driving to the office, or not getting away from the TV to exercise, that will do you in.

Transhumanism, Technology, and the da Vinci Effect

[Note: There is a followup to this essay here]

Ray Kurzweil pisses me off.

His name came up last night at Science Pub, which is a regular event, hosted by a friend of mine, that brings in guest speakers on a wide range of different science and technology related topics to talk in front of an audience at a large pub. There’s beer and pizza and really smart scientists talking about things they’re really passionate about, and if you live in Portland, Oregon (or Eugene or Hillsboro; my friend is branching out), I can’t recommend them enough.

Before I can talk about why Ray Kurzweil pisses me off–or, more precisely, before I can talk about some of the reasons Ray Kurzweil pisses me off, as an exhaustive list would most surely strain my patience to write and your patience to read–it is first necessary to talk about what I call the “da Vinci effect.”


Leonardo da Vinci is, in my opinion, one of the greatest human beings who has ever lived. He embodies the best in our desire to learn; he was interested in painting and sculpture and anatomy and engineering and just about every other thing worth knowing about, and he took time off of creating some of the most incredible works of art the human species has yet created to invent the helicopter, the armored personnel carrier, the barrel spring, the Gatling gun, and the automated artillery fuze…pausing along the way to record innovations in geography, hydraulics, music, and a whole lot of other stuff.

However, most of his inventions, while sound in principle, were crippled by the fact that he could not conceive of any power source other than muscle power. The steam engine was still more than two and a half centuries away; the internal combustion engine, another half-century or so after that.

da Vinci had the ability to anticipate the broad outlines of some really amazing things, but he could not build them, because he lacked one essential element whose design and operation were way beyond him or the society he lived in, both in theory and in practice.

I tend to call this the “da Vinci effect”–the ability to see how something might be possible, but to be missing one key component that’s so far ahead of the technology of the day that it’s not possible even to hypothesize, except perhaps in broad, general terms, how it might work, and not possible even to anticipate with any kind of accuracy how long it might take before the thing becomes reachable.


Charles Babbage’s Difference Engine is another example of an idea whose realization was held back by the da Vinci effect.

Babbage reasoned–quite accurately–that it was possible to build a machine capable of mathematical computation. He also reasoned that it would be possible to construct such a machine in such a way that it could be fed a program–a sequence of logical steps, each representing some operation to carry out–and that on the conclusion of such a program, the machine would have solved a problem. This last bit differentiated his conception of a computational engine from other devices (such as the Antikythera mechanism) which were built to solve one particular problem and could not be programmed.

The technology of the time, specifically with respect to precision metal casting, meant his design for a mechanical computer was never realized in his lifetime. Today, we use devices every day that operate by principles he imagined, but they aren’t mechanical; in place of gears and levers, they use gates that control the flow of electrons–something he could never have envisioned given the understanding of his time.


One of the speakers at last night’s Science Pub was Dr. Larry Sherman, a neurobiologist and musician who runs a research lab here in Oregon that’s currently doing a lot of cutting-edge work in neurobiology. He’s one of my heroes1; I’ve seen him present several times now, and he’s a fantastic speaker.

Now, when I was in school studying neurobiology, things were very simple. You had two kinds of cells in your brain: neurons, which did all the heavy lifting involved in the process of cognition and behavior, and glial cells, which provided support for the neurons, nourished them, repaired damage, and cleaned up the debris from injury or dead cells.

There are a couple of broad classifications for glial cells: astrocytes and microglia. Astrocytes, shown in green in this picture, provide a physical scaffold to hold neurons (in blue) in place. They wrap the axons of neurons in protective sheaths and they absorb nutrients and oxygen from blood vessels, which they then pass on to the neurons. Microglia are cells that are kind of like little amoebas; they swim around in your brain locating dead or dying cells, pathogens, and other forms of debris, and eating them.

So that’s the background.


Ray Kurzweil is a self-styled “futurist,” transhumanist, and author. He’s also a Pollyanna with little real rubber-on-road understanding of the challenges that nanotechnology and biotechnology face. He talks a great deal about AI, human/machine interface, and uploading–the process of modeling a brain in a computer such that the computer is conscious and aware, with all the knowledge and personality of the person being modeled.

He gets a lot of it wrong, but it’s the last bit he gets really wrong. Not the general outlines, mind you, but certainly the timetable. He’s the guy who looks at da Vinci’s notebook and says “Wow, a flying machine? That’s awesome! Look how detailed these drawings are. I bet we could build one of these by next spring!”

Anyway, his name came up during the Q&A at Science Pub, and I kind of groaned. Not as much as I did when Dr. Sherman suggested that a person whose neurons had been replaced with mechanical analogues wouldn’t be a person any more, but I groaned nonetheless.

Afterward, I had a chance to talk to Dr. Sherman briefly. The conversation was short; only just long enough for him to completely blow my mind, make me believe that a lot of ideas about uploading are limited by the da Vinci effect, and to suggest that much brain modeling research currently going on is (in his words) “totally wrong”.


It turns out that most of what I was taught about neurobiology was utterly wrong. Our understanding of the brain has exploded in the last few decades. We’ve learned that people can and do grow new brain cells all the time, throughout their lives. And we’ve learned that the glial cells do a whole lot more than we thought they did.

Astrocytes, long believed to be nothing but scaffolding and cafeteria workers, are strongly implicated in learning and cognition, as it turns out. They not only support the neurons in your brain, but they guide the process of new neural connections, the process by which memory and learning work. They promote the growth of new neural pathways, and they also determine (at least to some degree) how and where those new pathways form.

In fact, human beings have more types of astrocytes than other vertebrates do. Apparently, according to my brief conversation with Dr. Sherman, researchers have taken human astrocytes and implanted them in developing mice, and discovered an apparent increase in cognitive functions of those mice even though the neurons themselves were no different.

And, more recently, it turns out that microglia–the garbage collectors and scavengers of the brain–can influence high-order behavior as well.

The last bit is really important, and it involves hox genes.


A quick overview of hox genes. These are genes which control the expression of other genes, and which are involved in determining how an organism’s body develops. You (and monkeys and mice and fruit flies and earthworms) have hox genes–pretty much the same hox genes, in fact–that represent an overall “body plan”. They do things like say “Ah, this bit will become a torso, so I will switch on the genes that correspond to forming arms and legs here, and switch off the genes responsible for making eyeballs or toes.” Or “This bit is the head, so I will switch on the eyeball-forming genes and the mouth-forming genes, and switch off the leg-forming genes.”

Mutations to hox genes generally cause gross physical abnormalities. In fruit flies, incorrect hox gene expression can cause the fly to sprout legs instead of antennae, or to grow wings from strange parts of its body. In humans, hox gene malfunctions can cause a number of really bizarre and usually fatal birth defects–growing tiny limbs out of eye sockets, that sort of thing.
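The switch-like logic described above can be sketched as a toy program. To be clear, this is a cartoon, not real biology–the segment names and gene groups are made up for illustration–but it captures the idea of master genes that turn other gene groups on or off, and of a mis-wired switch producing a leg where an antenna should be:

```python
# Toy sketch (NOT real biology): hox genes modeled as master switches
# that enable or disable downstream gene groups per body segment.
# All segment and gene-group names here are illustrative inventions.
HOX_PROGRAM = {
    "head":  {"on": ["eye_genes", "mouth_genes"], "off": ["leg_genes"]},
    "torso": {"on": ["arm_genes", "leg_genes"],   "off": ["eye_genes"]},
}

def develop(segment, program=HOX_PROGRAM):
    """Return which downstream gene groups end up expressed in a segment."""
    plan = program[segment]
    return {gene: (gene in plan["on"]) for gene in plan["on"] + plan["off"]}

# A "mutation" that mis-wires the head segment to express leg genes,
# loosely analogous to the antennae-to-legs phenotype in fruit flies.
mutant = dict(HOX_PROGRAM)
mutant["head"] = {"on": ["leg_genes"], "off": ["eye_genes", "mouth_genes"]}
```

The point of the sketch is just that a small change at the switch level (one entry in the table) cascades into a grossly different outcome downstream, which is exactly why hox mutations produce such dramatic abnormalities.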

And it appears that a hox gene mutation can result in obsessive-compulsive disorder.

And more bizarrely than that, this hox gene mutation affects the way microglia form.


Think about how bizarre that is for a minute. The genes responsible for regulating overall body plan can cause changes in microglia–little amoeba scavengers that roam around in the brain. And that change to those scavengers can result in gross high-level behavioral differences.

Not only are we not in Kansas any more, we’re not even on the same continent. This is absolutely not what anyone would expect, given our knowledge of the brain even twenty years ago.

Which brings us back ’round to da Vinci.


Right now, most attempts to model the brain look only at the neurons, and disregard the glial cells. Now, there’s value to this. The brain is really (really really really) complex, and just developing tools able to model billions of cells and hundreds or thousands of billions of interconnections is really, really hard. We’re laying the foundation, even with simple models, that lets us construct the computational and informatics tools for handling a problem of mind-boggling scope.

But there’s still a critical bit missing. Or critical bits, really. We’re missing the computational bits that would allow us to model a system of this size and scope, or even to be able to map out such a system for the purpose of modeling it. A lot of folks blithely assume Moore’s Law will take care of that for us, but I’m not so sure. Even assuming a computer of infinite power and capability, if you want to upload a person, you still have the task of being able to read the states and connection pathways of many billions of very small cells, and I’m not convinced we even know quite what those tools look like yet.
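To get a feel for why I’m skeptical that Moore’s Law alone solves this, here’s a back-of-envelope calculation. The figures are commonly cited rough estimates, not measurements, and this counts only a static wiring map of the neurons–no glial cells, no cell states, no dynamics:

```python
# Back-of-envelope: storage just to record static neuron-to-neuron wiring.
# All figures are rough, commonly cited estimates, not measured values.
NEURONS = 86e9             # ~86 billion neurons in a human brain
SYNAPSES_PER_NEURON = 7e3  # on the order of 10^3 to 10^4 synapses each
BYTES_PER_SYNAPSE = 8      # 4-byte target id + 4-byte weight, bare minimum

total_synapses = NEURONS * SYNAPSES_PER_NEURON
petabytes = total_synapses * BYTES_PER_SYNAPSE / 1e15
print(f"{total_synapses:.1e} synapses, ~{petabytes:.1f} PB for a static map")
```

Several petabytes just to write down who connects to whom, before modeling anything the cells actually do–and before adding the glial cells this essay argues we can’t ignore. Storage isn’t even the hard part; reading those connections out of living tissue is.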

But on top of that, when you consider that we’re missing a big part of the picture of how cognition happens–we’re looking at only one part of the system, and the mechanism by which glial cells promote, regulate, and influence high-level cognitive tasks is astonishingly poorly understood–it becomes clear (at least to me, anyway) that uploading is something that isn’t going to happen soon.

We can, like da Vinci, sketch out the principles by which it might work. There is nothing in the laws of physics that suggest it can’t be done, and in fact I believe that it absolutely can and will, eventually, be done.

But the more I look at the problem, the more it seems to me that there’s a key bit missing. And I don’t even think we’re in a position yet to figure out what that key bit looks like, much less how it can be built. It may well be that when we do model brains, the model isn’t going to look anything like what we think of as a conventional computer at all, much as when we built general-purpose programmable devices, they didn’t look like Babbage’s difference engine at all.


1 Or would be, if it weren’t for the fact that he rejects personhood theory, which is something I’m still a bit surprised by. If I ever have the opportunity to talk with him over dinner, I want to discuss personhood theory with him, oh yes.

Linky-Links: Sex, Polyamory, Tech, and Humor edition

It’s time for another massive collection of links, so I can close some of my browser windows and reclaim a whole bunch of RAM on this computer. Today’s list is heavy on sex, tech, and humor, making it different from any other linky-links post in exactly zero ways, I suppose.

Sex

From New Scientist magazine, we have the article Sex on the brain: Orgasms unlock altered consciousness. It discusses fMRI scans of a volunteer who masturbated to orgasm inside an fMRI scanner while the experimenters recorded her brain activity. If I had the budget, this is the sort of science I’d be doing.

The Sexacademic blog gives us a story titled Explaining Porn Watching With Science!, which talks about the neurochemical pathways active during porn watching, and along the way debunks some lurid, sensationalistic pop culture ideas about “sex addiction”.

On Sexonomics is an article Porn by the Numbers 5: On feminist porn. The myth that porn, or “mainstream” porn (whatever that is), never shows women in a positive light and is never aimed at a female audience is as enduring as the legend of Bigfoot. I was recently at a Science Pub, in fact, in which an otherwise sex-positive sociologist decried the portrayal of women in “mainstream” porn. The argument became neatly circular later when she said that “mainstream” porn is that which portrays women negatively. The fact that someone with a doctorate in sociology can think about something in such an intellectually sloppy way testifies, I think, to how emotional the subject of porn (and especially feminist porn) is.


Society and rape

Speaking of feminist issues, some time ago a prominent female blogger was approached by a stranger in an elevator at a convention. Said stranger asked her to go back to his room with him. She blogged about the incident and why it was inappropriate, and provoked a firestorm that many of you Gentle Readers are probably aware of. Her thesis is pretty simple: Lots of women are sexually assaulted; if you want a positive response from women, don’t approach them in ways that would make sexual assault easy.

A lot of men–including some men that I know personally and otherwise find to be basically reasonable people–flipped out about that, and started wailing nonsense like “Feminists think all men are raaaaaaapists!” Which is total bunk; what’s being said is that SOME men are rapists, but rapists don’t wear special T-shirts or have a secret handshake that identifies them, so if you’re being approached by some strange guy you have no way to know if he’s likely to assault you or not. That means being aware that a strange dude you meet might be willing to assault you. (The defensive, “you’re saying all men are rapists” response from a lot of guys is similar to the sort of response you see in US society when you try to talk about institutional racism; people who think “Well, I’m not a rapist” or “Well, I’m not a racist” become so reactionary when they hear what might sound like an accusation that they refuse to discuss rape or race in any sort of rational way.)

All that is a longwinded introduction to the next two links. The first, Women in Elevators: A Man To Man Talk For The Menz, talks about the reasons that women can be suspicious of being approached by strangers. Not every dog is aggressive, but nearly everyone feels some trepidation when approached by a strange dog, because there’s no easy way to tell dogs that bite from dogs that don’t. I’m sure somebody somewhere will be upset and insulted by a metaphor about dogs (“You’re saying all men are dogs!”), but if that’s the case, that dude probably can’t be educated.

And second, for the dudes who say “Well, women should just say so if they don’t want to be approached!” we have Another post about rape. This one talks about how women (and men, to be fair, though to a lesser extent) are strongly socialized not to say “no,” not to assert boundaries, and not to upset people. It is, I think, a toxic set of social values, but that’s a whole ‘nother blog post. The point is, simply asserting a boundary carries a social cost. (This is why I think the idea of affirmative consent, adding “only yes means yes” to the idea of “no means no,” is so important, as I’ve talked about before.)


Polyamory

For quite a while now, people have been bugging me to find a new home for my polyamory pages that until now have lived on my site at www.xeromag.com. I’ve finally built a new site for them, More Than Two. I’ve blogged the new link before, but if you haven’t taken a look recently, you should. There’s now an RSS feed of new articles, and some new content has been posted.

On the Polytical blog is this excellent essay, I’m Better ‘Cos I’m Poly. Anyone who is openly out about being poly has probably at some point or another been labeled as “smug” or “arrogant” about it, most often by someone who identifies as monogamous. This essay is an excellent deconstruction of the “smug poly” stereotype.


Geek Humor

First up, we have these very funny Sci-Fi Ikea Manuals. What would happen if light sabers were real? Or the Tardis was something you could get at Ikea? What would the assembly instructions look like? Apparently, in order to put together an Ikea light saber, you must first have your hand chopped off by Darth Vader.

Our travel down the surrealist path continues with Ride the Gummi Worm, Muad’Dib, a diorama of a scene from Dune done with Gummi Bears and a gigantic Gummi Worm.


Do-It-Yourself Science!

I have blogged in the past about using an Arduino microcontroller board to make sex toys. For folks who think that sounds like a good idea but aren’t sure how to use or program an Arduino, there is a comic book introduction to Arduino, which you can download as a PDF. If you don’t have a background in electronics or microcontrollers but you want to build your own Arduino projects, this is a great way to get started.

Speaking of Ikea, which I was a bit earlier, for those of you who are photography buffs comes this guide to building a cheap time lapse panning unit using only things you can get at Ikea.

And from the Department of Mad Science So Preposterous it Just Might Work comes the story of a high school student who rigged a camera and GPS transponder to a bunch of garbage bags, filled them with helium, and let them go. This is a really cool science project done on a tiny budget and with really fun results.


Science

Over at New Scientist is this awesome article, Sky survey maps distant universe in 3D. The universe isn’t shaped like you think it is, and now a group of researchers are working on building what is by far the highest-resolution map of the physical universe yet undertaken…in 3D!

The Department of Unclear on the Concept

It’s likely that most folks reading this are aware of the Occupy Wall Street movement. It’s kind of the flip side of the American Tea Party movement. The Tea Party is a bunch of mostly middle-class people who love and cherish the superrich and believe that the superrich, being such wonderful people and all, should be exempt from paying the same tax that the working class pays and should otherwise be given all sorts of concessions so that they can make more money. The Occupy Wall Street folks, on the other hand, embrace the heretical notion that taxes on the superrich should be increased so that the very wealthiest people are paying sixty percent of the taxes that the middle class pays, instead of fifty percent of the taxes that the middle class pays…even if it means that some of the world’s richest people might have to postpone purchasing that five-million-dollar yacht for a few weeks because of it.

I’m generally sympathetic to the Occupy Wall Street protesters, though there’s at least one of them who simply doesn’t appear to Get It…nor to have a functioning sense of irony. He argues that the mainstream media lies or distorts truth to protect the interests of the wealthy and powerful, which it arguably does…so his response is to, err, do the same thing. And when he gets called on it over on TimParkinson.net, hilarity ensues. Read the comments to get the full effect; there’s even a followup here.

A Taxonomy of Crackpot Ideas

Some time ago, when the anti-science, anti-evolution, religious literalist movie “Expelled” was making the rounds, it occurred to me that a strict 6-day, young-earth creationist idea of the world requires a particular confluence of perceptual filters in order to exist. There has to be an unquestioned acceptance of literalist religious dogma, a profound ignorance of some of the basic tenets of science, and a willingness to believe in a vast, orchestrated conspiracy on the part of all the world’s geologists, biologists, archaeologists, geneticists, and anthropologists in order for this notion to seem reasonable.

I’ve been chewing on that thought for a while, and looking at the perceptual filters that have to be in place to accept any number of implausible ideas, from moon hoaxers to lizard people conspiracy theories to anti-vaccinationism.

And, since making charts is something I do, I plotted some of these ideas in a Venn diagram that shows an overlapping set of prerequisites for a number of different flavors of nuttiness.

As usual, you can click on the image for an embiggened version.

How to Tell when Something Isn’t Science

The process of science–the systematic, evidence-based, rigorous, controlled exploration of the processes of the natural world–has produced an explosion of knowledge and understanding. Since the Italian Renaissance and the Abbasid era in the Islamic world, both of which saw enormous gains in scientific thinking and with them huge leaps in technology and understanding, science has been the beacon of light shining in the darkness of superstition and ignorance.

So it’s probably not too surprising that many folks who seek to embrace all sorts of non-scientific ideas try to claim that their ideas are science. Calling these ideas “science” gives them a stamp of validation. If an idea is scientific, that means it has greater legitimacy in many people’s minds.

And the world needs to cut that shit out. Not all ideas are science, yet everything from phrenology to metaphysics to “crystal energy” tries to clamber onto the scientific bandwagon.

Most recently, the cry of the pseudoscientist has become “Quantum mechanics says!” Folks who can’t actually define what quantum mechanics is are nevertheless eager to fill New Age bookstores with books that claim to “prove” that quantum mechanics validates their ideas.

So here’s a handy-dandy, more-than-pocket-sized guide that will help you tell what science actually is and is not. Ready? Here we go!

RULE 1: If it doesn’t make a precisely defined, testable, falsifiable claim, it is not science.

This is the first and most basic premise of this whole “science” business. If someone claims “Science shows us that” or “Quantum mechanics proves that” and the next thing out of their mouth isn’t a testable, falsifiable claim, then what they’re saying is probably bollocks.

Continue reading

Sex for Science! Chapter 3: It’s All About the Protocol

Sex for Science! Chapter 0
Sex for Science! Chapter 1
Sex for Science! Interlude
Sex for Science! Chapter 2
Sex for Science! Chapter 3
Sex for Science! Chapter 4

Our accommodations and my partner in Science’s socks properly admired, it was time for business. Err, science. The business of science. And, um, stuff.

The motel did, amongst the amenities we didn’t need (like the bullet hole), provide the amenities that we did–namely, a bed, a door, and, once the office staff had got ’round to realizing the room was occupied, electricity.

The door was problematic. It had a fist-sized hole in it, which one does not normally expect to see in doors; but it did not have a doorknob or latch, which one normally does.

Fortunately, this wasn’t the front door, but rather the door between the suite’s living room and bedroom. The bedroom did come equipped with a bed–two of them, in fact. And, unlike a certain bed in a certain room atop a curtain turret in a certain castle in the south of France, the beds in this room seemed reasonably solid and unlikely to collapse at the slightest jouncing.

Which was good, as there is a possibility that the sudden and unexpected collapse of a bed might alter a subject’s brainwave activity, resulting in erroneous data that might be difficult to interpret.

My mad scientist partner and I checked the structural integrity of the bed, to help ensure first and foremost the validity of the data we planned to collect and also, as a helpful side benefit, the safety of our experimental subjects. When one is doing mad science, safety is job…well, safety is something one considers.

She had brought a photographer with her, so while the photographer started to set up we talked experimental protocol. If you’re doing anything for Science, including sex, you can’t just sit down and get right to it; you need to establish a methodology that helps to control for confounding factors and that has a reasonable shot at providing a clear answer to a specific question.

This is the bit that a lot of people get wrong when they try to understand the world around them. Take, for example, the popular old saw “you have to hit rock bottom before you can change.” What does ‘rock bottom’ mean, anyway? Having things go bad is often a catalyst for change, sure…but if one person loses a job and changes his behavior, another person loses a relationship and changes a behavior, and a third person loses his house and family and changes a behavior, which one has hit ‘rock bottom,’ whatever that is? Until you start losing limbs, you always have further down you can go. The concept of ‘rock bottom’ is poorly defined; it endures just because the old saying sounds somehow wise.

But I digress.

Our objective for this particular experiment was to see whether or not the Neurosky chip could detect any pattern of brainwave activity that was typical for sexual arousal but different from other states not related to arousal. To that end, she worked out a protocol that involved taking a set of baseline readings from each person while reading silently, meditating, and reciting a memory. That done, we would then record for fifteen minutes while each victim subject’s partner sexually stimulated that person. At the end of each session, there were exit questions involving asking the test subject for a subjective assessment of level of arousal and level of nervousness (to help control for whether or not nervousness was what the EEG was recording). During all this, a note-taker would be timing the events and recording anything that could present an anomaly on the EEG, as well as observations of each subject’s behavior.
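The protocol described above–baselines, a timed stimulation period, exit questions, and a note-taker logging anomalies–can be sketched as a simple session record. This is purely my own hypothetical sketch of how one might structure the data; the field names and example entries are illustrative, not the actual study design:

```python
# Hypothetical sketch of a session record for the protocol described
# above. Field names, phases, and example entries are illustrative.
from dataclasses import dataclass, field
from typing import Optional

BASELINE_PHASES = ["reading", "meditating", "reciting_memory"]
STIMULATION_MINUTES = 15

@dataclass
class Session:
    subject_id: int
    events: list = field(default_factory=list)       # (seconds, note) pairs
    self_reported_arousal: Optional[int] = None      # exit question
    self_reported_nervousness: Optional[int] = None  # controls for anxiety vs. arousal

    def log(self, seconds, note):
        """Note-taker records timed events and possible EEG anomalies."""
        self.events.append((seconds, note))

s = Session(subject_id=3)
s.log(0, "baseline: reading")
s.log(196, "anomaly: bed creaked")  # anything that might show up in the EEG
```

The design point is the one made in the text: without the timestamped anomaly log and the nervousness question, you couldn’t tell an arousal signal from an artifact or from plain jitters.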


My partner in science brought one of her partners along–the mutual friend who’d introduced the two of us on Twitter and made the whole thing happen. He brought a netbook to record data and a really nifty necklace with a microphone and a bunch of LEDs that would glow and change according to the ambient sound. That bit became interesting a little later on, as it turns out.

Photo gear (for the subjects willing to be photographed), netbook, modified Mattel MindFlex, and Arduino in place, we were ready to start.

I’ll skip over the next few hours, as it was for the most part nothing but people putting on the MindFlex, doing a bit of reading and meditating and stuff, and then lying still and being sexually stimulated in various ways. I would hate to bore you with the details. Such details are the stuff of scientific research, but when described in black and white, they tend toward the drab and tedious: “Subject number three spread her legs while her partner slowly kissed his way down her body, until at three minutes and sixteen seconds reaching her clitoris, at which point the subject began to moan and…” You get the idea. Pretty dry stuff, right?

There are a few minor points that do bear mentioning, though. The striped socks did come into play again at one point, when the photographer got this rather awesome shot:

The second was the interesting way in which the necklace I mentioned previously would react when my fellow mad scientist was screaming, which was, in my estimation, pretty damn nifty.

The third, as I mentioned in an earlier post, is that the English language has no word to describe the experience of watching a pierced, tattooed woman you’ve only just met have a huge, screaming orgasm, then pull off the electrodes for the EEG machine, roll over, and start talking about sex-based differences in brain activation during sexual arousal. Dear God.


Now, at this point I have a confession to make, which, Dear Readers, I am trusting not to impact too severely your opinion of your humble scribe. I may lose some of my street cred as a veteran, seasoned pervert, but in the interests of full disclosure (for Science!) there is a confession I feel I must make.

I had not, up until this point in my life, actually had an orgasm in front of people I didn’t know personally. Oh, sure, I’d been to sex parties and played in public dungeons; I mean, really, who hasn’t? But until that afternoon in that seedy motel in the industrial part of Seattle, I’d not gone that one last inch (so to speak).

That all changed, though, and opened the way to a repeat performance, of a sort, in the dungeon at Frolicon some months later…but more about that at a later time.

I was rigged up, the baseline measurements were made, the timer was started, zaiah started doing things to me, and I in fact did have some incredible screaming orgasms of my own.

Four of them, in fact. I was right on the edge of the fifth when the fifteen-minute mark rolled by, and was left shaking and frustrated right on the edge. Much, I might add, to the delight of the onlookers, who seemed perhaps less than fully engaged in sympathy for my plight.

Experiment finally over, we parted ways. The Seattle folks went back to wherever Seattle folks go when they aren’t in run-down motel suites doing impromptu brain research about sex, and the rest of us headed out to dinner.

The dinner turned into a bit of a scientific enterprise itself, during which we attempted to establish a set of parameters by which we could decide whether key lime pie was a superior dessert to New York cheesecake…since, y’know, we were in the mood for Science and all. And, as it turned out, key lime pie is indeed a superior dessert. This is the sort of surprising result that one sometimes discovers when exploring the often counterintuitive ways of the physical world.

We only shocked the server once, with a passing reference to Eiffel Towering (the sex act, not the act of visiting the French landmark). That done, it was back to the motel suite, where I fell into a deep slumber and, I’m told, missed some more sexual hijinks of some sort or another.

On the way home the next day, we made a couple of interesting discoveries, which I will detail in the next chapter.