I for one welcome our new AI overlords

I’ve been thinking a lot about machine learning lately. Take a look at these images:

Portraits of people who don't exist

These people do not exist. They’re generated by a neural net program at thispersondoesnotexist.com, a site that uses Nvidia’s StyleGAN to generate images of faces.

StyleGAN is a generative adversarial network, a neural network trained on hundreds of thousands of photos of faces. One part of the network (the generator) produced images of faces, while another part of the same program (the “adversarial” part, the discriminator) tried to tell them apart from real photos. When generated faces fooled the discriminator, the connections that produced them were strengthened; when they didn’t, they were weakened. And so, over many iterations, the network’s ability to create convincing faces grew.
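Here’s the strengthen/weaken feedback loop in miniature. This is nothing like real StyleGAN (which has millions of parameters); it’s a toy where the “generator” is a single adjustable number and the “discriminator” scores samples by closeness to the real data it has seen. All names and numbers here are invented for illustration:

```python
import random

random.seed(0)
REAL_MEAN = 5.0   # the "real data" is just numbers near 5
lr = 0.05

def real_sample():
    return random.gauss(REAL_MEAN, 1.0)

def score(x, real_estimate):
    # The "discriminator": higher score = looks more like the real data.
    return -abs(x - real_estimate)

gen_param = 0.0       # the "generator" is one adjustable number
real_estimate = 0.0   # the discriminator's running estimate of real data

for step in range(2000):
    # Discriminator improves by looking at a real sample.
    real_estimate += lr * (real_sample() - real_estimate)
    # Generator tries two tweaks and keeps whichever fools the
    # discriminator better -- "strengthen what worked".
    up, down = gen_param + lr, gen_param - lr
    gen_param = up if score(up, real_estimate) > score(down, real_estimate) else down

print(round(gen_param, 1))  # ends up near the real mean of 5
```

The generator never sees the real data directly; it only gets the discriminator’s verdicts, and that pressure alone pulls it toward producing realistic output. That’s the whole adversarial trick.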

If you look closely at these faces, there’s something a little…off about them. They don’t look quiiiiite right, especially where clothing is concerned (look at the shoulder of the man in the upper left).

Still, that doesn’t prevent people from using fake images like these for political purposes. The “Hunter Biden story” was “broken” by a “security researcher” who does not exist, using a photo from This Person Does Not Exist, for example.

There are ways you can spot StyleGAN-generated faces. For example, the people behind This Person Does Not Exist found that the eyes tended to look weird, detached from the faces, so they fixed the problem in a brute-force but clever way: they trained StyleGAN to put the eyes in the same place on every face, regardless of which way it was turned. Faces generated at TPDNE always have the major features in the same place: eyes the same distance apart, nose in the same place, and so on.

StyleGAN fixed facial layout

StyleGAN can also generate other types of images, as you can see on This Waifu Does Not Exist:


Okay, so what happens if you train a GAN on images that aren’t faces?

That turns out to be a lot harder. The real trick there is tagging the images, so the GAN knows what it’s looking at. That way you can, for example, teach it to give you a building when you ask it for a building, a face when you ask it for a face, and a cat when you ask it for a cat.
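The tagging idea can be sketched in a few lines. In a real conditional model the per-tag parameters are learned jointly from millions of tagged images; in this toy, the “learned” values are invented stand-ins, and the point is only that the tag selects which distribution the generator draws from:

```python
import random

random.seed(1)

# Invented (mean, spread) of some image statistic "learned" per tag.
LEARNED_PARAMS = {
    "building": (100.0, 20.0),
    "face":     (10.0, 2.0),
    "cat":      (4.0, 1.0),
}

def generate(tag):
    # The tag conditions the generator: same sampling code,
    # different learned parameters.
    mean, spread = LEARNED_PARAMS[tag]
    return random.gauss(mean, spread)

# Asking for a "building" yields building-like output, not cat-like output.
print(generate("building"), generate("cat"))
```

Conditioning is what turns a face machine into an anything machine: one network, steered by the label you hand it.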

And that’s exactly what the folks at WOMBO have done. The WOMBO Dream app generates random images from any words or phrases you give it.

And I do mean “any” words or phrases.

It can generate cityscapes:




Body horror:

Abstract ideas:

On and on, endless varieties of images…I can play with it for hours (and I have!).

And believe me when I say it can generate images for anything you can think of. I’ve tried to throw things at it to stump it, and it’s always produced something that looks in some way related to whatever I’ve tossed its way.

War on Christmas? It’s got you covered:

I’ve even tried “Father Christmas encased in Giger sex tentacle:”

Not a bad effort, all things considered.

But here’s the thing:

If you look at these images, they’re all emotionally evocative; they all seem to get the essence of what you’re aiming at, but they lack detail. The parts don’t always fit together right. “Dream” is a good name: the images the GAN produces are hazy, dreamlike, insubstantial, without focus or particular features. The GAN clearly does not understand anything it creates.

And still, if an artist twenty years ago had developed this particular style the old-fashioned way, I have no doubt they would have become very popular indeed. AI is catching up to human capability in domains we have long thought required some spark of human essence, and doing it scary fast.

I’ve been chewing on what makes WOMBO Dream images so evocative. Is it simply promiscuous pattern recognition? The AI creating novel patterns we’ve never seen before by chewing up and spitting out fragments of things it doesn’t understand, causing us to dig for meaning where there isn’t any?

Given how fast generative machine learning programs are progressing, I am confident I will live to see AI-generated art that is as good as anything a human can do. And yet, I still don’t think the machines that create it will have any understanding of what they’re creating.

I’m not sure how I feel about that.

What squirrels taught me about post-scarcity societies

If you know any transhumanists or other forward-looking folks, you’ve probably encountered the notion of a “post-scarcity society.”

I just got back from a two-month writing retreat at a cabin deep in the heart of rural Washington, many miles from civilization. The squirrels there are quite talented at stealing birdseed from the bird feeders, and they taught me a lesson about transhumanism and post-scarcity society.

This might make me a bad transhumanist, but I think the hype about post-scarcity society is overblown, and I think the more Panglossian among the transhumanists have a poor handle on this whole matter of fundamental human nature.

I’ve written an essay about it over on Think Beyond Us, which includes a video of squirrel warfare. Here’s a teaser:

We’re moving toward the technology to do things in a completely different way: using tiny machines to build stuff from a molecular or atomic level. In the book Engines of Creation, K. Eric Drexler envisions a time when we will be able to fabricate almost anything we can imagine from simple raw materials and energy.

And on this foundation, futurists say, post-scarcity society will be built. If we can make anything from any raw materials cheaply or free, there is no longer a divide between rich and poor. Think Las Vegas where everyone is a millionaire whale. Want a car? A sofa? A cup of tea? Program assemblers with the characteristics of the thing you want, push a button, and presto! There it is.

In a society where everyone can have whatever stuff they want and nobody has to work, entertainment becomes very important indeed. And those who can provide it—those who can write, or sing, or perform—well, they control access to the only resource besides land that means anything.

So what, then, do we make of a society where the 1% are determined not in accordance with how many resources they control, but how creative they are? A Utopian might say that anyone can learn to be creative and entertaining; a look around the history of humanity suggests that isn’t true.

Those who own land today command one of the few resources that will matter tomorrow. Those who can entertain command the only thing that can buy that resource. And the rest of humanity? Suddenly, Utopia starts to look a whole lot less Utopian to them, and a whole lot more like the same old same old.

Check it out! You can read the whole thing here.

Musings on being fucked: Christian millennialism and the Fermi paradox

When all the world’s armies are assembled in the valley that surrounds Mount Megiddo they will be staging a resistance front against the advancing armies of the Chinese. It will be the world’s worst nightmare – nuclear holocaust at its worst. A full-out nuclear bombardment between the armies of the Antichrist’s and the Kings of the East.

It is during this nuclear confrontation that a strange sight from the sky will catch their attention. The Antichrist’s armies will begin their defense in the Jezreel Valley in which the hill of Megiddo is located. […] At the height of their nuclear assault on the advancing armies something strange will happen.

Jesus predicted the suddenness of His return. He said, “For just as lightening comes from the east, and flashes even to the west, so shall the coming of the Son of Man be” (Matt. 24:27). And again He said, “…and then the sign of the Son of Man will appear in the sky, and then all the tribes of the earth shall mourn, and they will see the Son of Man coming in the clouds of heaven with power and great glory” (Matt. 24:30).
–Sherry Shriner Live

Believers must be active in helping to fulfill certain biblical conditions necessary to usher in the return of Christ. Key to this plan is for Gentiles to help accomplish God’s purpose for the Jews. […] Jesus is saying that His Second Coming will not take place until there is a Jewish population in Jerusalem who will welcome Him with all of their hearts.
— Johannes Facius, Hastening the Coming of the Messiah: Your Role in Fulfilling Prophecy

There is a problem in astronomy, commonly referred to as the Fermi paradox. In a nutshell, the problem is, where is everyone?

Life seems to be tenacious and ubiquitous. Wherever we look here on earth, we see life–even in the most inhospitable of places. The stuff seems downright determined to exist. When combined with the observation that the number of planetary systems throughout the universe seems much greater than even the most optimistic projections of, say, thirty years ago, it really seems quite likely that life exists out there somewhere. In fact, it seems quite likely that life exists everywhere out there. And given that sapient, tool-using life evolved here, it seems quite probable that sapient, tool-using life evolved somewhere else as well…indeed, quite often. (Given that our local galactic supercluster contains literally quadrillions of stars, if sapient life exists in only one one-hundredth of one percent of the places life evolved and if life evolves in only one one-hundredth of one percent of the places that have planets, the universe should be positively teeming with sapience.)
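The parenthetical arithmetic is easy to check. Assuming (as the passage does) that every star counts as a place with planets, and plugging in one one-hundredth of one percent at each step:

```python
stars = 1e15      # ~quadrillions of stars in the local supercluster
f_life = 1e-4     # one one-hundredth of one percent develop life
f_sapient = 1e-4  # the same tiny fraction of those develop sapience

sapient_worlds = stars * f_life * f_sapient
print(f"{sapient_worlds:,.0f}")  # 10,000,000
```

Even with those absurdly pessimistic fractions, you get ten million sapient species in the neighborhood, which is the sense in which the universe “should be positively teeming with sapience.”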

These aren’t stars. They’re galaxies. Where is everyone? (Image: Hubble Space Telescope)

When you’re sapient and tool-using, radio waves are obvious. It’s difficult to imagine getting much beyond the steam engine without discovering them. Electromagnetic radiation bathes the universe, and most any tool-using sapience will, sooner or later, stumble across it. All kinds of technologies create, use, and radiate electromagnetic radiation. So if there are sapient civilizations out there, we should see evidence of it–even if they aren’t intentionally attempting to communicate with anyone.

But we don’t.

So the question is, why not?

This is Fermi’s paradox, and researchers have proposed three answers: we’re first, we’re rare, or we’re fucked. I have, until now, been leaning toward the “we’re rare” answer, but more and more, I think the answer might be “we’re fucked.”

Let’s talk about the “first” or “rare” possibilities.

The “first” possibility posits that our planet is exceptionally rare, perhaps even unique–of all the planets around all the stars everywhere in the universe, no other place has the combination of ingredients (liquid water and so on) necessary for complex life. Alternately, life is common but sapient life is not. It’s possible; there’s nothing especially inevitable about sapience. Evolution is not goal-directed, and big brains aren’t necessarily a survival strategy more common or more compelling than any other. After all, we’re newbies. There was no sapient life on earth for most of its history.

Assuming we are that unique, though, seems to underestimate the number of planets that exist, and overestimate the specialness of our particular corner of existence. There’s nothing about our star, our solar system, or even our galaxy that sets it apart in any way we can see from any of a zillion others out there. And even if sapience isn’t inevitable–a reasonable assumption–if life evolved elsewhere, surely some fraction of it must have evolved toward sapience! With quadrillions of opportunities, you’d expect to see it somewhere else.

The “we’re rare” hypothesis posits that life is common, but life like what we see here is orders of magnitude less common, because something happened here that’s very unlikely even on galactic or universal scales. Perhaps it’s the jump from prokaryotes (cells without a nucleus) to eukaryotes (cells with a nucleus, which are capable of forming complex multicellular animals). For almost the entire history of life on earth, only single-celled life existed, after all; multicellular life is a recent innovation. Maybe the universe is teeming with life, but none of it is more complex than bacteria.

Depressing thought: The universe has us and these guys in it, and that’s it.

The third hypothesis is “we’re fucked,” and that’s the one I’m most concerned about.

The “we’re fucked” hypothesis suggests that sapient life isn’t everywhere we look because wherever it emerges, it gets wiped out. It might be that it gets wiped out by a spacefaring civilization, a la Fred Saberhagen’s Berserker science fiction stories.

But maybe…just maybe…it won’t be an evil extraterrestrial what does us in. Maybe tool-using sapience intrinsically contains the seeds of its own annihilation.

K. Eric Drexler wrote a book called Engines of Creation, in which he posited a coming age of nanotechnology that would offer the ability to manipulate, disassemble, and assemble matter at a molecular level.

It’s not as farfetched as it seems. You and I, after all, are vastly complex entities constructed from the level of molecules by programmable molecular machinery able to assemble large-scale, fine-grained structures from the ground up.

All the fabrication technologies we use now are, in essence, merely evolutionary refinements on stone knives and bearskins. When we want to make something, we take raw materials and hack at, carve, heat, forge, or mold them into what we want.

Even the Large Hadron Collider is basically just incremental small improvements on this

The ability to create things from the atomic level up, instead of from big masses of materials down, promises to be more revolutionary than the invention of agriculture, the Iron Age, and the invention of the steam engine combined. Many of the things we take for granted–resources will always be scarce, resources must always be distributed unequally, it is not possible for a world of billions of people to have the standard of living of North America–will fade like a bad dream. Nanotech assembly offers the possibility of a post-scarcity society1.

It also promises to turn another deeply-held belief into a myth: Nuclear weapons are the scariest weapons we will ever face.

Molecular-level assembly implies molecular-level disassembly as well. And that…well, that opens the door to weapons of mass destruction on a scale as unimaginable to us as the H-bomb is to a Roman Centurion.

Cute little popgun you got there, son. Did your mom give you that?

Miracle nanotechnology notwithstanding, the course of human advancement has meant the distribution of greater and greater destructive power across wider and wider numbers of people. An average citizen today can go down to Wal-Mart and buy weapon technology that could have turned the tide of some of the world’s most significant historical battles. Even without nanotech, there’s no reason to think weapons technology and distribution just suddenly stopped in, say, 2006, and will not continue to increase from here on.

And that takes us to millennialist zealotry.

There are, in the world today, people who believe they have a sacred duty, given them by omnipotent supernatural entities, to usher in the Final Conflict between good and evil that will annihilate all the wicked with righteous fire, purging them from God’s creation. These millennialists don’t just believe the End is coming–they believe God has charged them with the task of bringing it about.

Christian millennialists long for nuclear war, which they believe will trigger the Second Coming. Some Hindus believe they must help bring about the end of days, so that the final avatar of Vishnu will return on a white horse to bring about the end of the current cycle and its corruption. In Japan, the Aum Shinrikyo sect believed it to be their duty to create the conditions for nuclear Armageddon, which they believed would trigger the ascendancy of the sect’s leader Shoko Asahara to his full divine status as the Lamb of God. Judaism, Islam, and nearly all other religious traditions have at least some adherents who likewise embrace the idea of global warfare that will cleanse the world of evil.

The notion of the purification of the world through violence is not unique to any culture or age–the ancient Israelites, for example, were enthusiastic fans of the notion–but it has particularly deep roots in American civic culture, and we export that idea all over the world. (The notion of the mythic superhero, for instance, is an embodiment of the idea of purifying violence, as the book Captain America and the Crusade Against Evil explains in some depth.)

I’m not suggesting that religious zealots have a patent on inventive destructiveness. From Chairman Mao to Josef Stalin, the 20th century is replete with examples of secular governments that are as gleefully, viciously bonkers as the most passionate of religious extremists.

But religious extremism does seem unique in one regard: we don’t generally see secularists embracing the fiery destruction of the entire world in order to cleanse us of evil. Violent secular institutions might want resources, or land, or good old-fashioned power, but they don’t usually seem to want to destroy the whole of creation in order to invoke a supernatural force to save it.

Putting it all together, we can expect that as time goes on, the trend toward making increasingly destructive technology available to increasingly large numbers of people will likely continue. Which means that, one day, we will likely arrive at the point where a sufficiently determined individual or small group of people can, in fact, literally unleash destruction on a global scale.

Imagine that, say, any reasonably motivated group of 100 or more people anywhere in the world could actually start a nuclear war. Given that millennialist end-times ideology is a thing, how safe would you feel?

It is possible, just possible, that we don’t see a universe teeming with sapient, tool-using, radio-broadcasting, exploring-the-cosmos life because sapient tool-using species eventually reach the point where any single individual has the ability to wipe out the whole species, and very shortly after that happens, someone wipes out the whole species.

“But Franklin,” I hear you say, “even if there are human beings who can and will do that, given the chance, that doesn’t mean space aliens would! They’re not going to be anything like us!”

Well, right. Sure. Other sapient species wouldn’t be like us.

But here’s the thing: We are, it seems, pretty unremarkable. We live on an unremarkable planet orbiting an unremarkable star in an unremarkable corner of an unremarkable galaxy. We’re probably not special snowflakes; statistically, the odds are good that the trajectory we have taken is, um, unremarkable.

Yes, yes, they’re all unique and special…but they all have six arms, too.
(Image: National Science Foundation.)

Sure, sapient aliens might be, overall, less warlike and aggressive (or more warlike and aggressive!) than we are, but does that mean every single individual is? If we take millions of sapient, tool-using species and give every individual of every one of those species the ability to push a button and destroy the whole species, how many species do you think would survive?
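You can put rough numbers on that thought experiment. Every figure below is a made-up assumption chosen only to show how the odds compound: a large population, a vanishingly small chance that any one individual presses the button in any given year, and a long stretch of time.

```python
individuals = 1e10    # population with access to a doomsday button
p_per_year = 1e-12    # chance any ONE individual presses it in a year
years = 10_000

# Probability nobody presses it in a single year, then over all years.
p_safe_one_year = (1 - p_per_year) ** individuals
p_survive = p_safe_one_year ** years
print(f"{p_survive:.2e}")
```

Even with a one-in-a-trillion annual chance per individual, survival over ten thousand years is effectively zero. The exponent does all the work: any nonzero per-individual risk, multiplied across everyone and across deep time, converges on doom.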

Perhaps the solution to the Fermi paradox is not that we’re first or we’re rare; perhaps we’re fucked. Perhaps we are rolling down a well-traveled groove, worn deep by millions of sapient species before us, a groove that ends in a predictable place.

I sincerely hope that’s not the case. But it seems possible it might be. Maybe, just maybe, our best hope to last as long as we can is to counter millennial thinking as vigorously as possible–not to save us, ultimately, but to buy as much time as we possibly can.

1Post-scarcity society of the sort that a lot of transhumanists talk about may never really be a thing, given there will always be something that is scarce, even if that “something” is intangible. Creativity, for instance, can’t be mass-produced. But a looser kind of post-scarcity society, in which material resources are abundant, does have some plausibility.

Some (More) Thoughts on Brain Modeling and the Coming Geek Rapture

The notion of “uploading”–analyzing a person’s brain and then modeling it, neuron by neuron, in a computer, thereby forever preserving that person’s knowledge and consciousness–is a fixture of transhumanist thought. In fact, self-described “futurists” like Ray Kurzweil will gladly expound at great length about how uploading and machine consciousness are right around the corner, and Any Day Now we will be able to live forever by copying ourselves into virtual worlds.

I’ve written extensively before about why I think that’s overly optimistic, and why Ray Kurzweil pisses me off. Our understanding of the brain is still remarkably poor–for example, we’re only just now learning how brain cells called “glial cells” are involved in the process of cognition–and even when we do understand the brain on a much deeper level, the tools for being able to map the connections between the cells in the brain are still a long way off.

In that particular post, I wrote that I still think brain modeling will happen; it’s just a long way off.

Now, however, I’m not sure it will ever happen at all.

I love cats.

Many people love cats, but I really love cats. It’s hard for me to see a cat when I’m out for a walk without wanting to make friends with it.

It’s possible that some of my love of cats isn’t an intrinsic part of my personality, in the sense that my personality may have been modified by a parasite commonly found in cats.

This is the parasite, in a color-enhanced scanning electron micrograph. Pretty, isn’t it? It’s called Toxoplasma gondii. It’s a single-celled organism that lives its life in two stages, growing to maturity inside the bodies of rats, and reproducing in the bodies of cats.

When a rat is infected, usually by coming into contact with cat droppings, the parasite grows but doesn’t reproduce. Its reproduction can only happen in a cat, which becomes infected when it eats an infected rat.

To help ensure its own survival, the parasite does something amazing. It controls the rat’s mind, exerting subtle changes to make the rat unafraid of cats. Healthy rats are terrified of cats; if they smell any sign of a cat, even a cat’s urine, they will leave an area and not come back. Infected rats lose that fear, which serves the parasite’s needs by making it more likely the rat will be eaten by a cat.

Humans can be infected by Toxoplasma gondii, but we’re a dead end for the parasite; it can’t reproduce in us.

It can, however, still work its mind-controlling magic. Infected humans show a range of behavioral changes, including becoming more generous and less bound by social mores and customs. They also appear to develop an affinity for cats.

There is a strong likelihood that I am a Toxoplasma gondii carrier. My parents have always owned cats, including outdoor cats quite likely to have been exposed to infected rats. So it is quite likely that my love for cats, and other, more subtle aspects of my personality (bunny ears, anyone?), have been shaped by the parasite.

So, here’s the first question: If some magical technology existed that could read the connections between all of my brain cells and copy them into a computer, would the resulting model act like me? If the model didn’t include the effects of Toxoplasma gondii infection, how different would that model be from who I am? Could you model me without modeling my parasites?

It gets worse.

The brain models we’ve built to date are all constructed from generic building blocks. We model neurons as though they are variations on a common theme, responding pretty much the same way. These models assume that the neurons in Alex’s head behave pretty much the same way as the neurons in Bill’s head.

To some extent, that’s true. But we’re learning that there can be subtle genetic differences in the way that neurons respond to different neurotransmitters, and these subtle differences can have very large effects on personality and behavior.

Consider this protein. It’s a model of a protein called AVPR-1a, which is used in brain cells as a receptor for the neurotransmitter called vasopressin.

Vasopressin serves a wide variety of different functions. In the body, it regulates water retention and blood pressure. In the brain, it regulates pair-bonding, stress, aggression, and social interaction.

A growing body of research shows that human beings naturally carry slightly different forms of the gene that produces this particular receptor, and that these tiny genetic differences result in tiny structural differences in the receptor which produce quite significant differences in behavior. For example, one subtle difference in the gene that produces this receptor changes the way that men bond to partners after sex; carriers of this particular genetic variation are less likely to experience intense pair-bonding, less likely to marry, and more likely to divorce if they do marry.

A different variation in this same gene produces a different AVPR-1a receptor that is strongly linked to altruistic behavior; people with that particular variant are far more likely to be generous and altruistic, and the amount of altruism varies directly with the number of copies of a particular nucleotide sequence within the gene.

So let’s say that we model a brain, and the model we use is built around a statistical computation for brain activation based on the most common form of the AVPR-1a gene. If we model the brain of a person with a different form of this gene, will the model really represent her? Will it behave the way she does?

The evidence suggests that, no, it won’t. Because subtle genetic variations can have significant behavioral consequences, it is not sufficient to upload a person using a generic model. We have to extend the model all the way down to the molecular level, modeling tiny variations in a person’s receptor molecules, if we wish to truly upload a person into a computer.
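The gap between a generic model and a person-specific one can be sketched in a few lines. The variant names and sensitivity numbers below are invented; real AVPR-1a variants differ in far messier ways than a single scaling factor. But the shape of the problem is the same: one shared neuron model, with behavior that diverges depending on a parameter the generic model doesn’t capture.

```python
# Invented receptor variants: a generic model would use only the first.
RECEPTOR_SENSITIVITY = {
    "common_variant": 1.0,
    "rare_variant": 0.6,
}

def neuron_response(vasopressin_level, variant):
    # Same signal, same model -- but the receptor variant scales
    # how strongly the neuron responds.
    return vasopressin_level * RECEPTOR_SENSITIVITY[variant]

signal = 10.0
print(neuron_response(signal, "common_variant"))  # 10.0
print(neuron_response(signal, "rare_variant"))    # 6.0
```

A model built only on the common variant would simulate the second person’s neurons 40% too strongly, and small per-neuron errors like that compound into large differences in high-level behavior.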

And that gives rise to a whole new layer of thorny moral issues.

There is a growing body of evidence suggesting that autism spectrum disorders are the result of genetic differences in neuron receptors, too. The same PDF I linked to above cites several studies that show a strong connection between various autism-spectrum disorders and differences in receptors for another neurotransmitter, oxytocin.

Vasopressin and oxytocin work together in complex ways to regulate social behavior. Subtle changes in production, uptake, and response to either or both can produce large, high-level changes in behavior, and specifically in interpersonal behavior–arguably a significant part of what we call a person’s “personality.”

So let’s assume a magic brain-scanning device able to read a person’s brain state and a magic computer able to model a person’s brain. Let’s say that we put a person with Asperger’s or full-blown autism under our magic scanner.

What do we do? Do we build the model with “normal” vasopressin and oxytocin receptors, thereby producing a model that doesn’t exhibit autism-spectrum behavior? If we do that, have we actually modeled that person, or have we created an entirely new entity that is some facsimile of what that person might be like without autism? Is that the same person? Do we have a moral imperative to model a person being uploaded as closely as possible, or is it more moral to “cure” the autism in the model?

In the previous essay, I outlined why I think we’re still a very long way from modeling a person in a computer–we lack the in-depth understanding of how the glial cells in the brain influence behavior and cognition, we lack the tools to analyze and quantify the trillions of interconnections between neurons, and we lack the computational horsepower to run such a simulation even if we could build it.

Those are technical objections. The issue of modeling a person all the way down to the level of genetic variation in neurotransmitter and receptor function, however, is something else.

Assuming we overcome the limitations of the first round of problems, we’re still left with the fact that there’s a lot more going on in the brain than generic, interchangeable neurons behaving in predictable ways. To actually copy a person, we need to be able to account for genetic differences in the structure of receptors in the brain…

…and even if we do that, we still haven’t accounted for the fact that organisms like Toxoplasma gondii can and do change the behavior of the brain to suit their own ends. (I would argue that a model of me that was faithful clear down to the molecular level probably wouldn’t be a very good copy if it didn’t include the effects that the parasite has had on my personality–effects that we still have no way to quantify.)

Sorry, Mr. Kurzweil, we’re not there yet, and we’re not likely to be any time soon. Modeling a specific person’s brain in a computer is orders of magnitude harder than you think it is. At this point, I can’t even say with certainty that I think it will ever happen.

Personhood Theory: A Primer

Quite some time ago, I wrote a blog post about the notion of inalienable rights, in which I mentioned the concept of personhood theory, an ethical structure that provides a framework for deciding what is and is not a “person.”

The idea of inalienable rights isn’t necessarily the same as the idea of personhood, though in most moral systems they’re certainly related. Most of us at least recognize the term “human rights,” and tend to think of them as being good things, and something separate from, say, animal rights.

Now, I will grant that the notion of human rights, if history is any example, is more of a pretty sound-bite than anything we as a species actually take seriously.

To quote from one of my favorite George Carlin skits: “Now, if you think you do have rights, one last assignment for you. Next time you’re at the computer, get on the Internet, go to Wikipedia. When you get to Wikipedia, in the search field for Wikipedia, I want you to type in “Japanese Americans 1942,” and you’ll find out all about your precious fuckin’ rights, okay? …Just when these American citizens needed their rights the most, their government took ’em away. And rights aren’t rights if someone can take ’em away. They’re privileges. That’s all we’ve ever had in this country, a bill of temporary privileges.”

So it is with some skepticism, leavened with a dash of cynicism, that I talk about the notion of “rights” at all.

However, the fact that we tend not to be very good at respecting things like “human rights” doesn’t mean the idea has no value. In fact, quite the opposite; I think that the notion that there are certain things which one simply should not be permitted to do to others, and certain things which all of us ought to be able to expect that we can do, is not only valuable but also absolutely essential–not just in an ethical sense, but in a practical sense too. I believe quite strongly that respecting the idea of “human rights” is not just a moral imperative; it has immediate, utilitarian benefits to the societies which respect them, and the more a society respects these ideas, the better (in many tangible ways) that society becomes.

But that’s a bit off the point. What I actually want to talk about is personhood theory specifically, rather than the idea of rights in general.

In the US these days, the idea of “personhood” has become conflated with the abortion debate. The Religious Right has been advocating the notion of “personhood” as a way to promote an anti-abortion agenda, so when I’ve talked about “personhood theory” in the last few months a lot of folks have assumed that what I’m talking about is abortion.

Personhood theory as an ethical framework isn’t (directly) related to abortion at all. As an ethical principle, the idea behind personhood theory is pretty straightforward: “Personhood,” and with it all the rights that we now call “human rights,” belongs to any sapient entity.

Put most simply, that means that a hypothetical intelligent alien organism, a hypothetical “strong” AI, a person whose consciousness has been transferred into a computer, or an animal that has been modified to be sapient would all qualify as “people” and would be entitled to the rights and responsibilities of people, just like you or me.

Now, there is one potential pickle in this definition, of course, and that’s in the notion of sapience.

It’s impossible to prove that a computer, or an uploaded person, or even your next-door neighbor is sapient. We can apply the Turing test to a computer to see if it can converse fluently and flexibly enough to be indistinguishable from a human being, but that presupposes that artificial intelligence would be similar to natural intelligence, which isn’t necessarily so. We can test generalized problem-solving capability, though it’s possible to imagine that what looks like intelligent problem-solving is actually brute-force, blind pattern matching done very quickly, of the kind a computer chess-playing program does.
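That brute-force style of play can be sketched in a few lines of Python. This is purely illustrative (a toy game tree, not any real chess engine), but it shows how “intelligent-looking” choices can emerge from nothing but mechanical enumeration:

```python
# Exhaustive minimax over a tiny game tree, given as nested lists whose
# leaves are scores for the maximizing player. Illustrative toy only;
# real chess programs add pruning and heuristics, but the principle --
# blind, fast enumeration of possibilities -- is the same.

def minimax(node, maximizing=True):
    """Return the best achievable score by searching the whole tree."""
    if isinstance(node, (int, float)):  # leaf: a terminal position's score
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# A depth-2 tree: the maximizer picks a branch, the minimizer replies.
# The minimizer yields 3 on the left branch and 2 on the right, so the
# maximizer "cleverly" picks the left branch -- with zero understanding.
tree = [[3, 5], [2, 9]]
best = minimax(tree)
```

Nothing in that procedure resembles comprehension, yet scaled up and sped up it plays a very convincing game.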

But ultimately, it may not really matter that we can’t ever come up with a way to step into the subjective experience of an alien or an uplifted animal or a computer and say that it is sapient, because we can’t do that with a person, either.

I can’t be absolutely, 100% certain that I am not the only person in the world with self-awareness and a rich subjective internal experience. It might be that my neighbor and the clerk at the convenience store down the street and the cute blond lesbian with facial piercings who used to work in the sandwich shop near me are actually “philosophical zombies,” utterly devoid of any internal experience, repeating words and phrases, paying taxes, doing their jobs only through some kind of incredibly complex clockwork. But it doesn’t matter, because when I make ethical decisions, the negative effects of assuming everyone else to be an empty clockwork shell, should I be wrong, are much more profound than the ethical consequences of assuming they are aware, living people and being wrong. The ethical principle of least harm demands that if they seem to be people, I treat them as people. The alternative is sociopathy.

The same moral logic applies to uploaded people and smart computers. No, I cannot objectively prove that they are self-aware entities instead of fabulous automatons, so basic ethics demand that if they appear to be self-aware entities, I treat them as I would treat self-aware entities.

All this is, I believe, a pretty straightforward idea. But the concept of personhood theory often runs off the rails when people, particularly social or religious conservatives, talk about it, for reasons that I find very, very interesting.

The arch-conservative, Creation “Science” Discovery Institute says of personhood theory, “In this new view on life, each human being doesn’t have moral worth simply and merely because he or she is human, but rather, we each have to earn our rights by possessing sufficient mental capacities to be considered a person. Personhood theory provides moral justification to oppress and exploit the most vulnerable human beings.”

An article in SFGate that takes a similar approach says, “Relying on personhood instead of humanhood as the fundamental basis for determining moral worth threatens the lives and well-being of the most defenseless and vulnerable humans among us. Here’s why: In personhood theory, taking life is only wrong if the being killed was a “person” who wanted to remain alive. […] Basing public policy on such theories leads to very dark places. Some bioethicists justify the killing of Alzheimer’s patients and infants born with disabilities. Others suggest that people in comas can be killed and their organs harvested if their families consent, or used in medical experiments in place of animals.”

Self-described ethicist Wesley J. Smith, who has worked with the Discovery Institute, claims that personhood theory is nothing more than an attempt to legalize infanticide: “‘After-Birth Abortion’ is merely the latest example of bioethical argument wielded as the sharp point of the spear in an all-out philosophical war waged among the intelligentsia against Judeo/Christian morality based in human exceptionalism and adherence to universal human rights. In place of intrinsic human dignity as the foundation for our culture and laws, advocates of the new bioethical order want moral value to be measured individual-by-individual — whether animal or human — and moment-by-moment. Under this view, we each must earn full moral status by currently possessing capacities sufficient to be deemed a ‘person.'”

Now, I will admit that when I first heard of some of these objections to personhood theory, I was absolutely gobsmacked. It seemed beyond all reason to misinterpret and misrepresent what, to me, seemed like such a simple idea in such a profound way.

But the more I thought about it, the more it made sense that people would interpret personhood theory in such a bizarre, backwards way…because the idea simply does not fit into their conceptual worldview. They interpret the idea incorrectly because their frame of reference doesn’t permit them to view it as it was intended.

The gist of personhood theory is expansive. It expands the conventional definition of “person” beyond “human,” to include a number of hypothetical non-human entities, should they ever exist. Personhood theory says “It’s not just human beings who are persons; anything which is sapient is a person, too.”

The objections to personhood theory see it as a constrictive or limiting framework. This suggests to me that these objections betray a worldview in which human beings are the only things which are persons, so any definition of the word “person” that is not “a human being” must necessarily limit personhood to only a subset of human beings.

It is trivially demonstrable (even if we cannot objectively state with absolute certainty that something is sapient) that all of us, at some time or another, are not sapient. A human being who is under general anesthesia would fail any test for sapience, or indeed awareness of any sort. A sleeping person is less sentient than an awake dog. I myself am rarely sapient before 9 AM under the best of circumstances. (It is beyond the scope of this discussion to ponder whether a person who is in an irreversible coma or whose mind has been destroyed by Alzheimer’s still has the same rights as any other person; whether or not things like euthanasia are ethical is irrelevant to the concept of personhood theory as I am discussing it.)

Personhood theory, at least in its original formulation, clearly applies only to classes of entities, not to individuals within a class. So for example, human beings are sapient, regardless of the fact that each of us experiences transient non-sapience from time to time; ergo, human beings are people. Strong AIs, if they ever exist, would (by definition) be sapient, even if individual AIs themselves were to be disabled or shut down or whatever; therefore, strong AIs are people.

Personhood theory as a construct works on a general, not an individual, level. No transhumanist or bioethicist who talks about personhood theory proposes that it can be used to justify shooting sleeping people on the basis that they aren’t sapient and are therefore not really people; such an interpretation is, on the face of it, absurd. (I will leave it as an exercise to the reader as to whether or not it’s more absurd than the notion that dinosaurs lived in the Garden of Eden and were present on Noah’s ark.)

Rather, transhumanists and bioethicists who talk about personhood theory–at least in my experience–use it as a way to construct some sort of system for deciding who else gets “human” rights in addition to human beings, with the obvious candidates being the ones I’ve mentioned.

There is, though I hate to say this, particular irony in Wesley Smith’s talk of “Judeo/Christian morality based in human exceptionalism and adherence to universal human rights,” considering the Judeo/Christian track record on such issues as slavery. “Universal human rights,” in the Judeo/Christian literature, are anything but universal. The cynic in me is reluctant to place the application of universal rights to anyone, much less non-human entities, in the care of conservative guardians of Judeo/Christian morality.

It took quite a long time for people to figure out that human beings with a different color of skin were people; the Southern Baptist Convention was doctrinally white supremacist until after WWII, and the Mormon church was doctrinally white supremacist until 1978. To this very day, the Discovery Institute seeks to deny “universal human rights” to gays and lesbians, using one of the most bizarre chains of logic I’ve ever witnessed outside of questions about how we know dinosaurs and human beings shared the same space at the same time.

I frankly do not envy the first uploaded person or the first true AI. Any non-human sapience will, if history is any guide, have a rough time being treated as anything other than property. The people who object to personhood theory because they see it as a constriction rather than an expansion of the idea of personhood are, I think, quite literally incapable of recognizing the personhood of something like an AI; it exists so far outside their worldview that the argument doesn’t even seem to make sense to them.

And in a world where strong AI exists, I fear for what that means for us, and what that says about our abilities as moral entities.

Some thoughts on post-scarcity societies

One of my favorite writers at the moment is Iain M. Banks. Under that name, he writes science fiction set in a post-scarcity society called the Culture, where he deals with political intrigue and moral issues and technology and society on a scale that almost nobody else has ever tried. (In fact, his novel Use of Weapons is my all-time favorite book, and I’ve written about it at great length here.) Under the name Iain Banks, he writes grim and often depressing novels not related to science fiction, and wins lots of awards.

The Culture novels are interesting to me because they are imagination writ large. Conventional science fiction, whether it’s the cyberpunk dystopia of William Gibson or the bland, banal sterility of (God help us) Star Trek, imagines a world that’s quite recognizable to us…or at least to those of us who are white 20th-century Westerners. (It’s always bugged me that the alien races in Star Trek are not really very alien at all; they are more like conventional middle-class white Americans than even, say, Japanese society is, and way less alien than the Serra do Sol tribe of the Amazon basin.) They imagine a future that’s pretty much the same as the present, only more so; “Bones” McCoy, a physician, talks about how death at the ripe old age of 80 is part of Nature’s plan, as he rides around in a spaceship made by welding plates of steel together.

Image from Wikimedia Commons by Hill – Giuseppe Gerbino

In the Culture, by way of contrast, everything is made by atomic-level nanotech assembly processes. Macroengineering exists on a huge scale, so huge that the majority of the Culture’s citizens by far live on orbitals–artificially constructed habitats encircling a star. (One could live on a planet, of course, in much the way that a modern person could live in a cave if she wanted to; but why?) The largest spacecraft, General Systems Vehicles, have populations that range from the tens of millions to six billion or more.

Virtually limitless sources of energy (something I’m planning to blog about later) and virtually unlimited technical ability to make just about anything from raw atoms mean that there is no such thing as scarcity; whatever any person needs, that person can have, immediately and for free.

And the definition of “person” goes much further, too; whereas in the Star Trek universe, people are still struggling with the idea that a sentient android might be a person, in the Culture, personhood theory (something else about which I plan to write) is the bedrock upon which all other moral and ethical systems are built. Many of the Culture’s citizens are drones or Minds–non-biological computers, of a sort, that range from about as smart as a human to millions of times smarter. Calling them “computers” really is an injustice; it’s about on par with calling a modern supercomputer a string of counting beads. Spacecraft and orbitals are controlled by vast Minds far in advance of unaugmented human intellect.

I had a dream, a while ago, about the Enterprise from Star Trek encountering a General Systems Vehicle, and the hilarity that ensued when they spoke to each other: “Why, hello, Captain Kirk of the Enterprise! I am the GSV Total Internal Reflection of the Culture. You came here in that? How…remarkably courageous of you!”

And speaking of humans…

The biological people in the Culture are the products of advanced technology just as much as the Minds are. They have been altered in many ways; their immune systems are far more resilient, they have much greater conscious control over their bodies; they have almost unlimited life expectancies; they are almost entirely free of disease and aging. Against this backdrop, the stories of the Culture take place.

Banks has written a quick overview of the Culture, and its technological and moral roots, here. A lot of the Culture novels are, in a sense, morality plays; Banks uses the idea of a post-scarcity society to examine everything from bioethics to social structures to moral values.

In the Culture novels, much of the society is depicted as pretty Utopian. Why wouldn’t it be? There’s no scarcity, no starvation, no lack of resources or space. Because of that, there’s little need for conflict; there’s neither land nor resources to fight over. There’s very little need for struggle of any kind; anyone who wants nothing but idle luxury can have it.

For that reason, most of the Culture novels concern themselves with Contact, that part of the Culture which is involved with alien, non-Culture civilizations; and especially with Special Circumstances, that part of Contact whose dealings with other civilizations extend into the realm of covert manipulation, subterfuge, and dirty tricks.

Of which there are many, as the Culture isn’t the only technologically sophisticated player on the scene.

But I wonder…would a post-scarcity society necessarily be Utopian?

Banks makes a case, and I think a good one, for the notion that a society’s moral values depend to a great extent on its wealth and the difficulty, or lack thereof, of its existence. Certainly, there are parallels in human history. I have heard it argued, for example, that societies from harsh desert climates produce harsh moral codes, which is why we see commandments in Leviticus detailing at great length and with an almost maniacal glee whom to stone, when to stone them, and where to splash their blood after you’ve stoned them. As societies become more civil and more wealthy, as every day becomes less of a struggle to survive, those moral values soften. Today, even the most die-hard of evangelical “execute all the gays” Biblical literalists rarely speaks out in favor of stoning women who are not virgins on their wedding night, or executing people for picking up a bundle of sticks on the Sabbath, or dealing with the crime of rape by putting to death both the rapist and the victim.

I’ve even seen it argued that as civilizations become more prosperous, their moral values must become less harsh. In a small nomadic desert tribe, someone who isn’t a team player threatens the lives of the entire tribe. In a large, complex, pluralistic society, someone who is too xenophobic, too zealous in his desire to kill anyone not like himself, threatens the peace, prosperity, and economic competitiveness of the society. The United States might be something of an aberration in this regard, as we are both the wealthiest and also the most totalitarian of the Western countries, but in the overall scope of human history we’re still remarkably progressive. (We are becoming less so, turning more xenophobic and rabidly religious as our economic and military power wanes; I’m not sure that the one is directly the cause of the other, but those two things definitely seem to be related.)

In the Culture novels, Banks imagines this trend as a straight line going onward; as societies become post-scarcity, they tend to become tolerant, peaceful, and Utopian to an extreme that we would find almost incomprehensible, Special Circumstances aside. There are tiny microsocieties within the Culture that are harsh and murderously intolerant, such as the Eaters in the novel Consider Phlebas, but they are also not post-scarcity; the Eaters have created a tiny society in which they have very little and every day is a struggle for survival.

We don’t have any models of post-scarcity societies to look at, so it’s hard to do anything beyond conjecture. But we do have examples of societies that had little in the way of competition, that had rich resources and no aggressive neighbors to contend with, and had very high standards of living for the time in which they existed that included lots of leisure time and few immediate threats to their survival.

One such society might be the Aztec empire, which spread through the central parts of modern-day Mexico during the 14th and 15th centuries. The Aztecs were technologically sophisticated and built a sprawling empire based on a combination of trade, military might, and tribute.

Because they required conquered people to pay vast sums of tribute, the Aztecs themselves were wealthy and comfortable. Though they were not industrialized, they lacked for little. Even commoners had what was for the time a high standard of living.

And yet, they were about the furthest thing from Utopian it’s possible to imagine.

The religious traditions of the Aztecs were bloodthirsty in the extreme. So voracious was their appetite for human sacrifices that they would sometimes conquer neighbors just to capture a steady stream of sacrificial victims. Commoners could make money by selling their daughters for sacrifice. Aztec records document tens of thousands of sacrifices just for the dedication of a single temple.

So they wanted for little, had no external threats, had a safe and secure civilization with a stable, thriving economy…and they turned monstrous, with a contempt for human life and a complete disregard for human value that would have made Pol Pot blush. Clearly, complex, secure, stable societies don’t always move toward moral systems that value human life, tolerate diversity, and promote individual dignity and autonomy. In fact, the Aztecs, as they became stronger, more secure, and more stable, seemed to become more bloodthirsty, not less. So why is that? What does that say about hypothetical societies that really are post-scarcity?

One possibility is that where there is no conflict, people feel a need to create it. The Aztecs fought ritual wars, called “flower wars,” with some of their neighbors–wars not over resources or land, but whose purpose was to supply humans for sacrifice.

Now, flower wars might have had a prosaic function not directly connected with religious human sacrifice, of course. Many societies use warfare as a means of disposing of populations of surplus men, who can otherwise lead to social and political unrest. In a civilization that has virtually unlimited space, that’s not a problem; in societies which are geographically bounded, it is. (Even for modern, industrialized nations.)

Still, religion unquestionably played a part. The Aztecs were bloodthirsty at least to some degree because they practiced a bloodthirsty religion, and vice versa. This, I think, indicates that a society’s moral values don’t spring entirely from what is most conducive to that society’s survival. While the things that a society must do in order to survive, and the factors that are most valuable to a society’s functioning at whatever level it finds itself, will affect that society’s religious beliefs (and those beliefs will change to some extent as the needs of the society change), there would seem to be at least some corner of a society’s moral structures that are entirely irrational and completely divorced from what would best serve that society. The Aztecs may be an extreme example of this.

So what does that mean to a post-scarcity society?

It means that a post-scarcity society, even though it has no need of war or conflict, may still have both war and conflict, despite the fact that they serve no rational role. There is no guarantee that a post-scarcity society necessarily must be a rationalist society; while reaching the point of post-scarcity does require rationality, at least in the scientific and technological arts, there’s not necessarily any compelling reason to assume that a society that has reached that point must stay rational.

And a post-scarcity society that enshrines irrational beliefs, and has contempt for the value of human life, would be a very scary thing indeed. Imagine a society of limitless wealth and technological prowess that has a morality based on a literalistic interpretation of Leviticus, for instance, in which women really are stoned to death if they aren’t virgins on their wedding night. There wouldn’t necessarily be any compelling reason for a post-scarcity society not to adopt such beliefs; after all, human beings are a renewable resource too, so it would cost the society little to treat its members with indifference.

As much as I love the Culture (and the idea of post-scarcity societies in general), I don’t think it’s a given that they would be Utopian.

Perhaps as we continue to advance technologically, we will continue to domesticate ourselves, so that the idea of being pointlessly cruel and warlike would seem quite horrifying to our descendants who reach that point. But if I were asked to make a bet on it, I’m not entirely sure which way I’d bet.

What is transhumanism?

A couple of weeks ago, I realized that I spend a fair bit of time both here in my blog and over on my Web site writing about transhumanism, but I’ve never actually written an article explaining what it is.

Wikipedia defines transhumanism as “an international intellectual and cultural movement that affirms the possibility and desirability of fundamentally transforming the human condition by developing and making widely available technologies to eliminate aging and to greatly enhance human intellectual, physical, and psychological capacities.” That’s true in a sort of reductionist sense, but I’m not sure it’s a terribly useful definition.

If I were to define transhumanism, I’d say that it’s an idea whose premise is that human nature is not some fixed quantity, forever unalterable; it’s something that is a consequence of our biology and our environment, and it can be changed. Furthermore, advances in technology and in our understanding of biology, chemistry, and physics, give us the power to change it as we wish–to take evolution from a blind, undirected process to a process that we can make choices about. It’s predicated on the idea that we can, if we so desire, choose what it means to be human.

A great deal of conventional thought has always held on to the idea that “human nature” is something that’s a fundamental part of who we are, forever unalterable. Certain aspects of the human condition, from mortality to aggression, from disease to territoriality, have always been thought of as fixtures of the human condition; no matter how our society changes, no matter what we learn, these things have been assumed to be an immutable part of us.

Transhumanist thought holds that this isn’t so. We are physical entities, whose nature comes from an extraordinarily complex dance of biochemical processes happening in our bodies. The way we respond to stress, the way we behave, the way our bodies suffer gradually increasing debility, all these things are the consequence of the physical processes happening inside our bodies and brains.

And they can change. Improved diet has made us qualitatively different from our neolithic ancestors–taller, longer-lived. Thousands of generations living in large numbers have made us more able to function in complex social environments; we have, in a sense, domesticated ourselves.

Right now, advances in biotechnology offer to revolutionize our view of who we are. What if aging and death were no longer inevitable? What if we could invent ways to repair genetic disorders? What if the human brain, which is a physical organ, could be modeled inside a computer? What if we could develop techniques to make our brains operate more efficiently? These sound like science fiction to a lot of people, but every single one of them is the subject of active research in labs around the world right now.

Transhumanism is a highly rationalist idea. It rejects the notion that human beings are corrupt, doomed to suffer and die as a result of a fall from grace. Rather, it postulates that the things that make us who we are are knowable and comprehensible; that the state of being human is a fit subject for scientific inquiry; and that as we learn more about ourselves, our ability to shape who we are increases.

The implications of these ideas are deeply profound. Transhumanist philosophy is built from the notion that things like indefinite lifespan, brain modeling, and improvement of human physical and intellectual capacity are both possible and desirable. Transhumanism, therefore, is profoundly optimistic.

It is not, however, Utopian. Like all new technologies, these things all have potential consequences whose outlines we can’t see clearly yet. Therefore, transhumanism tends to be concerned not only with the possibility of biomedical technology but also with its ethics; the study of transhumanism is, in large part, the study of bioethics. Who controls the direction of new, disruptive biomedical technology? What does it mean to be a “person”? Is an artificial intelligence a person? How should new biomedical technology be introduced into society? How can it be made available democratically, to everyone who wishes it? What role is available to people who for whatever reason don’t choose to benefit from new advances in medical understanding?

At its core, transhumanism is deeply pragmatic. Since it seems likely that biotechnology is going to improve over time whether we think about the implications of it or not, transhumanists think about things like bioethics, immortality, and the nature of consciousness in concrete, real-world terms, rather than as philosophical exercises. One of the things I most like about transhumanism is its drive to ask questions like “How can we maximize the benefit of what we are learning while maintaining human agency, dignity, and the right to choose?” Transhumanists are invited to be skeptical about everything, including the premises of transhumanism. It is quite likely that whatever views of the future we dream up will be flawed, as most prognostication tends to be. But by getting into the habit of examining these ideas now, and of considering the moral and ethical dimensions of our accelerating understanding of biology, we can at least train ourselves to get into the habit of asking the right questions as new breakthroughs come.

Another podcast interview!

A couple weeks back, I had the opportunity to meet Shira B. Katz of the Pedestrian Polyamory podcast. What followed was a conversation about polyamory, transhumanism, mad science, programmable sex toys, and all sorts of other stuff, which you can listen to here.

The interview was great fun. I’d definitely love to do it again.

Transhumanism, Technology, and the da Vinci Effect

[Note: There is a followup to this essay here]

Ray Kurzweil pisses me off.

His name came up last night at Science Pub, which is a regular event, hosted by a friend of mine, that brings in guest speakers on a wide range of different science and technology related topics to talk in front of an audience at a large pub. There’s beer and pizza and really smart scientists talking about things they’re really passionate about, and if you live in Portland, Oregon (or Eugene or Hillsboro; my friend is branching out), I can’t recommend them enough.

Before I can talk about why Ray Kurzweil pisses me off–or, more precisely, before I can talk about some of the reasons Ray Kurzweil pisses me off, as an exhaustive list would most surely strain my patience to write and your patience to read–it is first necessary to talk about what I call the “da Vinci effect.”

Leonardo da Vinci is, in my opinion, one of the greatest human beings who has ever lived. He embodies the best in our desire to learn; he was interested in painting and sculpture and anatomy and engineering and just about every other thing worth knowing about, and he took time off of creating some of the most incredible works of art the human species has yet created to invent the helicopter, the armored personnel carrier, the barrel spring, the Gatling gun, and the automated artillery fuze…pausing along the way to record innovations in geography, hydraulics, music, and a whole lot of other stuff.

However, most of his inventions, while sound in principle, were crippled by the fact that he could not conceive of any power source other than muscle power. The steam engine was still more than two and a half centuries away; the internal combustion engine, another half-century or so after that.

da Vinci had the ability to anticipate the broad outlines of some really amazing things, but he could not build them, because he lacked one essential element whose design and operation were way beyond him or the society he lived in, both in theory and in practice.

I tend to call this the “da Vinci effect”–the ability to see how something might be possible, but to be missing one key component that’s so far ahead of the technology of the day that it’s not possible even to hypothesize, except perhaps in broad, general terms, how it might work, and not possible even to anticipate with any kind of accuracy how long it might take before the thing becomes reachable.

Charles Babbage’s Difference Engine is another example of an idea whose realization was held back by the da Vinci effect.

Babbage reasoned–quite accurately–that it was possible to build a machine capable of mathematical computation. He also reasoned that it would be possible to construct such a machine in such a way that it could be fed a program–a sequence of logical steps, each representing some operation to carry out–and that on the conclusion of such a program, the machine would have solved a problem. This last bit differentiated his conception of a computational engine from other devices (such as the Antikythera mechanism) which were built to solve one particular problem and could not be programmed.

The technology of the time, specifically with respect to precision metal casting, meant his design for a mechanical computer was never realized in his lifetime. Today, we use devices every day that operate by principles he imagined, but they aren’t mechanical; in place of gears and levers, they use gates that control the flow of electrons–something he could never have envisioned given the understanding of his time.
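Babbage’s engine mechanized the method of finite differences, which can tabulate any polynomial using nothing but repeated addition–exactly what columns of gears can do. Here’s a minimal Python sketch of that idea (the function names and structure are mine, purely for illustration):

```python
# Tabulating a polynomial the way Babbage's Difference Engine did:
# seed a set of registers with the function value and its forward
# differences, then generate each new table entry by addition alone.

def difference_table(poly, start, steps):
    """Return [f(start), f(start+1), ...] for the polynomial given as
    a list of coefficients (highest power first), using only addition
    after the initial seeding."""
    degree = len(poly) - 1

    def f(x):
        # Horner evaluation -- used only to seed the initial registers.
        result = 0
        for c in poly:
            result = result * x + c
        return result

    # Seed registers: f(start), then its 1st, 2nd, ... differences.
    values = [f(start + i) for i in range(degree + 1)]
    diffs = []
    while len(values) > 1:
        diffs.append(values[0])
        values = [b - a for a, b in zip(values, values[1:])]
    diffs.append(values[0])  # top difference is constant for a polynomial

    table = []
    for _ in range(steps):
        table.append(diffs[0])
        # Each register absorbs the one above it -- pure addition,
        # which is all the engine's gear columns could perform.
        for i in range(len(diffs) - 1):
            diffs[i] += diffs[i + 1]
    return table
```

For x², for instance, the successive values 0, 1, 4, 9, 16 emerge from adding the odd numbers 1, 3, 5, 7, whose own differences are a constant 2; that reduction of multiplication to cascaded addition is the whole trick.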

One of the speakers at last night’s Science Pub was Dr. Larry Sherman, a neurobiologist and musician who runs a research lab here in Oregon that’s currently doing a lot of cutting-edge work in neurobiology. He’s one of my heroes; I’ve seen him present several times now, and he’s a fantastic speaker.

Now, when I was in school studying neurobiology, things were very simple. You had two kinds of cells in your brain: neurons, which did all the heavy lifting involved in the process of cognition and behavior, and glial cells, which provided support for the neurons, nourished them, repaired damage, and cleaned up the debris from injury or dead cells.

There are a couple of broad classifications for glial cells: astrocytes and microglia. Astrocytes, shown in green in this picture, provide a physical scaffold to hold neurons (in blue) in place. They wrap the axons of neurons in protective sheaths and they absorb nutrients and oxygen from blood vessels, which they then pass on to the neurons. Microglia are cells that are kind of like little amoebas; they swim around in your brain locating dead or dying cells, pathogens, and other forms of debris, and eating them.

So that’s the background.

Ray Kurzweil is a self-styled “futurist,” transhumanist, and author. He’s also a Pollyanna with little real rubber-on-road understanding of the challenges that nanotechnology and biotechnology face. He talks a great deal about AI, human/machine interface, and uploading–the process of modeling a brain in a computer such that the computer is conscious and aware, with all the knowledge and personality of the person being modeled.

He gets a lot of it wrong, but it’s the last bit he gets really wrong. Not the general outlines, mind you, but certainly the timetable. He’s the guy who looks at da Vinci’s notebook and says “Wow, a flying machine? That’s awesome! Look how detailed these drawings are. I bet we could build one of these by next spring!”

Anyway, his name came up during the Q&A at Science Pub, and I kind of groaned. Not as much as I did when Dr. Sherman suggested that a person whose neurons had been replaced with mechanical analogues wouldn’t be a person any more, but I groaned nonetheless.

Afterward, I had a chance to talk to Dr. Sherman briefly. The conversation was short; only just long enough for him to completely blow my mind, to make me believe that a lot of ideas about uploading are limited by the da Vinci effect, and to suggest that much brain modeling research currently going on is (in his words) “totally wrong”.

It turns out that most of what I was taught about neurobiology was utterly wrong. Our understanding of the brain has exploded in the last few decades. We’ve learned that people can and do grow new brain cells all the time, throughout their lives. And we’ve learned that the glial cells do a whole lot more than we thought they did.

Astrocytes, long believed to be nothing but scaffolding and cafeteria workers, are strongly implicated in learning and cognition, as it turns out. They not only support the neurons in your brain, but they guide the process of new neural connections, the process by which memory and learning work. They promote the growth of new neural pathways, and they also determine (at least to some degree) how and where those new pathways form.

In fact, human beings have more distinct types of astrocytes than other vertebrates do. According to my brief conversation with Dr. Sherman, researchers have taken human astrocytes, implanted them in developing mice, and observed an apparent increase in the cognitive functions of those mice even though the neurons themselves were no different.

And, more recently, it turns out that microglia–the garbage collectors and scavengers of the brain–can influence high-order behavior as well.

The last bit is really important, and it involves hox genes.

A quick overview of hox genes. These are genes which control the expression of other genes, and which are involved in determining how an organism’s body develops. You (and monkeys and mice and fruit flies and earthworms) have hox genes–pretty much the same hox genes, in fact–that represent an overall “body image plan”. They do things like say “Ah, this bit will become a torso, so I will switch on the genes that correspond to forming arms and legs here, and switch off the genes responsible for making eyeballs or toes.” Or “This bit is the head, so I will switch on the eyeball-forming genes and the mouth-forming genes, and switch off the leg-forming genes.”

Mutations to hox genes generally cause gross physical abnormalities. In fruit flies, incorrect hox gene expression can cause the fly to sprout legs instead of antennae, or to grow wings from strange parts of its body. In humans, hox gene malfunctions can cause a number of really bizarre and usually fatal birth defects–growing tiny limbs out of eye sockets, that sort of thing.

And it appears that a hox gene mutation can result in obsessive-compulsive disorder.

And more bizarrely than that, this hox gene mutation affects the way microglia form.

Think about how bizarre that is for a minute. The genes responsible for regulating overall body plan can cause changes in microglia–little amoeba scavengers that roam around in the brain. And that change to those scavengers can result in gross high-level behavioral differences.

Not only are we not in Kansas any more, we’re not even on the same continent. This is absolutely not what anyone would expect, given our knowledge of the brain even twenty years ago.

Which brings us back ’round to da Vinci.

Right now, most attempts to model the brain look only at the neurons, and disregard the glial cells. Now, there’s value to this. The brain is really (really really really) complex, and just developing tools able to model billions of cells and hundreds or thousands of billions of interconnections is really, really hard. We’re laying the foundation, even with simple models, that lets us construct the computational and informatics tools for handling a problem of mind-boggling scope.
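To give a sense of how simple those neuron-only models are, here’s a toy sketch–entirely my own illustration, not code from any actual brain-modeling project–of a “leaky integrate-and-fire” neuron, one of the standard simplified units such simulations are built from. Every parameter value here is made up for illustration:

```python
# Toy sketch of a leaky integrate-and-fire neuron: the kind of
# stripped-down, neuron-only unit most current brain models use.
# No glial cells anywhere in sight. All parameters are illustrative.

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-65.0):
    """Simulate one neuron over a list of input samples; return spike times."""
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        # Membrane voltage leaks back toward rest while integrating input.
        v += dt * ((v_rest - v) + i_in) / tau
        if v >= v_thresh:
            # Threshold crossed: record a spike, then reset the voltage.
            spike_times.append(step * dt)
            v = v_reset
    return spike_times

# A constant suprathreshold input produces regular, repeated spiking.
spikes = simulate_lif([20.0] * 200)
```

Note how little is going on: one state variable per cell, a leak, a threshold. A whole-brain model built from units like this has billions of them wired together–and it still leaves out everything the astrocytes and microglia are doing.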

But there’s still a critical bit missing. Or critical bits, really. We’re missing the computational bits that would allow us to model a system of this size and scope, or even to be able to map out such a system for the purpose of modeling it. A lot of folks blithely assume Moore’s Law will take care of that for us, but I’m not so sure. Even assuming a computer of infinite power and capability, if you want to upload a person, you still have the task of being able to read the states and connection pathways of many billions of very small cells, and I’m not convinced we even know quite what those tools look like yet.

But on top of that, when you consider that we’re missing a big part of the picture of how cognition happens–we’re looking at only one part of the system, and the mechanism by which glial cells promote, regulate, and influence high-level cognitive tasks is astonishingly poorly understood–it becomes clear (at least to me, anyway) that uploading is something that isn’t going to happen soon.

We can, like da Vinci, sketch out the principles by which it might work. There is nothing in the laws of physics that suggest it can’t be done, and in fact I believe that it absolutely can and will, eventually, be done.

But the more I look at the problem, the more it seems to me that there’s a key bit missing. And I don’t think we’re in a position yet even to figure out what that key bit looks like, much less how it can be built. It may well be that when we do model brains, the model won’t look anything like what we think of as a conventional computer–much as, when we built general-purpose programmable devices, they didn’t look like Babbage’s difference engines at all.

1 Or would be, if it weren’t for the fact that he rejects personhood theory, which is something I’m still a bit surprised by. If I ever have the opportunity to talk with him over dinner, I want to discuss personhood theory with him, oh yes.