Dreaming of Transhumanism Remixes

Last night, I had a very long, incredibly detailed, and incredibly high-resolution dream about Battlestar Galactica.

Well, kinda sorta.

This isn’t actually a post about BSG, though the show is definitely a springboard for it. I liked the show a great deal, but in truth didn’t much care for the take-away lesson from the last episode, which I felt ended on a very science-negative, tech-negative, anti-transhumanist note.

Okay, so they wanted to start with a clean slate. I get that. They wanted to make a different decision, to break what they believed was an unending cycle of violence. I get that, too.

Still, I expected more from the show than Lee’s “no city, no ships, let’s fly the entire fleet into the sun, then let’s go down to the native, pre-stone-age, pre-language indigenous population and live like they do” speech. After the uniformly high quality of the writing up to that point, the “technology is the problem, let go and let God” ending struck me as jarring, preachy, and unnecessarily trite.

And speaking of preachy, resolving the show’s unanswered questions the way they did seemed a little silly. What was the new Kara? An angel from God. What was the Six in Baltar’s head? An angel from God. What was the destiny of the Cylons? Destruction by God, via the hand (literally) of a dead Raptor crew. C’mon, guys, we’ve invested years in this show. Don’t we deserve a better payoff?

The ending of the show, in which the remaining humans renounced technology in favor of the path of the “noble savage,” really seemed to play on anti-intellectualism to me. The notion that the recurrent cycle of human-on-Cylon violence was writ inevitably into the human story in particular galled me; it seemed to accept as a given that late-20th-century Western notions about robotics and AI, and assumptions about the role sentient computers would play in society, were the only possibility.

Apparently, my subconscious agreed.

So, on to the dream. My subconscious mind apparently decided to do a complete rewrite of the last episode, one more focused on looking ahead rather than looking behind.

It started with getting the Galactica (or at least most of it) on the ground in one piece. I have no idea how that would happen, so I’ll just handwave it away, a noble tradition in mainstream science fiction.

From there, things took a turn for the weird.

My dream did take some liberties with the technical capabilities of the ship, specifically with regard to biomedical tech.

Something I think is eminently reasonable, by the way. I mean, seriously, c’mon. We have a society with faster-than-light travel and strong AI and they can’t cure cancer? WTF is up with that?

I will never, ever understand why mainstream space sci-fi always seems to assume that wonderful advances in technology and science don’t apply to basic biology or medicine. Somebody, please explain to me: why, oh why do we see countless stories and shows in which people fly through space faster than light, teleport around, and visit other dimensions, yet still can’t cure a simple viral infection, much less cancer?

We see all this tech, and somehow in this storyline the state of medicine stalled permanently in 1982? I’ve got an iPhone in my pocket that is more miraculous than any of the gadgets the Star Trek writers could conceive, and medical technology is advancing faster than any other form of knowledge save computer tech, but somehow, centuries hence, when we have interstellar flight and people take planet-hopping for granted, folks will still only live to about 70 or so and will be felled by swine flu? Jesus H-for-hypothetical Christ on a pogo stick, that’s ridiculous!

But I digress.

Anyway, the one area of tech I assumed would be available is some form of basic suspended animation. With Galactica safely ensconced in bedrock someplace without a lot of tectonic activity, we’re set to do something a little more reasonable to help try to break the cycle of “all of this has happened before, all of it will happen again”: intelligent, thoughtful planning.

The basic premise is straightforward: This indigenous, pre-language, pre-literate society is quite likely, if it survives, to develop technology, and once that happens, it tends to follow an exponential upward curve. So if you’re convinced that technology will inevitably cause the cycle to repeat, and you also have a sapient species, well, you’re in a bit of a pickle…

…unless somehow you can use your knowledge of what’s to come to steer away from the iceberg, y’know?

So in the dream, I and a handful of the crew resolved to do exactly that, sleeping in suspended animation in shifts for generations at a time and then coming out to apply a few nudges to society to steer it away from the attitudes that lead to creating sapient slaves and then having them rebel.

One of the key philosophies in that job–something that we see the characters (and possibly the writers) in Battlestar Galactica groping toward but never quite reaching–is the idea of personhood theory.

Personhood theory at its most simple is just the idea that if something reasons and thinks, it’s a person. Human beings have the ability to think; that makes human beings (including those to whom various societies throughout history have denied personhood–blacks, Jews, women, and so forth) persons.

Sapient robots reason; that makes them persons.

Intelligent computers reason; that makes them persons.

Clones of human beings reason; that makes them persons.

You get the idea. It’s not a terribly complex concept.

Persons, whatever form they may take, have certain rights, simply by virtue of being persons. You don’t, for example, treat persons as property. Slavery isn’t cool. A reasoning machine? A person, and therefore, not a slave.

Cycle broken.

In my dream, those of us left from the Galactica’s original crew nudged and urged and cajoled the developing societies on Earth toward personhood theory, throughout history, remaining in suspended animation for long periods of time and then, during times of rapid social or cultural change, coming out to try to move things along in that direction.

There was, I recall very vividly, always that little bit of fear when you went under. You’d lie down in a dark chamber barely large enough to accommodate you, and as the lid hinged shut and you started feeling sleepy and disoriented, you’d always wonder in the back of your mind if something would go wrong and you’d end up going from suspended to dead. If things went well, the chamber would close, you’d pass out, and then an instant later it’d open up again and someone would help you out, and many decades would have gone by; but if things didn’t go well, there was always the possibility that the lid would shut and that’d be it.

During the Industrial Revolution, I recall working on a system to power what was left of the Galactica on fuel oil rather than tylium, which was a huge relief, because it meant not having to nursemaid every single watt of power all the time. After that was all over and society had moved to a post-industrial phase, I seemed to spend a lot of time on college campuses.

I’m not entirely sure if we succeeded or not, because I was yanked out of all this by the very sharp teeth of a very insistent cat who chose that moment to start biting my nose. I do think, though, that my answer has a bit more going for it than “God done it.”

Maybe that’d make an interesting premise for an ongoing series of science-fiction novels.

56 thoughts on “Dreaming of Transhumanism Remixes”

  1. To nitpick:

    I haven’t dealt with this end of philosophy in a while, so I may be off a bit, but if I remember correctly, “Personhood Theory” is more of the field of study on what criteria make something a “person”. The theory that “the ability to reason makes something a person” seems to be based on Descartes’ postulate in “Discourse on the Method”, “Cogito, ergo sum”, or, “I think, therefore I am”.

    This philosophy, like all philosophies, has some rather large moral holes in it: What about young children? Fetuses? Old people? Sleeping people? Drunk people? Comatose people? People in an altered state of consciousness? Senile people? Sociopaths? Orangutans and dolphins? Also, what is reason? There are many, many circumstances where exceptions to the rule must be made, and, of course, instances where the rule has been bent to the prejudices of the time.

    Yep. Putting off taking my “Family Values” Philosophy final.

    • Personhood theory in this context is a very specific construct of bioethics; I can’t speak to whether or not any other ethical system has ever gone by the same name.

      It’s applied to classes, not to individual states. Humans as a class can think and reason; ergo, individual human beings, who are members of the class of humans, are persons. The notion of personhood isn’t something that’s applied and revoked, applied and revoked, to a single individual; a sleeping human is still a member of the class of human beings and is still a person.

      A human corpse, on the other hand, is not; corpses can’t think and reason, and being a corpse, unlike being asleep, is not a transitory state.

      If you’d like to explore personhood theory in more detail, I strongly recommend the book Citizen Cyborg, by James Hughes, which lays it out in more detail and at greater depth than what’s possible in one LJ reply.
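
      For what it’s worth, the class-versus-state distinction maps neatly onto code. Here’s a minimal Python sketch (my own illustration, with invented names; it isn’t from Hughes’s book):

      ```python
      # Hypothetical sketch: personhood attaches to the class, not to an
      # individual's momentary state. All names are invented for the example.

      class Human:
          """Humans as a class can think and reason."""
          CLASS_CAN_REASON = True

          def __init__(self, awake=True):
              self.awake = awake  # a transitory state, irrelevant to personhood

      class Corpse:
          """Corpses as a class cannot reason, and the state is not transitory."""
          CLASS_CAN_REASON = False

      def is_person(entity):
          # The test consults the class, so a sleeping human is still a person.
          return type(entity).CLASS_CAN_REASON

      print(is_person(Human(awake=False)))  # True: asleep, but still a person
      print(is_person(Corpse()))            # False: not a transitory state
      ```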

  2. Personhood theory at its most simple is just the idea that if something reasons and thinks, it’s a person.

    Okay. My computer very emphatically can think and reason — better than I do. I certainly can’t beat it in a game of chess.

    Does that mean that I’m a murderer if I turn it off and don’t turn it on?

    • If you define “thinking and reasoning” as “playing chess.” However, the ability to play chess, as it turns out, actually has nothing to do with reason or thought. Chess can be mechanized as a series of heuristics applied to a game tree–and embarrassing as it is to be a human, things which mechanize the process do it better than things which think and reason.

      • You’ve tossed out heuristics applied to a (game) tree as thought?

        That’s probably thrown out a LOT of things people think of as thought: Planning a trip is usually done by heuristics applied to a (complex) tree.

        Mind if I try a different solution?

        “Thinking and reasoning” is not binary; it’s fuzzy and multidimensional.

        Some things vastly outpace us in some of those dimensions: Calculators might be fast at, well, calculation. They have no ability to learn or gather data on their own.

        Many chess-playing machines can learn from their mistakes; they have more ability to think and reason. But chess-playing programs are very specialized in their knowledge, and can’t apply their ideas to the real world.

        Something like Cyc comes closer to personhood.

        In my opinion, different people will reasonably differ on when machines have gained personhood, because they’ll have different definitions.

        Does that make sense?

        • You’ve tossed out heuristics applied to a (game) tree as thought?

          The heuristics used by a chess program are developed by a human, not by the chess-playing program itself. The chess-playing program is neither reflexive nor contemplative; it parses a tree, weighs the numerical value of each node on the tree according to a heuristic hard-coded in by the human programmer that wrote it, and then chooses the path through the tree that results in the greatest value. No more thoughtful than a pocket calculator, I’m afraid.

          When a program becomes reflexive and able to generate its own heuristics, then we’ll talk. ‘Til then, no, your computer neither thinks nor reasons, I’m afraid.
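
          To make that concrete: the mechanism I’m describing is plain minimax search over a game tree, with a human-authored evaluation rule at the leaves. A toy Python sketch (my illustration, not any real engine’s code):

          ```python
          # Toy minimax over a hand-built tree. The "thinking" is exhaustive
          # tree-walking plus a scoring heuristic hard-coded by a human.

          def evaluate(state):
              # The hard-coded heuristic; in chess this might be material count.
              return state.get("score", 0)

          def minimax(state, depth, maximizing):
              children = state.get("children", [])
              if depth == 0 or not children:
                  return evaluate(state)
              if maximizing:
                  return max(minimax(c, depth - 1, False) for c in children)
              return min(minimax(c, depth - 1, True) for c in children)

          tree = {"children": [
              {"score": 3},
              {"children": [{"score": -2}, {"score": 7}]},
          ]}
          print(minimax(tree, depth=2, maximizing=True))
          # 3: the machine avoids the branch where the opponent could force -2,
          # even though a 7 sits in that branch. No reflection anywhere.
          ```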

          • The heuristics used by a chess program are developed by a human, not by the chess-playing program itself. The chess-playing program is neither reflexive nor contemplative; it parses a tree, weighs the numerical value of each node on the tree according to a heuristic hard-coded in by the human programmer that wrote it, and then chooses the path through the tree that results in the greatest value. No more thoughtful than a pocket calculator, I’m afraid.

            The neurons of a human are developed by evolution, not by the human itself. The neurons of a human brain weigh the inputs they’re given according to a heuristic that has developed, and then choose whether to fire or not based on those inputs. Much less thoughtful than a pocket calculator.

            When a program becomes reflexive and able to generate its own heuristics, then we’ll talk. ‘Til then, no, your computer neither thinks nor reasons, I’m afraid.

            But they can and do!

            I wrote an evolutionary algorithm to find good ways to discover whether a number is prime. The program(s) never knew what it/they was/were searching for; all that it/they knew was that it/they was/were given numbers and asked to say “yes” or “no”.

            After much searching, it/they found its/their own heuristics.

            Was it murder for me to shut off that program?
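
            Rather than paste the whole Java source here, here’s a quick Python sketch of the same general idea (a hedged reconstruction, not my actual program; all details invented). Candidates are sets of trial divisors, and they’re only ever graded on their yes/no answers:

            ```python
            import random

            # Sketch of an evolutionary search for a primality heuristic.
            # Candidates never "know" the goal; they just get graded yes/no.

            TRAIN = list(range(2, 200))

            def truth(n):  # used only to grade answers, never shown to candidates
                return all(n % d for d in range(2, int(n ** 0.5) + 1))

            def predict(divisors, n):  # "yes" if no trial divisor hits
                return all(n % d for d in divisors if d < n)

            def fitness(divisors):
                return sum(predict(divisors, n) == truth(n) for n in TRAIN)

            def mutate(divisors):
                child = set(divisors)
                child.symmetric_difference_update({random.randint(2, 20)})
                return child  # toggle one trial divisor in or out

            population = [{random.randint(2, 20)} for _ in range(20)]
            for generation in range(200):
                population.sort(key=fitness, reverse=True)
                population = population[:10] + [mutate(p) for p in population[:10]]

            best = max(population, key=fitness)
            print(best, fitness(best), "/", len(TRAIN))  # e.g. {2, 3, 5, 7, 11, 13}
            ```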

          • You’re certainly welcome to treat your computer as a person, if you so wish; don’t let me stop you. Me, I’ll wait ’til a computer displays the general sort of intelligence one typically associates with thought. 🙂

            A neuron behaves heuristically, but a neuron is not a person. I think you just made a point that’s the opposite of the one you intended.

          • “Me, I’ll wait ’til a computer displays the general sort of intelligence one typically associates with thought.”

            Most people are pretty easy to fool using a Turing Test, and some just plain don’t care — they are creeped out at the sight of piano-playing robots, but they pour their souls’ troubles into Eliza.
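
            (For anyone who hasn’t looked under Eliza’s hood: the whole mechanism is keyword patterns plus pronoun reflection. A minimal Python sketch of that style of program follows; it’s illustrative, not Weizenbaum’s original.)

            ```python
            import random
            import re

            # Minimal Eliza-style responder: regex rules plus pronoun reflection.

            REFLECT = {"i": "you", "me": "you", "my": "your", "am": "are"}

            RULES = [
                (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
                (r"i am (.*)", ["Why do you say you are {0}?"]),
                (r"(.*)", ["Tell me more.", "Why do you say that?"]),
            ]

            def reflect(text):
                return " ".join(REFLECT.get(w, w) for w in text.split())

            def respond(line):
                line = line.lower()
                for pattern, answers in RULES:
                    m = re.match(pattern, line)
                    if m:
                        groups = (reflect(g) for g in m.groups())
                        return random.choice(answers).format(*groups)

            print(respond("I feel nobody listens to my troubles"))
            # e.g. "Why do you feel nobody listens to your troubles?"
            ```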

          • There are two things:

            First:
            A human neuron is not a person. Okay…

            According to Wikipedia*, there are (from 5 to 10) × 10^10 neurons in the human brain. Let’s assume 10 × 10^10.

            Let’s also assume that a human being with no working neurons does not have personhood.

            Assume that I’m an evil doctor. I operate on a normal human, who everyone agrees is a person.

            I remove one neuron, chosen at random from her brain.

            Is she still a person?

            What if I repeat the operation once?

            What if I repeat the operation 10 × 10^10 times?

            From what I’ve read, you seem to think that personhood is an on or off property — either something is a person, or it is not. (Let me know if I am not correct in this assumption.) By that definition, eventually one operation, removing exactly one neuron, would shift her from having to not having ‘personhood’.

            I strongly believe the opposite: that personhood is ‘fuzzy’, that someone might be 99.5% person, or 0.005% person.** That’s all that I’m trying to get across.

            Second:
            You said before that something that develops its own heuristics has personhood.

            I gave an example of a large set of programs that do develop their own heuristics. (If you want, I’ll send you the Java source code.)

            Is it murder to stop that program from running?

            * I’m just using it for a number; any arbitrary number would do.

            ** Okay — I believe that personhood is a composite of multiple traits that interact.
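
            Here’s the kind of thing I mean, as a toy Python model (traits and weights invented for illustration): personhood as a weighted score over several capacities, so removing one neuron nudges a gradient rather than flipping a bit.

            ```python
            # Toy model: degree of personhood as a weighted sum of trait scores
            # in [0, 1]. The traits and weights are invented for illustration.

            TRAITS = {"reasoning": 0.4, "self_model": 0.3,
                      "learning": 0.2, "communication": 0.1}

            def personhood(scores):
                """Return a degree of personhood in [0, 1], not a yes/no bit."""
                return sum(w * scores.get(t, 0.0) for t, w in TRAITS.items())

            adult = {"reasoning": 1.0, "self_model": 1.0,
                     "learning": 1.0, "communication": 1.0}
            chess_program = {"reasoning": 0.3, "learning": 0.2}

            print(personhood(adult))          # 1.0
            print(personhood(chess_program))  # 0.16: some degree, far from full
            ```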

          • First:

            With each neuron removed, you impact the brain’s ability to function. Eventually, you’ll reach the point where you’ve impacted the brain’s function so much the person will die or be cognitively crippled–a condition we call “brain death.” Dead humans aren’t people.

            Second:

            “You said before that something that develops its own heuristics has personhood.”

            Nope, I didn’t. I said that things which follow pre-programmed heuristics and don’t develop their own heuristics don’t have personhood–a whole ‘nother thing altogether.

          • First:

            With each neuron removed, you impact the brain’s ability to function. Eventually, you’ll reach the point where you’ve impacted the brain’s function so much the person will die or be cognitively crippled–a condition we call “brain death.” Dead humans aren’t people.

            Of course, that’s right. But here’s the question:

            Is it possible for a mad scientist to stop one neuron short of brain death? Is there some neuron that, before I take it out, the subject has personhood — and after I take it out, the subject does not have personhood?

            If that’s not possible, why not?

            Second:

            You said:
            When a program becomes reflexive and able to generate its own heuristics, then we’ll talk.

            There’s a large class of programs that generate their own heuristics. I mentioned evolutionary algorithms before. (New link.)

            Do they have personhood? If not, why not?

            Third:
            Let me know if this conversation is bugging you.

  3. I don’t really understand why this concept is so threatening to so many people. The idea that something that is *different* from me (aliens, robots, animals) could somehow also be *equal* to me (in terms of rights or sentience) doesn’t frighten me at all.

    I don’t quite grasp why some people hold onto this idea of “humanity” as the end-all-be-all, that we must halt our own progress because then we won’t be “human” anymore, that we must exclude all others who are not “human”.

    I do understand that change is frightening … I’ve been known to resist the unknown a time or two myself. But I don’t understand the idea that any change at all is automatically a negative and therefore must be stopped.

    We can’t improve our hearing beyond our “natural” abilities because then we wouldn’t be “natural” or “human” anymore. What about the fact that our brains are “natural” and it’s our brains that gave us the ability to develop the technology to improve ourselves in the first place? We’re not circumventing evolution or “God’s Plan”, we are acting on it by utilizing the big thinking, reasoning brains that evolution (or “God’s Plan”) gave us.

    This doesn’t give us license to start developing technology completely willy-nilly to the detriment of people, or the planet, or animals, or whatever. Another unique quality of our brains is that we alone, of all of “nature”, can *choose* our actions based on the consequences, even to the detriment of our own selves or species (your previous post on that subject very eloquently spoke on that topic).

    So I think that, because of this trait, the idea that we might somehow change what a “human” is because of technology, not because we started growing tails or better eyes, shouldn’t be a frightening thought. Because we *can* self-regulate, which is something that no other organism in nature has the ability to do. And that makes the future, to me, full of potential, not full of fear.

  4. I was hungry to find out why the Cylons came back to torment the humans in the first place. In my version, the Cylons wanted access to their own records, all the technology build-up that led to their own creation. When they’d analyzed everything they could and still couldn’t figure out what made them tick, they started experimenting on the humans, like Nazi doctors.

    The human/Cylon hybrid was sort of played with from both angles: Hera having a human mother, and the Base Ship hybrid having the Cylon mom. But the writers just couldn’t figure out what was important about these figures, and left it for a few hundred millennia for another tech civ to tangle with.

    Any version of Earth where they settled down and re-formed New Caprica would have been unsatisfying.

    Maybe if they’d gotten a land shark candygram from the rest of the Cylons saying “all is forgiven, please come home” and settled down to live as equals? I dunno, the series was painted into a corner the moment Earth turned out to be a burned-out cinder. They could have ended it there and been no more disappointing. At least it would have made some kind of sense…

  5. I was disappointed, as well, by the way they wrapped things up, but not entirely surprised. Early on, BSG made it clear that AI was a very bad idea. To their credit, they did attempt to make a case for the rights of intelligent machines, but even then, they did so reluctantly, as though there was little choice, now that the “things” had been made.

    I’ve also wondered why Star Trek decided to ignore the probable future applications of life-extension research. Realistically, they should have eliminated aging and death. But, like BSG, they’re afraid of the big, bad “brave new world.” The Borg are dangerous automatons. Nanotechnology cannot be safely harnessed for use in and by humans (I applaud Stargate for the great technology they positively incorporate into their reality—notably not the future but present day). People still deal with all the consequences of getting old—diminished mental functioning, aching joints, wrinkles, sluggishness, grey hair, etc. But, again, as with BSG, it’s not surprising. Gene Roddenberry made it clear, early on, what he did and didn’t approve of. He saw any efforts to create AI as impractical and rife with problems. This is why Data was forever doomed to struggle towards more fully realized “humanity” and never quite reach it. It’s a lesson. And, sadly, most people get it.

  6. Funny how this maps to a recent Charlie Stross blog post at Tor, about where he gets his ideas from (summary: ideas are easy; using them…)

    The idea of something hidden nudging and shaping mankind’s development isn’t new; it could range from Asimov’s Eternals (The End Of Eternity) or Second Foundation, through to a single immortal (my memory is telling me Zelazny did something in this vein; heck, even Highlander – there was only one – hinted at this).

    Similarly the idea of “personhood” is common in SciFi. A number of authors work around the emotional attachment involved in “human” or “person” by using the phrase “sapient” or similar; a good example would be Brin’s Uplift series. Heh, the classic James White “Hospital Station” books even joke that every race considers itself “human”, and that’s why the doctors require the patient’s physiological taxonomy.

    Something I do find under-examined in fiction is the “trust” aspect; whether it’s suspended animation or a stasis field or just plain simple anesthesia, you’re trusting someone else (or Murphy!) with your existence.

    So ideas… easy; it’s what you do with them that makes a good story!

  7. Personhood theory contradicts the notion of a unique immortal soul that allows people to treat animals, things, and other people like shit. I suspect people are loath to give up such a fabulous tool that has its primary use in making people feel better by denigrating everything else that can’t easily defend itself.

    So, you’re kinda fighting against a huge tradition.

    If we can really, REALLY convince people that they are okay just the way they are, that might be step one. This allows them to (if they wish) avoid belief structures which depend on an inherent imperfection/flaw that must be corrected. Without this need to be inhuman, they’ll be less motivated to PRETEND they’re better by degrading all around them, and that opens the door to the attitude that there can be persons other than them.

    Naturally, this isn’t the ONLY solution, but it’s a helluva hill that’s going to need to be climbed before people are willing to buy into the idea that they are pretty much as unique and wonderful and brilliant as, well, anything else.

    Maybe you might have luck starting out with a primitive Animist tribe and just killing anyone who opposes them. At least they’d start off on the right foot.

    • has its primary use in making people feel better

      Actually I think its primary use is to make people feel immortal… and that’s even harder to give up. The frustrating thing is that in clinging to some (most likely false) sense of immortality, they end up fighting against technologies which could give them a real kind of immortality.

      • I don’t see actual evidence of this. I mean, first of all, I’d guess that 90% of the people that subscribe to this actually don’t believe it — they’re only in it for the short-term goal of feeling better.

        Secondly, “immortality” is too ephemeral to people. “Feeling good” is something you can get a handle on. As evidence of this I present practically every piece of fiction ever written about how immortality sucks (millions) versus pieces of fiction about how wonderful immortality is (mmmmmmaybe two).

        They CLAIM it’s about immortality, though, which, as you’ve pointed out, fails once they’re faced with REAL chances at immortality. Once faced with REAL chances at immortality, rather than embrace it, they reject it — because it wasn’t ACTUALLY immortality they sought in the first place.

        People who ACTUALLY seek immortality are working hard to make it happen.

        • You might be right, as this is largely a projection on my part. When I believed I had a soul that would be reincarnated it was a direct cushion against the void. I really can’t imagine why having a soul would be particularly interesting otherwise. But then, I don’t think my perspective is exactly mainstream.

  8. i too found the ending extremely disappointing and annoying for similar reasons. it also didn’t seem remotely plausible. few of the characters behaved in ways that were in keeping with their personality. in particular it seemed very implausible that after having spent so long struggling to survive, form community, build bonds, etc. that everyone would say “oh, let’s all split into a bunch of tiny groups and scatter ourselves around the globe and never see each other again!” not to mention that most of them were not survivalists/hunter-gatherers, so saying that it made sense because they’d be more likely to survive also seems questionable. i don’t know how to survive in the wilderness so i would sure as hell want the largest pack of people possible to travel with.

  9. As much as I was eye-rollingly disappointed by the ending (IMHO, if you make “Hand of God” the explanation, it’s fantasy, not sci-fi, I don’t care *how* much advanced tech you have), I did have sort of a bwa-ha thought as it wrapped up: “Well, instead of the usual Deus ex machina ending, it was a Machin(*) ex De(*)” ending. (Insert appropriate declensions where the asterisks are; I’ve forgotten my Latin.)
