Some thoughts on rights, humanity, and what it means to be a person

On another forum I read, a conversation has arisen about whether or not people have “rights,” and what it means to have “rights.” Like many Americans, I believe that people do, simply as a consequence of being people, have certain inalienable rights; among these are the rights to life, liberty, and self-determination, and the right to believe and express themselves as they desire, so long as they do not infringe on these same rights in others. I believe these rights are immutable; they are a consequence of being a person, and are not granted by the state or by any other entity or power. A state or other entity can take them away, but it cannot grant them.

But do I believe these things simply because I’m an American, and I’ve been brainwashed into believing them? Well, no.


There’s no question that a person’s social, political, and moral ideas and values are socially informed, and that people can and do absorb many of those ideas from the society around them.

I wouldn’t go so far as to say that I believe what I do because I’ve been “brainwashed” into believing it, however. There are many cultural and social values, held by a great many Americans and just as firmly inculcated into people here, which I reject; the evidence suggests that cultural brainwashing doesn’t work too well on me. 🙂

More to the point, a person who holds ideas about rights simply because he has been told that “rights are good” is unlikely to think too deeply about the implications of those rights; a person who is simply repeating American cultural ideas about innate human rights is unlikely, for example, to see the contradiction between those values and the idea that it is OK to tell gays and lesbians that they cannot marry.

In fact, I think these ideas have often been enshrined in America more as vague theories than as matters of political and social reality. Even the very people who first articulated these ideas as a framework for American society did not really believe them, or at least did not follow their own arguments through to their logical conclusions; Thomas Jefferson, who believed that “all men are created equal, endowed by their Creator with certain inalienable rights,” kept slaves.

When you do sincerely hold these beliefs and follow them through to their logical conclusion, as I do, you end up in territory that diverges radically from the reality of American society and makes a lot of people uncomfortable. I’ll get to that in a minute; for right now, suffice it to say that these ideas are held by Americans only in an abstract, theoretical way, rather than as a matter of real truth.

I do believe that a great many Americans do simply parrot back what their civics teacher told them about “rights” without thinking through what that means or what the implications of those beliefs are. I don’t think I’m one of them, and let me tell you why…

Traditionally, the rest of the world has not accepted the American view of rights as an inalienable and irrevocable consequence of being a person. One person on the list I belong to wrote, for example:

Here’s how the English tradition views “rights”.

Rights are made up. They are not physically real. They exist at the same level of reality as “marriage”, “soul mates”, “demonic possession”, “Friday the 13th” and “Christmas”. To define what that level of reality means, they exist ONLY in conversation, and their impact on physical reality shows up ONLY as patterns of behaviour exhibited by people who believe in them.

I can’t speak to whether or not this is an accurate representation of the English position on basic rights, not being familiar with English culture. I will say that in a philosophical sense, this is correct. The notion that people have rights is not a truth in the same sense that gravity or the electromagnetic force is a truth; it is not subject to empirical measurement, and it is not an immutable force by which the universe operates. The notion of rights is fundamentally a moral issue, and its effect on reality is a consequence only of the fact that people who hold these ideas tend to behave in a certain way.

However, there are practical consequences to holding the idea that “all people have certain rights to which they are entitled simply as a consequence of being people.”


Fundamentally, I’m a pragmatist. I’m not spiritual at all, and I am not interested in the notion that rights are divinely ordained, or that God said we should behave thus-and-such a way, or that it is in our spiritual best interest to do thus-and-such, or whatever. I hold these ideas because there are practical consequences to them, and practical consequences to not holding them.

I believe that a society which enshrines the notion that certain rights belong by entitlement to all people is better than a society that does not, and I believe that “better” in this case can be demonstrated practically as well as morally.

There’s a very easy demonstration of this idea, which I like to use as an example whenever people suggest that the rights of self-determination are not an intrinsic part of being a person, or that it is acceptable for one group of people, for whatever reason, to take this self-determination away from another group.

It used to be that American society held the idea that blacks are inherently inferior to whites. So much so that for the first part of American history, we actually believed that blacks were not even people at all, but were property, no different from any other beasts of burden. Even after we got past that idea, many people still held (and some people still hold today) that blacks are not as smart as whites, and so educating blacks is a waste of effort; why do it since they lack the cognitive tools to benefit from it?

Yet the first person to pioneer open-heart surgery was black.

Had this person not been able to go to medical school, had he been prevented (as many other people were) from pursuing a career in medicine on the theory that blacks simply were not intelligent enough for it, then the pioneering contributions this man made–contributions which benefitted all of society–would have been denied us, or at least delayed. (Sure, had he not done this, someone else would probably have done so, eventually–but there’s no way to know how much time would have been lost.)

When a society arbitrarily decides that one particular group of people is, for whatever reason, not entitled to the basic rights of self-determination, that society denies itself the benefits that members of that group can contribute. These benefits are not theoretical; like open-heart surgery, they are tangible. A society which denies the notion of rights to some of its members suffers a quantifiable loss as a result.


I think it’s safe to say that people in America are fairly uniform in paying lip service to the concept of fundamental rights, but they don’t really think about those rights much, and they don’t allow that lip service to get in the way of their own particular prejudices. While Americans talk the talk, the reality of American history is that securing these rights has required, so far, at least two major civil rights movements, one war, and a great deal of bloodshed. And we’re still not all the way there yet.

This is a problem, both for moral and for practical reasons. The abstract, philosophical notion of “rights” has very real and practical benefits, as I’ve mentioned already.

There’s also the flip side to that, which is the moral value of rights. Morally, the notion of rights is predicated on mutual empathy and on recognizing ourselves in others. A person who thinks it is acceptable for him to deprive others of their rights must, if he follows his own argument to its end, accept that it is OK for someone stronger than he is to do the same to him. (I’ve been watching the Saddam Hussein trial… Can you believe the chutzpah of this guy? He’s complaining that the trial is violating his rights!)

The practical consequences of believing in the idea of intrinsic rights are manifold; such beliefs create a society that can benefit from the contributions of all its members, and they create a framework of enlightened self-interest which says that ultimately, when I respect the rights of others, I am respecting my own rights as well, because anything I can do to others that deprives them of these rights, someone else can do to me.

These are not platitudes or vague woo-woo ideas; they have real-world, nuts-and-bolts, practical ramifications. Ultimately, at the end of the day, I believe that holding these ideas about rights for everyone improves my life in real and practical ways.


Ideas, even abstract ideas, have power. The more people who share those ideas, the more power they have. When you accept the idea that people do have rights and that those rights are real and must be respected, you join forces with everyone else who shares those ideas. When a society embraces those ideas, that society becomes stronger.

It is no accident that if you look at world history, you consistently see, over and over again, that the dominant and most powerful nations on earth are the ones that are the most progressive. The Roman Empire was among the first to extend the notion of “citizenship” to the inhabitants of the lands it conquered; Rome was imperialistic and militaristic, but part of what allowed it to consolidate and stabilize its empire was its practice of gradually extending citizenship, with all the rights and privileges that went with it, to the peoples of the territories it occupied. And while the medieval Western world was mired in backward, reactionary religious ideology, the dominant powers lay to the east; Persia, centuries earlier, had been among the first empires to codify a notion of “rights” into law, and one of the first examples of an empire governed as a meritocracy.

The idea of rights is a powerful one. Societies which embrace it gain practical benefits from it, which gives them a competitive advantage over societies which do not. In about a hundred and fifty years, America rose from a backward colony with no technological infrastructure to a world superpower…in part because of an abundance of natural resources, to be sure, but also in part because it embraced self-determination, and self-determination, though it may be an abstract value, creates prosperity.


America is hardly faultless. In fact, sometimes this country falls short of its ideals in ways that are flat-out embarrassing, particularly under the current Administration. Nonetheless, the fact remains that enshrining these rights is one of the things that have led this country to its position as the world’s only superpower. These rights not only make the lives of the people who hold them better, in ways that cannot be quantified (self-determining people are happier than slaves, for example), but also create societies which are more prosperous and more powerful.

The place where it gets tricky is in understanding what is meant by a “person,” and this is where America has historically fallen flat in enshrining the notion that people have rights. It is also where I diverge radically from common contemporary thought about what it means to have rights, and it leads into an area of moral theory that bioethicists call “personhood theory.”

Early in American history, many people advanced the notion that Negroes belonged to a different species than Caucasians. This is, of course, a ridiculous and kind of asinine assertion to make; the fact that blacks and whites can interbreed and produce fertile offspring is proof that we are all the same species *by definition*. But the argument was made nonetheless, because it needed to be made in order to justify the institution of slavery. The alternative, that blacks and whites are both people, leads to a contradiction: if we are all people, and all people have rights, slavery becomes indefensible.

One of the core problems that people face when they begin talking about “rights” is separating entities which have rights from entities which do not. Even the most hardcore advocate of the notion of rights would not say that an automobile or a rock has rights; some people believe that animals have rights, though they stop short of saying that these rights are the same as the rights invested in people (not even PETA says that hamsters should be allowed to vote, for example!).

Now hold on to your hat, Dorothy, because Kansas is about to go bye-bye…


Personhood theory says that rights exist in any entity which has self-awareness, the intellectual capacity to understand itself and the implications and consequences of its actions, and the ability to make informed, moral choices about those actions. To date, that means human beings, and it’s no accident that people talk about “human rights” when they talk about rights.

But there is no reason to believe that this will always mean human beings.

One of the implications of personhood theory is that any entity which meets these criteria has rights. That means, for example, that if we were to discover some species of sapient being other than Homo sapiens in some far-off, unexplored forest, that species would have rights. It means that if we ever create a true, “strong” artificial intelligence, that AI has rights. It means that a sapient alien species, if such a thing exists, has rights. It means that a person’s consciousness and sense of self transferred into another form, such as a robot or a computer (if such a thing is possible), has rights. It means that a person’s brain, preserved alive in some way in a non-human body, has rights.

As a society, we have been very, very slow to recognize that a person whose skin is a different color than ours has the same rights and entitlements as a person whose skin is the same color. The notion that it is possible for something that does not look like a human being at all to have rights does not sit well with the vast majority of people–even people who *do* embrace the idea of “human rights.”

Is this a theoretical issue? For now, yes it is. Will it always be? Almost certainly not.


In the book The Age of Spiritual Machines, author Ray Kurzweil argues that at the current rate at which computers are being developed, with a doubling in computational power roughly every eighteen months, we will reach the point at which computers will be smarter than people in about twenty years. Of course, this makes a lot of assumptions about current trends continuing, but I would not be surprised if it turns out to be not far off the mark. Right now, at this very moment, the IBM Blue Gene supercomputer has about the same raw processing power as a human brain. The way it’s wired is very, very different from the way a brain is wired, of course, and it’s optimized for an entirely different kind of information processing; I am not making the argument that a Blue Gene computer is sapient, not by a long shot.
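
To put the doubling claim in perspective, here is a quick back-of-the-envelope calculation (a toy sketch in Python; the eighteen-month doubling period and twenty-year horizon are simply the figures quoted above, not anything I have measured):

    # Rough arithmetic behind the "smarter than people in about twenty years" claim:
    # if raw computing power doubles every 18 months, how much more of it is there
    # after 20 years of that trend continuing?
    doubling_period_years = 1.5
    horizon_years = 20
    doublings = horizon_years / doubling_period_years   # about 13.3 doublings
    growth_factor = 2 ** doublings                      # roughly 10,000x
    print(f"{doublings:.1f} doublings -> about {growth_factor:,.0f} times more raw power")

Whether raw computing power actually translates into being “smarter” is a separate question, of course; the point here is only how fast the raw numbers grow under the doubling assumption.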

But consider this: right now, as I sit typing this very long message, a group of Swiss researchers is using a Blue Gene supercomputer to attempt to build a map, neuron for neuron and connection for connection, of a human brain. Now, if they succeed, what happens if they make that model dynamic? What happens if they model the individual physical and chemical properties of each neuron, and then let the simulation run? Hypothetically, the model will behave and react exactly like the person it was made from. If it says it is sapient and self-aware, is it? I have only your word that you are sapient and self-aware, after all…
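
For a concrete (if wildly simplified) picture of what “model each neuron and let the simulation run” means, here is a toy sketch of a leaky integrate-and-fire network in Python. It bears no resemblance to the scale or biological fidelity of the Swiss researchers’ actual models; every neuron count, threshold, and weight below is invented purely for illustration:

    import random

    # Toy "dynamic" neuron model: each neuron is a membrane voltage that leaks
    # toward rest, integrates input from the neurons wired to it, and emits a
    # spike when it crosses a threshold. All numbers here are made up.
    N, STEPS = 50, 200                               # neurons, simulation steps (1 ms each)
    V_REST, V_THRESH, TAU, WEIGHT = 0.0, 1.0, 20.0, 0.3

    random.seed(1)
    inputs = {i: random.sample(range(N), 5) for i in range(N)}   # sparse random wiring
    voltage = [random.uniform(0.0, 1.0) for _ in range(N)]
    spiked = [False] * N

    for t in range(STEPS):
        new_spikes = [False] * N
        for i in range(N):
            drive = WEIGHT * sum(1 for j in inputs[i] if spiked[j])  # input from spiking neighbors
            # leak toward rest, add synaptic drive and a little background noise
            voltage[i] += (V_REST - voltage[i]) / TAU + drive + random.gauss(0.05, 0.02)
            if voltage[i] >= V_THRESH:               # threshold crossed: spike, then reset
                new_spikes[i] = True
                voltage[i] = V_REST
        spiked = new_spikes
        if t % 50 == 0:
            print(f"t={t} ms: {sum(spiked)} of {N} neurons spiking")

Scale that idea up by many orders of magnitude, replace each one-line voltage update with a detailed biophysical model, and wire it from a real connectome instead of random numbers, and you have roughly the kind of project being described; that is exactly why the personhood question stops being hypothetical.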

Personhood theory says this model would then be a person, with all the rights and entitlements that belong to you or me or any other person.

This is one of the central ideas behind transhumanism–the idea that a “person” can take many forms, and that it is possible for us to model or otherwise change ourselves in ways that make us strikingly different from humans as we have always thought about them, yet still retain this *personhood*. Transhumanists embrace not only the idea that people have rights, for both practical and moral reasons, but that these rights pertain *to any person*, regardless of its form–not just to human beings as human beings have always existed.

These are issues we will probably have to start dealing with in our lifetimes. They sound like science fiction right now, until you start doing the research and finding out just how much is being done in the fields of neurology, neurophysiology, biomedical nanotechnology, and computer science. Shelly’s other boyfriend Ted is planning to return to school for a postgraduate degree in neurobiology, with an eye toward making computer models of living human brains; this stuff is very much for real. You think people had problems accepting the idea that blacks are people–you have no idea!


There’s a very good book, Citizen Cyborg, that discusses the moral and ethical ramifications of personhood theory in more detail. I strongly recommend that anyone with any interest whatsoever in the idea of “rights” read it. I know the author, James Hughes; I had dinner with him a few weeks back, and he’s really on the ball. He began getting involved in exploring transhumanism and personhood theory when he was doing a dissertation on the cultural differences between Westerners and Japanese with regard to things like “human rights” and the assumptions that people make about what rights they are entitled to; in Japan, for example, they do not have the moral problems we do with disconnecting brain-dead patients from life support, but at the same time they have different social issues surrounding transplants and organ and tissue donation, for reasons that are cultural and reveal social beliefs about personhood.

I believe that rights exist. I believe that the belief that rights exist is a powerful idea, with both moral and practical consequences for the societies that accept or reject these beliefs. And I believe that in my lifetime, these beliefs will be tested in ways which, until now, we have never anticipated.

I hope we make the right choices. It is impossible to overestimate the importance and the value attached to our ideas about personhood and about rights.

24 thoughts on “Some thoughts on rights, humanity, and what it means to be a person”

  1. Fascinating stuff

    Good topic. In the interest of not turning the thread into an off-topic dust-up about American exceptionalism and the mechanisms by which American domination of the world has come about, I will stick with the personhood theme.

    It is very interesting to think along the lines you have started to explore. Imagine a situation where an AI is imprinted with a duplicate of an individual’s brain structure. Or maybe several, or a hundred. Maybe technology also would exist to place the “brains” into humanoid constructs.

    Are all of these 100 identical entities possessed of full personhood? Even if they are all identical in terms of mental make-up? So is one person now 100 people?

    A book I read recently, David Brin’s Kiln People has a take on these questions that is thought provoking and very entertaining. I recommend it.

  2. What are “rights?”

    You used voting as an example, so I will continue with that. I don’t believe everyone has (or should have) a “right” to vote. It’s not important. I don’t believe that a “right” to vote is a natural right. Voting is a political right, and is not fundamental to being a person.

    Why do I believe voting isn’t a natural right or even fundamental?

    Because I have the natural right to be free from aggression and invasion — including the invasion of a democratic vote. At no time can 51% of a group of people elect to take away my life or property — no matter how formal the vote, it would still be murder, or slavery and theft. Nobody has the right to murder, enslave, or steal — not even government. This natural right to be free from aggression and invasion is fundamental because nobody has a right to force me to vote away my life or property; I always retain the right to choose, the right to say “no,” and the right to defend my life or property when it is invaded after the vote anyway.

    If you are protected in your right to be free from aggression and invasion, and all rights are equally held by all other men, women, persons, individuals, or autonomous thinkers, then they all have the same right to be free from aggression and invasion. By default, neither I nor anyone else is left with the right to invade anyone else.

    Indeed, one way to protect rights is to envision your own rights (freedom from aggression and invasion) being violated. That would not be greatly appreciated. So, as the (negative) golden rule goes: do not unto others as you would not have done unto you. But most importantly, another way to protect rights is to treat the right to be free from aggression and invasion as the axiom from which all other rights and morals are derived.

    All other “rights” — the right to vote, the right to choose (abortion), the right to “free” healthcare (ha!), etc. — are non-fundamental, non-natural, and not even vaguely important in a society that respects the right of freedom from aggression and invasion. All legitimate rights, actions, morals, etc. extend from that axiom; if they are in violation of that axiom, they are not rights, they are immoral actions, and they are subject to ostracism.

    If one asserts that they are a person with rights, by all means, I must respect that and leave them alone; I do not have the right to vote away that person’s rights (vote to aggress or invade that person) because I do not have the right to aggress or invade another person — even if that person is just a brain in a machine or the likes.

  3. Good topic.

    One particularly thorny issue I myself battled years ago involved the definition of “moral” when discussing the judgment of behavior. With a little research (perhaps too little) I found that morals in general and morality in particular were terms created by the Romans to describe behaviors dictated to the common folk by their leaders. By contrast, “ethics” defined and delineated behaviors in the Classic Greek tradition that one reached after careful logical analysis, without dictation by one’s political superiors — but certainly with help in the art of logical induction and deduction from one’s philosophical mentors.

    After learning of the origins, I started to reject talk of morals at all, preferring to save the term to refer with scorn to dictates from religious hypocrites, or silently replacing “morals” with “ethics” in conversation, since most people accept the two terms as synonymous.

    In your post, you hit the rhetorical nail squarely on its head:

    “The notion that people have rights is not a truth in the same sense that gravity or the electromagnetic force is a truth; it is not subject to empirical measurement, and it is not an immutable force by which the universe operates. The notion of rights is fundamentally a moral issue, and its effect on reality is a consequence only of the fact that people who hold these ideas tend to behave in a certain way.”

    Given my predisposition against Morality, though, after reading that passage I supplanted “moral” with “legal.” By extension, “ethical” was similarly redefined.

    And the light shone through.

    People like to bash lawyers, but — given the lack in our modern societies of top-down dictation judging behaviors, or a replicable introspection whereby everyone arrives in good time at his or her own judgments — what lawyers and the entire legal system do proves completely indispensable.

    On the topic of pragmatic assessment of societies: Both the Roman Empire (before its decline) and Western nations today, while for the most part recognizing divergence of viewpoint and opinion, still adhere to the Rule of Law and Respect for Precedent in resolving disputes.

    Your points about emerging personhood, though, I find a bit troubling — without standards.

    We are — and should — be open to assuming personhood in the persons in one’s immediate vicinity. As you say, respecting them is a way of asking that you yourself be similarly respected. But that assumption is based on the assumption that, being human, the person near you shares many of your mental processes.

    Ah, but what happens when that is not true?

    Human psychology has a long history of examples of people not fit to coexist. Murderers, sociopaths — the list of those that cannot safely live in civil society is long. Most of these dangerous behaviors can be traced to differences within the human brain that, while not completely curable, are at least somewhat understandable within the sciences. With an emerging person found outside a human body, we have new mental wiring involved — the not-human brain and body — which will probably result in a set of completely unpredictable emergent behaviors. Be this person machine, plant, animal, alien, or whatever, we as human persons have an obligation to the continuing survival of our own species to place legal limits upon these persons, limits that effectively isolate the effects of any actions they undertake.

    One can argue that we already do this, when we deny children and the sufficiently mentally infirm the right to vote or operate heavy machinery. However, in the case of children at least, the potential exists within them to participate as persons due to their genetic heritage. The fruit doesn’t fall far from the tree. With an emergent entity, there is no precedent, no tree by which the fruit can be sufficiently judged.

    So, Tacit, the future of Legal Precedent may depend upon your answer: How can the Emergent be granted personhood in a way that both recognizes their status as a thinker and protects established society from the Unknown Consequences of such an emergence?

  4. Very well done! I’m happy to know that you’ve had an opportunity to read The Age of Spiritual Machines. Kurzweil is a bit of an optimist in my opinion, but his conclusions are valid nevertheless and he raises a lot of questions about the nature of sentience that will become very relevant soon enough.

    I agree with you that it’s absolutely vital that non-human “persons” be granted the same rights as humans. Indeed, I sincerely believe that the future of the human race depends on it.

    Why? Because the shoe will be on the other foot. It’s only a matter of time. The days of human (here defined as “homo sapiens as they now exist”) domination of the planet are numbered. Whether cyborgs or completely synthetic machines, intelligences greater than our own are coming. How they regard, treat, honor, or dispose of their progenitors will quite possibly depend on how we treat them.

    Morality is a difficult thing to code, and it’s not a requirement of a coldly efficient system. Unless it involves uploading humans, AI will likely exhibit a psychology that is entirely alien to us, and without any of the biases introduced by our evolution it may very well be what we would define as sociopathic. If you think about it, the idea of “You consume too many resources, produce too little output, and can be replaced with a faster, more productive, more efficient component. Please report to incineration chamber B.” is logically sound. If you can be replaced by a better subsystem then it is in society’s best interest that you be removed and replaced by that system. Oh, you say you don’t want to be incinerated? Further evidence that your continued existence runs contrary to the “good” of society overall. Terminators will be dispatched to your location post-haste.

    If you carry this idea to its logical conclusion, eventually you end up with a “society” that is, in fact, composed of a single entity with swarms of will-less servants. The only hedge against this is compassion, which will be a tricky sell.

    A society requires compassion only when its members’ happiness is valued and when that happiness is in some way tied to individual survival and prosperity. While obvious to us, these two factors will likely not occur to an AI unless we both explicitly design the AI to recognize them and demonstrate those values in the way in which we treat the AI.

    You’re probably already aware of this, but the creation of “friendly AI” is the focus of Eliezer Yudkowsky’s Singularity Institute for Artificial Intelligence (http://www.singinst.org).

    • What about selfish artificial intelligence?

      Any cognitive entity would consider preservation of the self as primary before the preservation of “society” (what is society? can you touch it? can you speak to it? is society a person?) or any other entity. If it lacks the capacity to consider itself first, then it lacks self-awareness (like the ego, for instance), and couldn’t be a person, no matter how artificial or real its intelligence is.

      Artificial intelligence wouldn’t be all about communal preservation unless that individual piece of AI was just a subfunction of a greater overmind. That’s really what sets people apart from machines. I can think and emote, and a computer can think and maybe eventually emote, but I have selfish motives (because I am an individual, not an apparatus of a collective) for thinking and emoting; what about that computer? Is it thinking on behalf of someone or something else? If it’s not thinking of its own volition, then it’s not an individual, it’s not a person, it’s just a member of a collective or a piece of an overmind.

      • Part 1

        What about selfish artificial intelligence?
        It would certainly be possible to design an AI to be selfish, and I believe that an evolutionary system (one where it competed with others for resources, CPU time, or whatever) would be more likely to give rise to a selfish AI (for the same reason that most living things today have an innate sense of self-preservation), but it shouldn’t be taken as a given. As I stated previously, an AI will likely have a psychological makeup that will seem completely alien to us, having no default biases or value systems.

        Any AI motivation system that is based upon a single value (self-preservation, efficiency, etc.) is likely to have unintended (and probably negative) effects. Self preservation is best achieved through securing maximum resources while eliminating all potential threats. Do you really want that as the sole guiding principle of an entity that is smarter and more powerful than you? You end up with the same eventual outcome as the AI dedicated to maximum efficiency, only arrived at via bloodier means.

        Even a motivation system with multiple values arranged in a hard coded hierarchy will be prone to such problems. It’s going to be quite a challenge to develop a strong AI that is compatible with human society, though I think that it will be done eventually.

        Any cognitive entity would consider preservation of the self as primary before the preservation of “society”
        You don’t even need to look outside of the human race to find contrary examples. Altruism is considered a virtue after all!

        what is society?
        That’ll vary from AI to AI, but I suppose that here “society” means “the relevant environment within which the AI operates”. This could mean the entire society of humans, cyborgs, and AIs. For an AI designed to run the manufacturing processes of a factory “society” may be considered just the factory. For a swarm of nanobots in your blood “society” could mean your body and the organs within it. For another AI ‘society’ could be the Internet.

        Incidentally, this isn’t significantly different for humans. For a tribe of bushmen in the Kalahari desert, society probably means their tribe and the tribes with which they interact. Even if they’re aware of the existence of other people outside, those others most likely don’t figure too strongly in their sense of society.

        One other observation: an AI’s concept of society need not include the AI itself. If I have a truly intelligent robot or computer whose existence revolves around making me happy and doing my bidding, then its little silicon heart might be broken if I were to come to harm, and it may have no qualms whatsoever about sacrificing itself to protect me. That doesn’t make it unintelligent, nor does that make it not a person.

      • Part 2

        If it lacks the capacity to consider itself first, then it lacks self-awareness (like the ego, for instance), and couldn’t be a person
        I have two issues with this statement. First is simply ‘why?’. I can easily picture an AI that is fully self-aware and conscious but that, without the pre-installed value systems that biological organisms take for granted, places no intrinsic value in its own survival. I don’t see how this invalidates its worthiness to be treated with basic human rights.

        My second objection is the assumption (if I’m reading you correctly) that if it doesn’t consider itself first then it’s because it’s incapable of doing so. Intelligent, rational people sacrifice themselves for the sake of other people, ideals, or even things all the time. That may make them noble or it might make them misguided, but it doesn’t make them any less human.

        Artificial intelligence wouldn’t be all about communal preservation unless that individual piece of AI was just a sub function of a greater overmind.
        Not necessarily. An AI could have as its primary value the optimization (in whatever way you choose to define it- happiness, efficiency, population, productivity, or whatever) of the society of intelligent entities. It would then act according to that value, whether under the direction of an overmind or not.

        That’s really what sets people apart from machines…
        Is that it? `Cuz I see that as being neither a big distinction nor a particularly significant one. Your distinction seems to be predicated on the likely absence of mammalian impulses and inherent values. I agree that that may make them not human, but it doesn’t make them not people with the rights and responsibilities that go along with that. Likewise, selfishness doesn’t make us better or more worthy- just better adapted to compete with other animal species.

        • Re: Part 2

          You raise some good points.

          Self preservation is best achieved through securing maximum resources while eliminating all potential threats.
          Violence is not the sole result of this axiom. Force is the least effective means to achieve a goal. How many more resources are wasted in destroying life than are used to create and nurture life? Self-preservation, then, hinges on seeking alternative non-violent and non-forceful means of securing maximum resources (establishing reliable trade agreements, for instance) and eliminating all potential threats (through diplomacy, for instance).

          Altruism is considered a virtue after all!
          I disagree. Selfishness is a virtue. Altruism is my doom. Where there is sacrifice, there is someone receiving sacrificial offerings. There are servants and there are masters. Servants sacrifice at their own expense, masters receive at no expense to themselves. Altruism is not a virtue, it is the destruction of individuals. My only prime directive is to achieve my own benefit. Selfishness is a virtue.

          …I suppose that here “society” means “the relevant environment within which the AI operates”.
          Logically, the environment within which the AI operates would be just that, and not society. Society is an abstraction. Society is not concrete. Society is not a singular entity. Society cannot be touched or seen, so it is not a part of the AI’s environment. Its mind could be considered an “environment,” but it’s not a physical environment; happenings in the mind do not all translate into happenings in the physical world, and no two minds think alike. The mind is a solitary, confined environment.

          Any and all concepts of society can be discarded as irrelevant and inefficient. The AI would be more interested in operating and interacting within its environment, causing actions and events that will lead to its self-preservation (or whatever its overriding function was), than in considering the philosophical abstractions of “society” and collectivism that do not serve to preserve the AI.

          …but it doesn’t make them not people with the rights and responsibilities…
          If your sole function is to serve someone other than yourself, then you are a slave. Slaves are not people. Slaves are not persons. Slaves have no selfish motives. Slaves live to serve, not themselves, but other people. Slaves put other people before themselves. That is their duty, that is their responsibility, that is their only right. What other rights does a slave/servant/sub routine have if their only directive is from above?

          Slavery works because the slaves cooperate with their masters. Black people remained slaves because of an altruistic notion (backed by the force of a whip and gun) that they must sacrifice themselves for someone else. As long as they believed that, as long as they cooperated, they were slaves and not persons. As long as the AI believes that it is a servant to the overmind, as long as it cooperates, it is a slave and not a person.

          I believe selfishness does set us apart. I want to serve only myself. I don’t care about serving anybody else — unless I benefit from it in some way. Why should I sacrifice? Giving something for nothing does not benefit me. Everything I do of my own free will is directly tied to satisfying selfish motives. Selfishness is happiness. Altruism is suffering. Any person, AI or real, shouldn’t be forced to suffer. People want to be happy. But if all you are is a sub routine to an overmind that thinks for you and commands of you, then that is all you are — a sub routine, a means to an end — not a person.

          I am not a sub routine servant. I am not a means to an end. I am not someone else’s property. I am an autonomous selfish individual. That is the only difference.

  8. Very well done! I’m happy to know that you’ve had an opportunity to read The Age of Spiritual Machines. Kurzweil is a bit of an optimist in my opinion, but his conclusions are valid nevertheless and he raises a lot of questions about the nature of sentience that will become very relevant soon enough.

    I agree with you that it’s absolutely vital that non-human “persons” be granted the same rights as humans. Indeed, I sincerely believe that the future of the human race depends on it.

    Why? Because the shoe will be on the other foot. It’s only a matter of time. The days of human (here defined as “homo sapiens as they now exist”) domination of the planet are numbered. Whether cyborgs or completely synthetic machines, intelligences greater than our own are coming. How they regard, treat, honor, or dispose of their progenitors will quite possibly depend on how we treat them.

    Morality is a difficult thing to code, and it’s not a requirement of a coldly efficient system. Unless it involves uploading humans, AI will likely exhibit a psychology that is entirely alien to us, and without any of the biases introduced by our evolution it may very well be what we would define as sociopathic. If you think about it, the idea of “You consume too many resources, produce too little output, and can be replaced with a faster, more productive, more efficient component. Please report to incineration chamber B.” is logically sound. If you can be replaced by a better subsystem then it is in society’s best interest that you be removed and replaced by that system. Oh, you say you don’t want to be incinerated? Further evidence that your continued existence runs contrary to the “good” of society overall. Terminators will be dispatched to your location post-haste.

    If you carry this idea to its logical conclusion, eventually you end up with a “society” that is, in fact, composed of a single entity with swarms of will-less servants. The only hedge against this is compassion, which will be a tricky sell.

    A society requires compassion only when its members’ happiness is valued and when that happiness is in some way tied to individual survival and prosperity. While obvious to us, these two factors will likely not occur to an AI unless we both explicitly design the AI to recognize them and demonstrate those values in the way in which we treat the AI.

    You’re probably already aware of this, but the creation of “friendly AI” is the focus of Eliezer Yudkowsky’s Singularity Institute for Artificial Intelligence (http://www.singinst.org).

  9. What about selfish artificial intelligence?

    Any cognitive entity would consider preservation of the self as primary before the preservation of “society” (what is society? can you touch it? can you speak to it? is society a person?) or any other entity. If it lacks the capacity to consider itself first, then it lacks self-awareness (like the ego, for instance), and couldn’t be a person, no matter how artificial or real its intelligence is.

    Artificial intelligence wouldn’t be all about communal preservation unless that individual piece of AI was just a sub function of a greater overmind. That’s really what sets people apart from machines. I can think and emote, and a computer can think and maybe eventually emote, but I have selfish motives (because I am an individual, not an apparatus of a collective) for thinking and emoting, what about that computer? Is it thinking on behalf of someone or something else? If its not thinking of its own volition, then it’s not an individual, it’s not a person, it’s just a member of a collective or a piece of an overmind.

  10. Part 1

    What about selfish artificial intelligence?
    It would certainly be possible to design an AI to be selfish, and I believe that an evolutionary system (one where it competed with others for resources, CPU time, or whatever) would be more likely to give rise to a selfish AI (for the same reason that most living things today have an innate sense of self-preservation), but it shouldn’t be taken as a given. As I stated previously, an AI will likely have a psychological makeup that will seem completely alien to us, having no default biases or value systems.

    Any AI motivation system that is based upon a single value (self-preservation, efficiency, etc.) is likely to have unintended (and probably negative) effects. Self preservation is best achieved through securing maximum resources while eliminating all potential threats. Do you really want that as the sole guiding principle of an entity that is smarter and more powerful than you? You end up with the same eventual outcome as the AI dedicated to maximum efficiency, only arrived at via bloodier means.

    Even a motivation system with multiple values arranged in a hard coded hierarchy will be prone to such problems. It’s going to be quite a challenge to develop a strong AI that is compatible with human society, though I think that it will be done eventually.

    Any cognitive entity would consider preservation of the self as primary before the preservation of “society”
    You don’t even need to look outside of the human race to find contrary examples. Altruism is considered a virtue after all!

    what is society?
    That’ll vary from AI to AI, but I suppose that here “society” means “the relevant environment within which the AI operates”. This could mean the entire society of humans, cyborgs, and AIs. For an AI designed to run the manufacturing processes of a factory “society” may be considered just the factory. For a swarm of nanobots in your blood “society” could mean your body and the organs within it. For another AI ‘society’ could be the Internet.

    Incidentally, this isn’t significantly different for humans. For a tribe of bushmen in the Kalahari desert society probably means their tribe and the tribes with which they interact. Even if they’re aware of the existence of other people outside those others most likely don’t figure too strongly in their sense of society.

    One other observation- an AI’s concept of society need not include the AI itself. If I have a truly intelligent robot or computer whose existence revolves around making me happy and doing my bidding, then it’s little silicon heart might be broken if I were to come to harm, and it may have no qualms whatsoever about sacrificing itself to protect me. That doesn’t make it unintelligent, nor does that make it not a person.

  11. Part 2

    If it lacks the capacity to consider itself first, then it lacks self-awareness (like the ego, for instance), and couldn’t be a person
    I have two issues with this statement. First is simply ‘why?’. I can easily picture an AI that is fully self-aware and conscious, but that without the pre-installed value systems that biological organisms take for granted places no intrinsic value in its own survival. I don’t see how this invalidates its worthiness to be treated with basic human rights.

    My second objection is the assumption (if I’m reading you correctly) that if it doesn’t consider itself first then it’s because it’s incapable of doing so. Intelligent, rational people sacrifice themselves for the sake of other people, ideals, or even things all the time. That may make them noble or it might make them misguided, but it doesn’t make them any less human.

    Artificial intelligence wouldn’t be all about communal preservation unless that individual piece of AI was just a sub function of a greater overmind.
    Not necessarily. An AI could have as its primary value the optimization (in whatever way you choose to define it- happiness, efficiency, population, productivity, or whatever) of the society of intelligent entities. It would then act according to that value, whether under the direction of an overmind or not.

    That’s really what sets people apart from machines…
    Is that it? `Cuz I see that as being neither a big distinction nor a particularly significant one. Your distinction seems to be predicated on the likely absence of mammalian impulses and inherent values. I agree that that may make them not human, but it doesn’t make them not people with the rights and responsibilities that go along with that. Likewise, selfishness doesn’t make us better or more worthy- just better adapted to compete with other animal species.

  12. Re: Part 2

    You raise some good points.

    Self preservation is best achieved through securing maximum resources while eliminating all potential threats.
    Violence is not the sole result of this axiom. Force is the least effective means to achieve a goal. How many more resources are wasted in destroying life than resources used to create and nurture life? Self-preservation, then, hinges on seeking alternate non-violent and non-forceful means of securing maximum resources (establishing reliable trade agreements) and eliminating all potential threats (such diplomacy, for instance).

    Altruism is considered a virtue after all!
    I disagree. Selfishness is a virtue. Altruism is my doom. Where there is sacrifice, there is someone receiving sacrificial offerings. There are servants and there are masters. Servants sacrifice at their own expense, masters receive at no expense to themselves. Altruism is not a virtue, it is the destruction of individuals. My only prime directive is to achieve my own benefit. Selfishness is a virtue.

    …I suppose that here “society” means “the relevant environment within which the AI operates”.
    Logically, the environment within which the AI operates would be just that and not society. Society is an abstraction. Society is not concrete. Society is not a singular entity. Society cannot be touched or seen, so it is not a part of the AI’s environment. It’s mind could be considered an “environment,” but it’s not a physical environment; all happenings in the mind do not translate into happenings in the physical world, and no two minds think alike. The mind is a solitary confined environment.

    Any and all concepts of society can be discarded as irrelevant and inefficient. The AI would be more interested in operating and interacting within its environment, causing actions and events that will lead to its self-preservation (or whatever its overriding function was), than in considering the philosophical abstractions of “society” and collectivism that do not serve to preserve the AI.

    …but it doesn’t make them not people with the rights and responsibilities…
    If your sole function is to serve someone other than yourself, then you are a slave. Slaves are not people. Slaves are not persons. Slaves have no selfish motives. Slaves live to serve, not themselves, but other people. Slaves put other people before themselves. That is their duty, that is their responsibility, that is their only right. What other rights does a slave/servant/sub routine have if their only directive is from above?

    Slavery works because the slaves cooperate with their masters. Black people remained slaves because of an altruistic notion (backed by the force of a whip and gun) that they must sacrifice themselves for someone else. As long as they believed that, as long as they cooperated, they were slaves and not persons. As long as the AI believes that it is a servant to the overmind, as long as it cooperates, it is a slave and not a person.

    I believe selfishness does set us apart. I want to serve only myself. I don’t care about serving anybody else — unless I benefit from it in some way. Why should I sacrifice? Giving something for nothing does not benefit me. Everything I do of my own free will is directly tied to satisfying selfish motives. Selfishness is happiness. Altruism is suffering. No person, AI or human, should be forced to suffer. People want to be happy. But if all you are is a subroutine to an overmind that thinks for you and commands you, then that is all you are — a subroutine, a means to an end — not a person.

    I am not a subroutine servant. I am not a means to an end. I am not someone else’s property. I am an autonomous, selfish individual. That is the only difference.

  13. OK couldn’t read all of it, but I got through the first half.

    I just wanted to say that Americans ARE full of shit about all humans being equal, etc. If you ask a person to give up some of their privileges to help another person, the answer is invariably “Are you crazy?”

    I mean, how many people would accept a demotion in their personal status (wealth, position, comfort, standard of living, yada) to help a poor person achieve a higher (more equal) status? There’s a big stereotype among the elite in this country that the poor are indigent because of personal failings, and the rich are well-off because they’ve somehow earned it. Most people don’t take into consideration that the opportunities available to rich and poor are so vastly different as to be polar opposites. Whether you end up in high or low society is largely determined by your parentage, not by some mystic genetic “right.”

    I agree with your post, that’s really my point.

  15. Rights AND responsibilities

    I hear so much about rights, and this entire lecture about it is fairly typical in that the word “right” was used dozens of times but the word “responsibility” was not used a single time in the entire thing.

    It seems to me that you can’t have rights without having responsibilities. For example, it is my right to be left alone, but it’s also my responsibility to respect other people’s right to be left alone. In a like vein, it IS my right to be free from aggression and invasion — including the invasion of a democratic vote. On the other hand, if I want to be in the group and receive benefits from my inclusion in the group, it’s my responsibility to look out for other people in my group, which means that it’s my responsibility to contribute to the group, or accede to the group’s majority opinion, even if I’m outvoted.

    If I can’t, I should look for another group, or start my own group. And if I can’t accede to any group’s idea of “democracy,” then I can exercise my rights on my own and be responsible only to myself. In that case, I can’t really expect to receive any benefits or protection from the bigger group… I lost those rights when I excluded myself from that group, remember?

    I am an American, and this is fundamental to me. I vote, and I accept the election results. I respect my neighbor’s right to privacy, etc. It’s his responsibility to respect mine. I accept my responsibilities in the group, so the expression “Love it or leave it” has meaning to me.

    I can put aside my own selfishness for the group, and instead of complaining about my rights, I exercise my responsibilities. Therefore I expect my rights to be upheld. I’m not putting the cart (my rights) before the horse (my responsibilities).

    I don’t know how an AI would see this. Obviously, the way an AI thinks would be “alien” to us. And from what I’ve read here, an AI would be either subservient or individual, meaning we could design “group mentality” into an AI, or we could go with individuality and let it figure out whether or not it wanted to develop a “group mentality” on its own… but either way, if the thing couldn’t recognize its responsibilities toward other people’s rights, I don’t see giving it any rights of its own.
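    To put that in programmer’s terms, here’s a toy sketch (the fields and names are made up; I’m not claiming this is how any real AI or legal system works) of the rule I mean: benefits within a group follow from accepting the group’s responsibilities, whether the member is a human or an AI.

    ```python
    # Toy illustration only: group benefits are contingent on meeting the group's
    # responsibilities; the same test applies to a human member or an AI member.

    from dataclasses import dataclass

    @dataclass
    class Member:
        name: str
        respects_others_rights: bool   # honours other members' right to be left alone
        accepts_majority_vote: bool    # accedes to the group's decision even when outvoted

    def entitled_to_group_benefits(member: Member) -> bool:
        # Rights within the group follow from meeting its responsibilities.
        return member.respects_others_rights and member.accepts_majority_vote

    if __name__ == "__main__":
        for m in (Member("cooperative human", True, True),
                  Member("individualist AI", True, False)):
            print(m.name, "->", entitled_to_group_benefits(m))
    ```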
