Teaching a Dog Calculus

This is actually a post about transhumanism and Outside Context Problems, and an epiphany I had last time I was in Chicago.

But first…

God damn did I wake up with a bad case of the hornies this morning. Jesus Christ in Heaven, I want to fuck. I want to feel soft skin against mine. I want to trace the curve of the neck with teeth and tongue. I want to hear the little intake of breath when I discover a sensitive spot. I want to rest my hand on the curve of the hip, I want to explore the roundness of breast with my fingertips. I want to run fingernails lightly up the back of the neck and see goosebumps form. Holy fuck it’s distracting.

Also, when I crawled out of bed and stumbled into the bathroom this morning, I was all like “Ow! Ow! Ouch! Ow! What the hell?” Some time last night, it seems, the cat had scoured the house for every smallish, vaguely cylindrical object he could find, and hidden them all underneath the rug in the bathroom. Pens, a plastic travel tube of Advil, a small bullet vibrator, an AA battery…it was like walking on marbles. WTF?

None of that is what I’m actually here to say.


I’ve been thinking a great deal these days about Outside Context Problems. Put briefly, an Outside Context Problem is what happens when a group, society, or civilization encounters something so far outside its own context and understanding that it is not able even to understand the basic parameters of what it has encountered, much less deal with it successfully. Most civilizations encounter such a problem only once.

For example, you’re a Mayan king. Life is pretty good for you; you’ve built a civilization at the pinnacle of technological achievement, you’ve dominated and largely pacified any competition you might have, you’ve created many wondrous things, and life is pretty comfortable.

Then, all at once, out of the blue, some folks clad in strange, impervious silver armor show up at your doorstep. They carry long sticks that belch fire and kill from great distances; some of them appear to have four legs; they claim to come from a place that you have never in your entire life even conceived might exist…

Civilizations that encounter Outside Context Problems end. Even if some members of the civilization survive, the civilization itself is irrevocably changed beyond recognition. Nothing like the original Native American societies exists today in any form that the pre-Columbians would recognize.

Typically, we think of Outside Context Problems in terms of situations that arise when one society has contact with another society that’s radically different and technologically far more advanced. But I don’t think it necessarily has to be that way.


In a sense, we are, right now, hard at work building our own Outside Context Problem, and it’s going to be internal, not external.

Right now, as I type this, one of the hottest fields of biomedical research is brain mapping and modeling. I’ve mentioned several times in the past the research being done by a Swiss group to model a mammalian brain inside a supercomputer; such a model is essentially a neuron-by-neuron, connection-by-connection emulation of a brain in a computer. Such an emulation will, presumably, act exactly like its biological counterpart; it is the connections and patterns of information, not the physical wetware, that make a brain act the way it does.
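
(To make that concrete: here’s a toy sketch, in plain Python, of what “neuron-by-neuron, connection-by-connection” means. It’s a cartoon I wrote for illustration–a leaky integrate-and-fire network–and has nothing to do with the Swiss group’s actual code, which tracks ion channels, dendrite geometry, and far more.)

```python
import random

# Cartoon of neuron-by-neuron emulation: each model neuron leaks
# charge, integrates spikes from the neurons wired to it, and fires
# past a threshold. Illustration only.

N = 100          # model neurons
K = 10           # incoming connections per neuron
THRESHOLD = 1.0  # firing threshold
LEAK = 0.9       # fraction of charge that survives each time step
WEIGHT = 0.2     # charge delivered by one incoming spike

random.seed(1)
wiring = [random.sample(range(N), K) for _ in range(N)]
charge = [0.0] * N
spiking = [False] * N

for step in range(20):
    nxt = []
    for i in range(N):
        v = charge[i] * LEAK + random.uniform(0.0, 0.1)   # leak + noise
        v += WEIGHT * sum(spiking[j] for j in wiring[i])  # synaptic input
        nxt.append(v)
    spiking = [v >= THRESHOLD for v in nxt]
    charge = [0.0 if s else v for v, s in zip(nxt, spiking)]  # reset on fire
    print(f"step {step}: {sum(spiking)} of {N} neurons fired")
```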

This group claims to be ten years from being able to model a human brain inside a computer. Ten years, and we may see the advent of true AI.


Let me backtrack a little. The field of AI has, so far, been disappointing. For decades, we have struggled to program computers to be smart. The problem is, we don’t really know what we mean by “smart.” Intelligence is not an easily defined thing, and it’s not as if you can sit down and break generalized, adaptive intelligence into a sequence of steps.

Oh, sure, we’ve produced expert systems that can design computer chips, simulate bridges, and play chess far better than a human can. In fact, we don’t even have grandmaster-level human/machine chess tournaments any more, because the machines always win. Always. Deep Blue, the supercomputer that beat human grandmaster Garry Kasparov in a much-publicized tournament, is by modern standards a cripple; ordinary desktop PCs today are more powerful.

But these are simple, iterative tasks. A chess-playing computer isn’t smart. It can’t do anything besides play chess, and it approaches chess as a simple iterative mathematical problem. That’s about where AI has been for the last four decades.
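
(If you want to see how un-mysterious that kind of “intelligence” is, here’s the skeleton of the game-tree search at the core of classical chess programs, stripped to a few lines of Python. The move generator and scoring function are hypothetical placeholders–the point is that the whole thing is just recursive bookkeeping.)

```python
# Bare-bones minimax: the "simple iterative mathematical problem" at
# the heart of classical chess engines. `legal_moves`, `play`, and
# `score` are hypothetical stand-ins for a real move generator and
# a real evaluation function.

def minimax(position, depth, our_turn, legal_moves, play, score):
    moves = legal_moves(position)
    if depth == 0 or not moves:
        return score(position)  # static evaluation at the leaf
    results = [minimax(play(position, m), depth - 1, not our_turn,
                       legal_moves, play, score)
               for m in moves]
    # Grind the tree: best case on our turn, worst case on theirs.
    return max(results) if our_turn else min(results)
```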

New approaches, though, are not about programming computers to act smart. They are about taking systems which are smart–brains–and rebuilding them inside a computer. If this approach works, we will create our own Outside Context Problem.


Human brains are pretty pathetic, from a hardware standpoint. Our neurons are painfully, agonizingly slow. They are slow to respond, they are slow to fire, they are slow to reset after they have fired, and they are slow to form new connections. All these things limit our cognitive capabilities; they impose constraints on how adaptable our intelligence is, and how smart we can become.

Computers are fast. They encode new information rapidly and efficiently. Raw computing power available from a given square inch of silicon real estate doubles roughly every eighteen months. Modeling a brain in a computer removes many of the constraints; such a modeled brain can operate more quickly and more efficiently, and as more computer power becomes available, the complexity of the model–the number of neurons modeled, the richness of the interconnections between them–increases too.
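
(The gap in raw timescales is worth spelling out. These are round, order-of-magnitude assumptions–not measurements–and emulation overhead would claw back a big chunk of the difference, but the shape of it looks something like this:)

```python
# Back-of-envelope only: a neuron's fire-and-reset cycle lives on the
# millisecond scale; a 1 GHz logic gate switches on the nanosecond
# scale. Both figures are rough assumptions for illustration.

neuron_cycle = 1e-3    # seconds: ~1 ms per fire/reset cycle
silicon_cycle = 1e-9   # seconds: ~1 ns per tick at 1 GHz

print(f"raw timescale gap: ~{neuron_cycle / silicon_cycle:,.0f}x")
# -> raw timescale gap: ~1,000,000x
```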


We humans like to make believe that we are somehow the apex of creation–and not just of creation, but of all possible creation. It pleases us to imagine that we are created in the image of some divine heavenly architect–that the universe and everything in it was made by some sapient being, that that sapient being is recognizable to us, and that that sapient being is like us. We like to tell ourselves that there is no limit to human imagination, that human intellect can understand and achieve anything, and so on.

Now, all of this is really embarrassingly self-serving. It’s also easy enough to deflate. The human imagination is indeed limited, though by definition limitations in the things you can conceive of tend to be hard to see, because you…can not conceive of things you can not conceive of. (As one person once challenged me, without apparent irony: “Name something the human imagination can’t conceive of!”)

But it’s relatively easy to find some of the boundaries of human imagination. For example:

• Imagine one apple. Just an apple, floating alone on a plain white background. Easy to do, right?
Imagine three apples, perhaps arranged in a triangle, floating in stark white nothingness. Simple, yes? Four apples. Picture four apples in your head. Got it?

Now, picture 17,431 apples in your head, each unique. Visualize all of them together, and make your mental image contain each of those apples separately and distinctly. Got it? I didn’t think so.

• Imagine a cube in your head. Think of all the faces of the cube and how they fit together. Rotate the imaginary cube in your head. Got it going? Good.

Now imagine a seventeen-dimensional cube in your head. Picture what it would look like rotating through seventeen-dimensional space. Got it?

The first example indicates one particular kind of boundary on our imaginations: our limited resolving power when it comes to holding discrete images in our imagination. The second shows another boundary: our imaginations are circumscribed by the limitations of our experiences, as perceived and interpreted through finite (and, it must be said, quite limited) senses. Quantum mechanics and astrophysics often pose riddles whose math suggests behaviors we have a great deal of difficulty imagining, because our imaginations were formed through the experiences of a very limited slice of the universe: medium-sized, medium-density mass-bearing objects moving quite slowly with respect to one another. Go outside those constraints, and we may be able to understand the math, but the reality of the way these systems work is, at best, right at the threshold of the limitations of our imaginations.
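
(The seventeen-dimensional cube isn’t just hard to picture; the numbers behind it are ferocious. A few lines of Python–nothing but counting–show what we’re asking our visual imagination to hold:)

```python
from math import comb

# An n-dimensional cube has 2^n corners, n * 2^(n-1) edges, and
# C(n, 2) independent planes of rotation. Compare the cube we evolved
# to see with the one we're asked to imagine.

def cube_stats(n):
    return 2 ** n, n * 2 ** (n - 1), comb(n, 2)

for n in (3, 17):
    corners, edges, planes = cube_stats(n)
    print(f"{n}-cube: {corners:,} corners, {edges:,} edges, "
          f"{planes} planes of rotation")
# 3-cube: 8 corners, 12 edges, 3 planes of rotation
# 17-cube: 131,072 corners, 1,114,112 edges, 136 planes of rotation
```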


Everyone who has ever owned a dog knows that dogs are capable of a surprisingly sophisticated sort of reasoning. Dogs understand that they are separate entities; they interact with other entities, such as other dogs and humans, in complex ways; they can differentiate between other living entities and non-living entities, for the most part (though I’ve seen dogs who are confused by television images); they have emotional responses that mirror, on a simple scale, human emotional responses; they are capable of planning, problem-solving, and analytical reasoning.

They can not, however, learn calculus.

No matter how smart your dog is, there are things it can not understand and will never understand because of the biological constraints on its brain. You will never teach a dog calculus; in fact, a dog is not capable of understanding what calculus is.

Yes, I know you think your dog is very smart. No, your dog can’t learn calculus. Yes, you can too, if you set your mind to it; the point here is that there are realms of knowledge unavailable to the entire species, because all dogs, no matter how smart they may be in comparison to other dogs, lack the necessary cognitive tools to get there.

The intelligence of every organism is circumscribed in part by that organism’s physical biology. And just as there are entire realms and categories of knowledge unavailable to a dog, so too are there realms of knowledge unavailable to us. What are they? I don’t know; I can’t see them. That’s exactly the point.


To get back to the idea of artificial intelligence: A generalized AI would in many ways not be subject to the same limitations we are. One nice thing about modeled brains that isn’t true of human brains is that we can easily tinker with them. The human brain is limited in the total number of neurons within it by the size and shape of the human pelvis; we can’t fit larger brains through the birth canal. We have, in essence, encountered a fundamental evolutionary barrier.

Similarly, we can’t easily make neurons faster; their speed is limited by the complex biochemical cascade of events which makes them fire (contrary to popular belief, neurons don’t communicate via electrical signals; they change state electrochemically, by the movement of charged ions across a membrane, and the speed with which a signal travels is dependent on the speed with which ions can propagate across the membrane and then be pumped back again). And brains are limited in how quickly they can learn new things by the speed with which neurons can grow new interconnections, which is pretty painful, really.

But a model of a brain? What if we double the number of neurons? Increase the speed at which they send signals? Increase the efficiency with which new connections form? These are all obvious and logical paths to explore.
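
(In a simulation, every one of those constraints is just a number somebody typed. Here’s a purely hypothetical sketch of what “tinkering” looks like–the figures are rough, commonly cited ballparks, not research data:)

```python
# Hypothetical illustration: in an emulated brain, biology's hard
# limits become editable parameters. Ballpark figures, not data.

model = {
    "neurons": 86_000_000_000,  # rough count for a human brain
    "update_hz": 1_000,         # ~1 kHz, the pace of biological firing
    "rewiring_rate": 1.0,       # relative speed of forming connections
}

def tinkered(m, neurons=2.0, speed=10.0, rewiring=5.0):
    """Scale the 'hardware' of the model; biology offers no such dials."""
    return {
        "neurons": int(m["neurons"] * neurons),
        "update_hz": int(m["update_hz"] * speed),
        "rewiring_rate": m["rewiring_rate"] * rewiring,
    }

print(tinkered(model))
```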

And the thing about generalized AI is that it’s so goddamn useful. We want it, and we’re working very hard toward it, because there are just so many things that our current, primitive computers are poor at, that generalized AI would be good at.

And one of those things, as it happens, is likely to be improving itself.


The first generalized AI will be a watershed. Even if it isn’t very smart, it can easily be put to the task of making AIs that are smarter. And smarter still. Hell, just advances in the underlying processor power of the computer beneath it–whatever that computer may look like–will probably make it smarter. Able to think faster, hold more information, remember more…and able to have whatever senses we give it, including senses our own physiology doesn’t have.

The first generalized AI might not be smarter than us, but subsequent ones will, oh yes. You can bank on that. And that soon presents an Outside Context Problem.
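
(The feedback loop is easy to caricature in a few lines. The growth rule below is an arbitrary assumption for illustration–nobody knows the real curve–but it shows why “slightly smarter each generation” compounds so quickly:)

```python
# Toy model of recursive self-improvement. The 50% gain per generation
# is an arbitrary assumption, not a prediction; the point is that any
# steady multiplicative gain compounds.

capability = 1.0  # generation 0: call it human-level, by fiat
for generation in range(1, 11):
    capability *= 1.5  # each AI designs a somewhat better successor
    print(f"generation {generation:2d}: {capability:5.1f}x human")
# By generation 10 the toy model is already ~57x where it started.
```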

Because how do we relate to a sapience that’s smarter than we are?

In transhumanist circles, this is called a singularity–a change so profound that the people before the singularity can not imagine what life after the singularity is like.

There have been many singularities throughout human history. The development of agriculture, the Iron Age, the development of industrialization–all of these created changes so profound that a person living in a time before these things could not imagine what life after these things is like. However, the advent of smart and rapidly-improving AI is different, because it presents a singularity and an Outside Context Problem all rolled up into one.

In past singularities, the fundamental nature of human beings and human intelligence has not changed. A Bronze Age human is not necessarily dumber than an Iron Age human. Less knowledgeable, perhaps, but not dumber. The Bronze Age human could not anticipate Iron Age technology, but if the two met, they would still recognize each other.

But a smarter-than-us AI is different, in the ways we are different from a dog. We would not–we cannot–understand the perception or experience of something smarter than we are, any more than a dog can understand what it means to be human. And that presents an interesting challenge indeed.

Civilizations tend not to survive contact with Outside Context Problems.


Which brings me, at last, to an epiphany that I had while I was walking with dayo in Chicago.

Transhumanism is the notion that human beings can become, with the application of intelligence and will, more than we are right now. I’ve talked about it a great deal in the past, and talked about some of the reasons I am a transhumanist.

But here’s a new one, and I think it’s important.

Strong AI is coming. It’s really only a matter of time. We are learning that our own intelligence is the result of physical processes within our brain, not the result of magical supernatural forces or spirits. We are working on applying the results of this knowledge to the problem of creating things that are not-us but that are smart like us.

Now, there are several ways we can approach this. One is by creating models of ourselves in computers; another is by using advances in nanotechnology and biomedical science to make ourselves smarter, and improve the capabilities of our wet and slow but still serviceable brains.

Or, we can create something not based on us at all; perhaps by using adaptive neural networks to model increasingly complex systems in a sort of artificial evolutionary system, trying things at random and choosing the smartest of those things until eventually we create something as smart as us, but self-improving and altogether different.
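
(The skeleton of that kind of evolutionary search fits in a few lines of Python. The “fitness” function here is a trivial stand-in I made up–a real system would score actual problem-solving behavior:)

```python
import random

# Skeleton of evolutionary search: mutate candidates at random, keep
# the "smartest." The fitness function is a toy stand-in; a real
# system would measure problem-solving ability.

def fitness(genome):
    return -sum(g * g for g in genome)  # toy goal: drive genes to zero

population = [[random.uniform(-5, 5) for _ in range(8)] for _ in range(30)]

for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]  # selection: keep the top third
    population = [
        [g + random.gauss(0, 0.3) for g in random.choice(survivors)]
        for _ in range(30)       # offspring: mutated copies of survivors
    ]

best = max(population, key=fitness)
print(f"best fitness after 100 generations: {fitness(best):.4f}")
```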

Regardless, we have a choice. We can make ourselves into this new whatever-it-is, or we can make something entirely independent from us.

However we make it, it will likely become our successor. Civilizations tend not to survive contact with Outside Context Problems.

If we are to be replaced–and I think, quite honestly, that that is only a matter of time as well–I would rather that we are replaced by us, by Humanity 2.0, than see us replaced by something that is entirely not-us. And I think transhumanism, refined down to its most simple essence, is the replacing of us by us, rather than by something that is not-us.

90 thoughts on “Teaching a Dog Calculus”

  1. If you haven’t read the Uplift series by David Brin, you really really need to. It tackles almost every issue you’ve raised (albeit in a different context).

    Thanks for writing this though, it’s well stated.

          • oo, nifty. It may take me a bit to work through that but I always love new stories. The challenge is going to be reading it online. It’s the one spot I’m still a bit of a luddite… I rather like my hardcopy when it comes to novels.

          • I love reading my books on my computer – the downside is the transportability. It’s rather inconvenient for me to unplug and lug my 17″ laptop into the bathroom!

            But I’m hoping for them to work the bugs out of the various handheld devices (like the Kindle) and create better, *free* ways of uploading already-existing and already-owned digital files. As a lifelong bookworm, this product is exciting – now for the price to lower and the compatibility issues to be dealt with!

          • “I love reading my books on my computer”

            I envy you.

            I hope to someday be able to read books comfortably on a monitor.

            I love reading print on a page.

  2. As a Galactica fan, my immediate thought is “So say we all!”

    Great post, Tacit. I love transhumanism, and I find myself persuaded by your argument. I would much prefer to see Humanity 2.0.

    • I sometimes think there’s a thread of anti-transhumanism in BSG; the premise–that sapient machines are bad, that there are limits to the level of technology it’s safe to explore–seems a bit questionable to me. I do dig the consciousness-transferral thing the Cylons have got going on, though.

      • I would honestly frame the first theme differently.

        “What is a person?”

        Humans in wartime dehumanize the enemy. They’re not people-they’re cylons! And this becomes harder & harder over time. Especially when they have a (known) cylon officer.

        I don’t perceive a theme of “limits to technology” in the show. I mean, the show has a lovely retro look that they constructed by saying “the cylons have better electronic warfare.” It might just be that it’s there and I’m refusing to see it.

        But yeah, humans (& possibly the cylons) on the show seem trapped in some sort of cyclical story with no control over the roles foisted upon them.

  3. I, too, would much rather see Humanity 2.0, but I’m not sure that I’m entirely swept up in your conviction that this AI is coming.

    A random thought – it wouldn’t be called AI any longer though, would it? Intelligence, however created or acquired, would only be “artificial” if it had something natural to compare to. One couldn’t argue that the intelligence gained by a computer truly capable of learning – of THINKING – was artificial at all. Not if it was created to mirror our brains, it would simply mean that it was a ‘siliconical’ computer and we were biological computers – materials aside, the intelligence itself, per your post, would be just as natural.

    I’ll think more on this when my brain is less pickled. Nice post – I enjoyed reading.

    • > I’m not sure that I’m entirely swept up in your conviction that this AI is coming.

      I’m with tacit here. I work with the guy who came up with the high-order pattern detection algorithm behind data mining. That tech seriously scares me, and we have only just begun to explore its possibilities. It can find subtle and complex patterns in gigantic databases that humans just can’t detect.

      I also agree with jtroutman below that the software isn’t there yet, and may not be anytime soon. I think in some ways it’s a chicken-or-egg thing — we need that watershed tacit writes of. Only a very few of us are good enough coders to attempt such a project, and our organizational systems are ill-suited to support their work.

      ~r

      • Pickled and just awake, requests for clarification most welcome.

        It’s not the coding that doesn’t sweep me up. I completely believe that computers can do (and will be able to do even more, someday, eventually) more than humans. I can’t detect the pattern of cloud movement the world over, but a space shuttle with a camera can. (I realize the vast difference here, but to me the idea is the same.) That algorithm is its only perspective, its only “thought.” Computers can out-do humans in all sorts of narrow-focus realms. Experiments have been done on systems beyond Deep Blue to see if a wider focus than chess (reduced mostly to a highly complex probability model) is possible. One had topics from Shakespeare to Wit and the judges thought the Shakespeare expert was the computer program “because no person could know that much.” Finding patterns, yes. Mathematically related realms, yes. Narrow-focus topics, sometimes. The vastness of the human intelligence? I really don’t think so.

        Philosophically speaking, recreating the human brain with computer components and programming is what loses me. If I were a materialist, this would be a “duh” moment. However, I’m not, and it’s not. I realize that this has amusing irony to it, in that I can’t comprehend how a replica of my brain would be just like me, because that’s exactly what I think tacit is getting at with the Outside Context Problem. Yes, I’m smiling at myself right now because of it. Yet I still don’t think that any replica of me, no matter how perfect, would be me. The instant it was turned on, it would have its perceptions, and I would have mine. It would have its location in the world, its mechanics, and its shortcomings – and I, mine.

    • re: AI

      “it wouldn’t be called AI any longer though, would it? Intelligence, however created or acquired, would only be “artificial” if it had something natural to compare to.”

      Well, most things that are called artificial are only imitations, such as artificial flowers, artificial teeth, “artificial barriers against women and minorities” (an example from TheFreeDictionary.com). Artificial Intelligence, if successful, will not merely be an imitation, it will be something genuine, in its own right (I’ve heard the term actual intelligence).

      Of course, artificial can mean “something created by man; produced rather than natural” (an artificial sweetener is still a suitable sweetener). Okay, but suppose that, as Franklin suggests, AI creates AI—intelligences created by intelligences NOT human. Suppose this trend continues indefinitely. Are these intelligences truly artificial anymore?

    • In this sense, it would be “artificial” as in “man-made,” rather than “artificial” as in “not real.” The intelligence would be just as natural, certainly, though I bet there’s a lot of folks who wouldn’t see it that way.

      “They’re machines! We can do whatever we like to them! They don’t have SOULS!” That sort of thing.

      I personally subscribe to personhood theory, the ethical system that says any class of things that is sapient is a person and has all the rights and privileges thereof, be it human beings or AIs or augmented animals or whatever. I suspect, though, that will be a minority view for the foreseeable future.

      • I’m still inclined to argue that it isn’t man-made. If it is gaining intelligence through patterns and conclusions drawn on its own, that wouldn’t seem to me to be “man-made” anymore than my intelligence is “parent-made.”

        If it actually had intelligence (which is what I’m not convinced it will have) then I would agree that it would have rights and privileges. I’m not sure I could stand myself if I thought that trees had spirits/souls and in the same mind said that honestly intelligent machines didn’t 😉

  4. I often smell a buried assumption: that the species Homo Sapiens is somehow unsuited to this planet. As if the civilization we were born into is naturally the highest and best example of what humans are capable of, and if the civ isn’t good enough, then neither is the species. In order to somehow become adaptive to this place from which we originated, we have to change, not as individuals, not as a society, but at the DNA level, as a species.

    The reason a dog can’t do calculus is because there’s no need for a dog to do calculus. It’s not a feature of the dog’s environment.

    In a similar way, there’s a whole set of skills that human beings don’t really need to have as a species in order to live on this planet.

    The civilization might want us to have these skill sets, and the civ might want us to believe that its requirements are the same as the planet’s. But I sense a con job.

    There’s a lot for me to like about transhumanism, but I hate the idea that it’s compulsory for us to somehow transcend our human selves before we can really be present to the historical moment. It sounds too much like a corporate agenda wanting more and better worker drones for itself.

    Is there even such a thing as wilderness in a transhuman future?

    • re: AI transcendence

      “There’s a lot for me to like about transhumanism, but I hate the idea that it’s compulsory for us to somehow transcend our human selves before we can really be present to the historical moment. It sounds too much like a corporate agenda wanting more and better worker drones for itself.”

      I think a lot of people really do believe that we, as a species, are obligated to transcend (transcendentalists believe this, but only in the context of our “baser instincts”). This is the same kind of insistence that radical environmentalists roll with (“we have a responsibility to the planet”).

      So, yes, there are those who believe that we must “move up.” But there are also those who simply want to survive. That’s me.

      • Re: AI transcendence

        I think a lot of people really do believe that we, as a species, are obligated to transcend

        If this transcendence is something deep and profound and something to take very, very seriously, then it means that the barrier to transcend is also something pretty powerful. And from my perspective, that’s a lot of romanticized bullshit.

        My truth is that we’re still in the stone age in so very many ways, and we distract ourselves with a lot of shiny, convinced, usually, that we’re deeper into this than we really are.

        Seeing the world as a kludgy mess held together with chewing gum and baling wire can be very depressing, but it can also reveal a lot of trivial room for improvement.

        Dispossessing ourselves of the idea that this world is anything special: it makes transcendence not only possible, but doable for *everybody*.

        Oh, and this notion that most people are morons and only a few of us really get it- that’s also elitist crap.

        • Re: AI transcendence

          We are still in the Stone Age. That’s precisely the point! We’re amazingly primitive, bound by physiology that works well enough but not splendidly, and using a technology that is still very crude.

          But the rate of change, both in our technology and in our understanding of the principles of the physical world, is increasing exponentially. At some point, we will reach a threshold at which the rate of change becomes so great that human society itself changes in ways impossible for us to predict right now. I suspect that point is coming sooner rather than later.

    • As if the civilization we were born into is naturally the highest and best example of what humans are capable of, and if the civ isn’t good enough, then neither is the species.

      I disagree. I don’t think that it’s a basic premise of transhumanism that our current civilization is as advanced as humans are capable of. Instead, I think that most transhumanists believe that we are close to the point where civilization is as advanced as humans are going to make it, not because we are incapable of further advances but rather because we’ll no longer be the primary drivers of innovation. Once smarter-than-human general AI is developed we will, by definition, no longer be the intellectual top dogs on the planet. Add to that the accelerating self-improvement cycle (AI A is smart enough to develop the more sophisticated AI B, which is now smart enough to develop the even more sophisticated AI C, etc.) and it doesn’t take long before the contributions of unaugmented humans fade into the background.

      If smarter-than-human general AI were never developed then I’ve little doubt that human society could continue to advance for quite some time before we truly hit a cognitive wall that prevents us from going any further, but even the best efforts of Bill Joy and the Unabomber aren’t going to stop AI development, nor do I think they should. As an Extropian, I’m concerned with the more realistic, beneficial, and achievable (IMHO) goal of ensuring that there remains a satisfactory place for those who make the conscious choice to remain “natural” humans. There are already organizations out there whose goal it is to ensure that Homo Sapiens’ replacement, whether it be Homo Excelsior or a completely novel AI, has regard for the individual right of self-determination and is “friendly” to its biological forebears.

    • I often smell a buried assumption: that the species Homo Sapiens is somehow unsuited to this planet. As if the civilization we were born into is naturally the highest and best example of what humans are capable of, and if the civ isn’t good enough, then neither is the species. In order to somehow become adaptive to this place from which we originated, we have to change, not as individuals, not as a society, but at the DNA level, as a species.

      We are, like all organisms, a product of our environment, and our nearly complete domination of every ecological niche we’ve moved into shows that we’re very well suited at what we do.

      The reason a dog can’t do calculus is because there’s no need for a dog to do calculus. It’s not a feature of the dog’s environment.

      If you mean that the dog doesn’t live in an environment that requires it to develop the kind of abstract cognitive abilities necessary to understand calculus, that’s true. Nevertheless, the point here is that the dog’s brain imposes a limit on how deeply the dog can understand the universe–and our own brains also impose a limit on how deeply we can understand the universe. Eventually, we will encounter a threshold past which we are not capable, because of constraints on our biology, of learning more. Since we as a species have a thirst to learn more, and a tendency to use our knowledge to adapt ourselves, I suspect we (or at least, some of us) will, if given the chance, extend our cognitive capacity.

      We can exist on the planet just fine as we are; we can exist on the planet just fine with a Medieval, pre-industrial technology, or even as hunter-gatherers. But continued technological innovation is part of the human condition. It’s what we do.

      There’s a lot for me to like about transhumansim, but I hate the idea that it’s compulsory for us to somehow trancend our human selves before we can really be present to the historical moment. It sounds too much like a corporate agenda wanting more and better worker drones for itself.

      I’m not sure I’m understanding what you’re saying here. Is it compulsory for every individual to want to become a transhumanist? No, of course it isn’t! However, the drive to learn more is wired deeply into us, and the drive to use what we know to extend our understanding, and our control, over ourselves is also wired deeply into us.

      Corporations don’t have anything to do with it; they are just relatively recent economic constructs, and there’s no reason to assume they’ll last any longer than, say, feudal systems, or any other particular economic system has.

      Is there even such a thing as wilderness in a transhuman future?

      Of course there is. In fact, radical new technologies like nanoscale assemblers would likely solve what has until now been one of the limiting factors of industrial civilization.

      Right now, our technology is appallingly crude, and has changed little in kind since the first flint knives. We dig up something and then whack at it until it’s in the shape we want; that’s how we make tools. The cost is high; we’re limited by energy and by the availability of raw resources.

      General-purpose assemblers offer a much cheaper and more efficient way to create things–from a molecular level. From a standpoint of both cost and efficiency, it’s far better than present industrial techniques–no more digging up resources from the ground, tearing apart the environment, to make things that it then becomes easier and cheaper to bury in a landfill than remake.

  5. >Similarly, we can’t easily make neurons faster.

    I misparsed this sentence at first!

    You meant “we can’t increase the speed of neurons” and that’s true.

    I misread it as “we can’t speed up the production of neurons” and it turns out I had something interesting to say about that. You can do that! There’s great research by Elizabeth Gould on the subject (article here) and it turns out when you make new neurons, you get improved mood, cognitive development, and other tasty stuff.

    But then I did a double-take and realized I misread the subject.

    And then I decided to comment anyway because we all want better brainz.

  6. P.P.S. I used to give myself headaches in high school, trying to visualize a four-dimensional cube. I think I got pretty close sometimes, but boy howdy did it hurt.

  7. Interesting and well put, as always. You do sound somewhat like Vernor Vinge, I have to say.

    I disagree with the timeframe, as pretty much every single target date ever given by AI researchers for “we will have Y AI in X years” has been missed.

    Some assessments on the computing power needed for a “real AI” from the early 1990s said we need 3 to perhaps 10 orders of magnitude greater computing power than was available at the time. Well, ~15 years later, we have managed about 3 orders of magnitude. The rate of progress of computing power is slowing down, and it is expected to continue to do so.

    Another issue is that we don’t have really good software and compilers to deal with multiple processors working together yet, either. Yes, there are lots of systems for dividing a problem into lots of small pieces and having each node or processor work on a small piece of a workload, but that is not the same. Additionally, currently the latency between each computing element is very high (compared to being on the same CPU) even on the best systems.

    Based on the decreasing rate of computing power advancement, plus the additional complexity of modeling the brain that will appear as we start to actually do it, I think it will be 50 years or more before a “human equivalent brain” is simulated. Not that I would mind if it was sooner, of course.

    • All the AI timeframes have been missed, but I think it’s interesting that in this case, the people involved aren’t AI researchers. (Or, if they are, they don’t see themselves that way.)

      The Swiss team isn’t actually setting out to create AI. Their goal is to make a dynamic model of a human brain in a computer, which I have a feeling will result in an AI, but they’re not doing it for that purpose; they’re doing it because if they can create a perfect, working model of a human brain all the way down to the cellular level, the idea goes, they can use it to model new psychoactive drugs and anticipate the behavior of those drugs without human trials. Though if the model has that kind of fidelity, I suspect it may, for all intents and purposes, be human.

      And that raises a whole ethical can o’ worms that I don’t know if the researchers have considered.

      They’re currently using a BlueGene/L supercomputer, on which they’ve successfully modeled a dynamic rat neocortex in real-time. The BlueGene computers use a number of novel techniques to reduce latency between different processors. IBM’s currently building the BlueGene/L’s successor, the BlueGene/P, which is scheduled to go online next year; they’re anticipating that it will be at least ten times faster than BlueGene/L, and possibly more, in real-world applications.

      In theory, a BlueGene/L has roughly the same raw computing horsepower as a human brain, though the architecture is vastly different and the computer’s nowhere near being intelligent on its own. If that’s true, though, the BlueGene/P will be at least an order of magnitude more capable than a human brain in terms of raw processing capacity, which leaves plenty of overhead for emulation. 🙂

  8. Apologies to Stephen Wright

    Yeah, you know how some dogs compulsively thump a hind leg when you scratch them in the right spot? Well, one day I found out that my dog would do different things if you scratched him in different places.

    Eventually, I had him doing my taxes. But I hated scratching him there.

    ~r

  9. Deep Blue, the supercomputer that beat human grandmaster Garry Kasparov in a much-publicized tournament, is by modern standards a cripple; ordinary desktop PCs today are more powerful.
    I’m not so sure about that. This was only 11 years ago, Deep Blue included custom hardware to accelerate some of IBM’s particular chess-related functions, and the rate of advance in CPUs has slowed down quite a bit since around 2003 when Intel discovered their Pentium-4’s didn’t get any faster at 90 nanometers.

    Computers are fast.
    At completely different things, sure. Neural net training is pretty slow.

    Modeling a brain in a computer removes many of the constraints
    And imposes a massive emulation overhead. Modeling a brain physically seems a lot more plausible than doing it in software. But if we’re really good at it, we might be able to get a couple orders of magnitude more power, for an awful lot of money, but still wind up with fundamentally the same sort of thing we’re modeling. A silicon brain that thinks the way we do is not necessarily going to get the advantages a computer has, or be able to conceive of anything outside of its own experience, either.

    One nice thing about modeled brains that isn’t true of human brains is that we can easily tinker with them.
    No, we can’t. Even simple software neural nets aren’t something where you can really identify which part does what. And a silicon brain that operates like our brains will have a consciousness that needs to be respected just like a person.

    contrary to popular belief, neurons don’t communicate via electrical signals; they change state electrochemically, by the movement of charged ions across a membrane, and the speed with which a signal travels is dependent on the speed with which ions can propagate across the membrane and then be pumped back again
    I guess I knew that, but that just makes the software emulation overhead much bigger, and means it can’t be physically modeled in anything solid-state.

    Strong AI is coming. It’s really only a matter of time.
    I’m not remotely convinced. It’s certainly possible that a neural system isn’t the only way to create thought, and that we may come up with another way. Within neural systems, though, we’re talking about a massive outlay of time and money for something that’s only going to be a strange, very smart person, that may or may not be inclined to act in our interests. Where’s the incentive for someone to do that?

  10. re: AI

    It would definitely seem that we will need to, at some point, replace all biological components with synthetic, for the simple reason that synthetics are more durable and more easily replaced than biological components, which, as we’ve learned (painfully), are neither durable nor easily replaced. Of course, there will be those who not only refuse the “upgrades” but deny others the right to have them, based on the belief that we somehow require bloody, mushy components to remain “human.” (The soul, it is argued, cannot exist within a machine, no matter how lifelike it seems. Are transhumanists denied the right to believe they have no soul?) The issue, it would seem to me, is one of the right to life, liberty, and the pursuit of happiness. Denied the right to upgrade—to survive the deterioration of our biological bodies—we’re denied the right to life.

    But I digress. My reply doesn’t really address your thoughts on AI, but it made me think, once again, just how inevitable upgrades must be.

    • Re: AI

      I’m up for that, and I think it presents an interesting sort of Ship of Theseus problem.

      Suppose biomedical nanotech reaches the stage of development at which cellular-level repair is possible. This is not outside the realm of possibility, and indeed might even happen within the lifetime of people who are alive today.

      And now suppose that biomedical nanotech exists which can not only perform cellular-level repair, but can replace neurons as they die or are damaged with synthetic equivalents that are more durable, but otherwise behave the same way and are wired up in the same patterns as the dead cells they replaced.

      Since brain cells die all the time (and are, mostly, not replaced), a person filled with such nanomachines would, as time goes on, gradually have parts of his brain replaced with synthetic analogues. The process would take many decades before a significant number of his brain cells had been replaced, and presumably his identity would be preserved during that time; after all, as things stand now, our identities are preserved even when those neurons are lost completely. After enough time had passed, his brain would be entirely synthetic. Would he still be the same person?

      • Re: AI

        “Suppose biomedical nanotech reaches the stage of development at which cellular-level repair is possible.”

        I’ve considered this for a long time, and indeed I do think it’s only a matter of time before such technology is possible/available, but I’m inclined to believe that synthetics, as you suggested, can do a better job, in much the same way that steel enables a bridge builder to do far more than he ever could with wood.

        “After enough time had passed, his brain would be entirely synthetic. Would he still be the same person?”

        I’m not the same person I was a year ago, or even a week ago. I only seem to be: to others, because I’ve not changed much; to myself, because my memories give me a sense of continuous identity (erase those memories, or replace them, and I’m a completely new agent).

      • Re: AI

        After enough time had passed, his brain would be entirely synthetic. Would he still be the same person?
        Just as in the Ship of Theseus problem, the answer is sort-of (if the philosophers would only admit it).

        But really, as long as I maintain a continuous sense of self, I would consider that to be me. The parts are not terribly important.

        • Re: AI

          That’s about where I’m at–if the identity is preserved continuously, then the infrastructure doesn’t really matter that much.

          I do, however, see a distinct difference between that, and the system that the Swiss guys are using–they take a subject (so far, a rat), kill it, disassemble the brain on a very clever bit of machinery that peels it one cell at a time and maps all the neurons and their connections, then rebuild it in their computer model. I’d be all for a system that replaced dead or dying brain cells continuously, but not so much for being killed and then rebuilt synthetically.

          • Re: AI

            I’d want to be damn sure it was reliable. Even then, it wouldn’t be quite the same, but I’d probably go for it if it would give me a lot more time and I’d otherwise be dying soon.

            Have they figured out, yet, if memory is stored entirely in the layout and connections of the neurons? I’d hate to be rebuilt like that and then find out my memory was really in electrical charges that were lost.

          • Re: AI

            Long-term memory is definitely stored in the pattern of connections. That’s been demonstrated with experiments in which pigs and other animals are taught to perform tasks, then killed and cooled to near freezing with cryoprotectants in place of their blood, held there for a while, then revived. Our ability to do this is becoming fairly sophisticated (it’s of obvious medical usefulness, especially in cases of extreme trauma), and the animals show no loss of memory or cognitive impairments afterward.

  25. Which brings me, at last, to an epiphany that I had while I was walking with dayo in Chicago.

    Not to be a prick, but you realize that you and I have had that conversation and arrived at the same conclusion (“Mankind will eventually be supplanted, but we have the unique opportunity to become our own replacements.”) at least a couple of years ago? 🙂

    Either way, great post and a wonderful summary of the non-intuitive ramifications of AI. As with most of your posts, it should be required reading for anyone not wanting to be relegated to an Amish-like “natural preserve” in the future while the rest of civilization moves on.

    Something that you hinted at but didn’t explicitly state is the idea of Weak Superintelligent AI vs Strong Superintelligent AI. One of the big advantages of uploading is that you’re moving yourself into a substrate that can take advantage of the much more rapid advances in artificial computational hardware vs the relatively static wetware. An upload or other digital model of a human brain that takes advantage of increased processing speed without otherwise modifying its basic structure and functions is a weak superintelligent AI. It’s just as smart as a human but with a faster subjective experience of time, and thus still suffers from the same fundamental limitations of human cognition. An uploaded dog that’s running at a thousand times the speed of a meat dog brain would absolutely be superintelligent by canine standards, but would still be completely unable to learn calculus.

    Conversely, your example of increasing the complexity and storage capacity of an upload opens up the possibility of overcoming the inherent architectural limitations of human cognition, and thus creating a strong superintelligent AI. IMHO, that’s when things get really interesting…

    • Did we? Well, hmm. I blame the alcohol; it had a tendency to fritz out my clones’ telemetry systems before the rev. 127 patch.

      Seriously, though, the distinction between weak and strong AI is an important one, and I doubt that we’ll see the arrival of weak AI without strong AI soon after. If we build an AI from a bottom-up approach by emulating a brain in a computer, questions like “what happens if we increase the number of neurons in the prefrontal cortex by ten percent?” and “what happens if we increase the number of connections between neurons dramatically?” seem to be the next logical step. I’d be very surprised if experiments like that aren’t among the first things we do once we have a model that works.

      So I find it unlikely that we’ll end up with weak superintelligent AI but not strong superintelligent AI. And strong superintelligent AI is without question an Outside Context Problem.
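
      To make that concrete, here’s a minimal sketch of the kind of perturbation experiment I mean. The random graph, the wiring rule, and the grow() step are all invented for illustration; this is not the Swiss team’s actual model or data format, just the shape of the experiment.

      ```python
      import random

      # Minimal sketch of a "perturb the emulation" experiment: a random
      # directed graph stands in for a mapped connectome. The sizes, the
      # wiring rule, and the grow() step are invented for illustration.

      def make_connectome(n_neurons, p_connect, seed=0):
          """Wire each ordered pair of distinct neurons with probability p_connect."""
          rng = random.Random(seed)
          return {i: [j for j in range(n_neurons) if j != i and rng.random() < p_connect]
                  for i in range(n_neurons)}

      def grow(connectome, extra_fraction, p_connect, seed=1):
          """Add extra_fraction more neurons wired by the same rule --
          the 'ten percent more neurons in the prefrontal cortex' experiment."""
          rng = random.Random(seed)
          n_old = len(connectome)
          n_new = int(n_old * (1 + extra_fraction))
          grown = {i: list(targets) for i, targets in connectome.items()}
          for i in range(n_new):
              grown.setdefault(i, [])
              for j in range(n_new):
                  # only wire pairs that involve at least one new neuron
                  if i != j and (i >= n_old or j >= n_old) and rng.random() < p_connect:
                      grown[i].append(j)
          return grown

      base = make_connectome(1000, 0.01)
      bigger = grow(base, 0.10, 0.01)
      print(len(base), sum(map(len, base.values())))      # neurons, synapses before
      print(len(bigger), sum(map(len, bigger.values())))  # neurons, synapses after
      ```

      The interesting part, of course, isn’t the bookkeeping above; it’s running the dynamics on the grown network and seeing what changes. But the bookkeeping is the easy, obvious first step, which is exactly why I expect these experiments to happen almost immediately.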

  27. As if the civilization we were born into is naturally the highest and best example of what humans are capable of, and if the civ isn’t good enough, then neither is the species.

    I disagree. I don’t think that it’s a basic premise of transhumanism that our current civilization is as advanced as humans are capable of. Instead, I think that most transhumanists believe that we are close to the point where civilization is as advanced as humans are going to make it, not because we are incapable of further advances but rather because we’ll no longer be the primary drivers of innovation. Once smarter-than-human general AI is developed we will, by definition, no longer be the intellectual top dogs on the planet. Add to that the accelerating self-improvement cycle (AI A is smart enough to develop the more sophisticated AI B, which is now smart enough to develop the even more sophisticated AI C, etc.) and it doesn’t take long before the contributions of unaugmented humans fade into the background. (A toy model of this compounding cycle is sketched at the end of this comment.)

    If smarter-than-human general AI were never developed then I’ve little doubt that human society could continue to advance for quite some time before we truly hit a cognitive wall that prevents us from going any further, but even the best efforts of Bill Joy and the Unabomber aren’t going to stop AI development, nor do I think they should. As an Extropian, I’m concerned with the more realistic, beneficial, and achievable (IMHO) goal of ensuring that there remains a satisfactory place for those who make the conscious choice to remain “natural” humans. There are already organizations out there whose goal is to ensure that Homo Sapiens’ replacements, whether they be Homo Excelsior or completely novel AIs, have regard for the individual right of self-determination and are “friendly” to their biological forebears.
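
    (Returning to the self-improvement cycle mentioned above: it’s easy to caricature in a few lines of Python. The improvement factor k is pure invention; the only point is that any cycle with k > 1 compounds on itself, while k <= 1 fizzles.)

    ```python
    # Toy model of the recursive self-improvement cycle: each generation
    # designs its successor, and design ability is assumed to scale with
    # capability. The factor k is made up; only its relation to 1 matters.

    def generations(capability, k, steps):
        """Capability of each successive AI generation."""
        history = [capability]
        for _ in range(steps):
            capability *= k        # successor is k times as capable
            history.append(round(capability, 3))
        return history

    print(generations(1.0, 1.5, 10))  # k > 1: runaway compounding
    print(generations(1.0, 0.9, 10))  # k < 1: the cycle fizzles out
    ```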

  28. Interesting post. I just skimmed it, as my brain is not up to processing all that right now. But from what I got, it makes sense.

    RE: your case of the hornies. I’m home in Atlanta for the summer. I’d love to grab lunch and chat if you’re up for it.
    And your cat is WEIRD.

      • I’m in Roswell (not technically Atlanta, but whatever), but I can meet midway or closer to you if it’s more convenient. My schedule is pretty open.
        Email me at la.femme.nika AT gmail DOT com or AIM: thenika929 and we can figure something out.

  30. Re: AI transcendence

    I think a lot of people really do believe that we, as a species, are obligated to transcend

    If this transcendence is something deep and profound and something to take very, very seriously, then it means that the barrier to transcend is also something pretty powerful. And from my perspective, that’s a lot of romanticized bullshit.

    My truth is that we’re still in the stone age in so very many ways, and we distract ourselves with a lot of shiny, convinced, usually, that we’re deeper into this than we really are.

    Seeing the world as a kludgy mess held together with chewing gum and baling wire can be very depressing, but it can also reveal a lot of trivial room for improvement.

    Dispossessing ourselves of the idea that this world is anything special: it makes transcendence not only possible, but doable for *everybody*.

    Oh, and this notion that most people are morons and only a few of us really get it–that’s also elitist crap.

  31. It’ll be more general, though it’s the family model I’ve been thinking about the most lately. But I do intend to talk about a wide variety of different models of relationship.

  33. I often smell a buried assumption: that the species Homo Sapiens is somehow unsuited to this planet. As if the civilization we were born into is naturally the highest and best example of what humans are capable of, and if the civ isn’t good enough, then neither is the species. In order to become adaptive to this place from which we originated, we have to change, not as individuals, not as a society, but at the DNA level, as a species.

    We are, like all organisms, a product of our environment, and our nearly complete domination of every ecological niche we’ve moved into shows that we’re very well suited to what we do.

    The reason a dog can’t do calculus is that there’s no need for a dog to do calculus. It’s not a feature of the dog’s environment.

    If you mean that the dog doesn’t live in an environment that requires it to develop the kind of abstract cognitive abilities necessary to understand calculus, that’s true. Nevertheless, the point here is that the dog’s brain imposes a limit on how deeply the dog can understand the universe–and our own brains also impose a limit on how deeply we can understand the universe. Eventually, we will encounter a threshold past which we are incapable, because of the constraints of our biology, of learning more. Since we as a species have a thirst to learn more, and a tendency to use our knowledge to adapt ourselves, I suspect we (or at least, some of us) will, if given the chance, extend our cognitive capacity.

    We can exist on the planet just fine as we are; we can exist on the planet just fine with a Medieval, pre-industrial technology, or even as hunter-gatherers. But continued technological innovation is part of the human condition. It’s what we do.

    There’s a lot for me to like about transhumanism, but I hate the idea that it’s compulsory for us to somehow transcend our human selves before we can really be present to the historical moment. It sounds too much like a corporate agenda wanting more and better worker drones for itself.

    I’m not sure I’m understanding what you’re saying here. Is it compulsory for every individual to want to become a transhumanist? No, of course it isn’t! However, the drive to learn more is wired deeply into us, and the drive to use what we know to extend our understanding, and our control, over ourselves is also wired deeply into us.

    Corporations don’t have anything to do with it; they are just relatively recent economic constructs, and there’s no reason to assume they’ll last any longer than, say, feudal systems, or any other particular economic system has.

    Is there even such a thing as wilderness in a transhuman future?

    Of course there is. In fact, radical new technologies like nanoscale assemblers would likely solve what has until now been one of the limiting factors of industrial civilization.

    Right now, our technology is appallingly crude, and has changed little in kind since the first flint knives. We dig up something and then whack at it until it’s in the shape we want; that’s how we make tools. The cost is high; we’re limited by energy and by the availability of raw resources.

    General-purpose assemblers offer a much cheaper and more efficient way to create things–from the molecular level up. From a standpoint of both cost and efficiency, it’s far better than present industrial techniques–no more digging up resources from the ground, tearing apart the environment, to make things that it then becomes easier and cheaper to bury in a landfill than to remake.

  34. Re: AI transcendence

    We are still in the Stone Age. That’s precisely the point! We’re amazingly primitive, bound by physiology that works well enough but not splendidly, and using a technology that is still very crude.

    But the rate of change, both in our technology and in our understanding of the principles of the physical world, is increasing exponentially. At some point, we will reach a threshold at which the rate of change becomes so great that human society itself changes in ways impossible for us to predict right now. I suspect that point is coming sooner rather than later.

  36. I sometimes think there’s a thread of anti-transhumanism in BSG; the premise–that sapient machines are bad, that there are limits to the level of technology it’s safe to explore–seems a bit questionable to me. I do dig the consciousness-transferral thing the Cylons have got going on, though.

  37. In this sense, it would be “artificial” as in “man-made,” rather than “artificial” as in “not real.” The intelligence would be just as natural, certainly, though I bet there’s a lot of folks who wouldn’t see it that way.

    “They’re machines! We can do whatever we like to them! They don’t have SOULS!” That sort of thing.

    I personally subscribe to personhood theory, the ethical system that says any class of things that is sapient is a person and has all the rights and privileges thereof, be it human beings or AIs or augmented animals or whatever. I suspect, though, that will be a minority view for the foreseeable future.

  38. All the AI timeframes have been missed, but I think it’s interesting that in this case, the people involved aren’t AI researchers. (Or, if they are, they don’t see themselves that way.)

    The Swiss team isn’t actually setting out to create AI. Their goal is to make a dynamic model of a human brain in a computer, which I have a feeling will result in an AI, but they’re not doing it for that purpose; they’re doing it because if they can create a perfect, working model of a human brain all the way down to the cellular level, the idea goes, they can use it to model new psychoactive drugs and anticipate the behavior of those drugs without human trials. Though if the model has that kind of fidelity, I suspect it may, for all intents and purposes, be human.

    And that raises a whole ethical can o’ worms that I don’t know if the researchers have considered.

    They’re currently using a BlueGene/L supercomputer, on which they’ve successfully modeled a dynamic rat neocortex in real-time. The BlueGene computers use a number of novel techniques to reduce latency between different processors. IBM’s currently building the BlueGene/L’s successor, the BlueGene/P, which is scheduled to go online next year; they’re anticipating that it will be at least ten times faster than BlueGene/L, and possibly more, in real-world applications.

    In theory, a BlueGene/L has roughly the same raw computing horsepower as a human brain, though the architecture is vastly different and the computer’s nowhere near being intelligent on its own. If that’s true, though, the BlueGene/P will be at least an order of magnitude more capable than a human brain in terms of raw processing capacity, which leaves plenty of overhead for emulation. 🙂
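
    That “roughly the same raw horsepower” claim is the kind of thing you can sanity-check on the back of an envelope. Every brain figure below is a rough assumption (synapse counts alone vary by an order of magnitude in the literature), and counting one synaptic event as one floating-point operation is itself a leap, so this gauges scale and nothing more.

    ```python
    # Back-of-envelope: brain "throughput" as synaptic events per second
    # versus BlueGene/L's peak FLOPS. Every figure is a rough assumption,
    # and a synaptic event is not literally one FLOP.

    NEURONS = 1e11                  # often-cited ballpark
    SYNAPSES_PER_NEURON = 1e3       # low-end estimate; some put it near 1e4
    AVG_FIRING_RATE_HZ = 10         # assumed average spike rate
    BLUEGENE_L_PEAK_FLOPS = 360e12  # roughly, for the full machine

    brain_events = NEURONS * SYNAPSES_PER_NEURON * AVG_FIRING_RATE_HZ
    print(f"brain:      ~{brain_events:.0e} synaptic events/s")
    print(f"BlueGene/L: ~{BLUEGENE_L_PEAK_FLOPS:.0e} FLOPS peak")
    print(f"ratio:      ~{brain_events / BLUEGENE_L_PEAK_FLOPS:.1f}x")
    ```

    On the low-end synapse estimate, the two land within an order of magnitude of each other; bump the synapses-per-neuron to 1e4 and the brain pulls ahead again by about the margin the BlueGene/P is supposed to close.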

  39. I’m still inclined to argue that it isn’t man-made. If it is gaining intelligence through patterns and conclusions drawn on its own, that wouldn’t seem to me to be “man-made” anymore than my intelligence is “parent-made.”

    If it actually had intelligence (which is what I’m not convinced it will have) then I would agree that it would have rights and privileges. I’m not sure I could stand myself if I thought that trees had spirits/souls and in the same mind said that honestly intelligent machines didn’t 😉

  47. I would honestly frame the first theme differently.

    “What is a person?”

    Humans in wartime dehumanize the enemy. They’re not people–they’re Cylons! And this becomes harder & harder over time. Especially when they have a (known) Cylon officer.

    I don’t perceive a theme of “limits to technology” in the show. I mean, the show has a lovely retro look that they constructed by saying “the Cylons have better electronic warfare.” It might just be that it’s there and I’m refusing to see it.

    But yeah, humans (& possibly the cylons) on the show seem trapped in some sort of cyclical story with no control over the roles foisted upon them.
