Your daily dose of teh ky00t

This is Liam.

Liam is cursed with that same irresistible urge that gave us hairless naked apes the iPod, the steam engine, and nearly complete domination over all the earth: curiosity. If I place a box anywhere in my apartment, even if it’s simply a bottled water box that is set to go out with the trash, Liam will not rest until he has been over, under, around, and through it. He’s compelled, you see. He loves novelty, and he wants to know what it’s all about.

He’ll usually sleep in any box I put on or near the floor, at least for a few days. When it ceases to be novel and interesting, he grows tired of it and returns to sleeping at the foot of the bed with me. Like us naked apes, he’s curious and also fickle in his attentions.


Curiosity is a pretty sophisticated trait for an animal whose brain is smaller than my fist and not very wrinkly. In terms of raw processing power, a dozen Liams put together would compare pretty poorly to an IBM Blue Gene/L supercomputer, a much more computationally powerful, yet singularly uncurious, piece of equipment.

Liam is actually pretty sophisticated in many of his behaviors. A couple weeks ago, he made a face at me.

It happened while I was eating frozen TV dinner apples. Microwave baked apples are tasty and delicious, and I make a point to eat them regularly. Five minutes in the microwave and you can have a small black plastic tray of bliss.

So there I was, sitting by my desk playing World of Warcraft and eating microwave baked apples, and Liam hopped up onto the desk and, brazen as you please, reached into my black plastic tray of bliss with his paw, hooked out a small piece of apple, brought it up to his nose, sniffed it suspiciously, licked it, and made a face at me. He shook the apple off his paw in disgust and wrinkled his nose at me.

Then he watched me eating the apples for several minutes, stole another bit of apple, sniffed at it even more suspiciously, and made another face at me.

There are many ways one might respond to this. One might say “Aww! How cute!” (And really, it was.) One might say “Hey! That’s my food! Don’t put your paw in that!” (And really, I did, though I knew even as I said it that it was pointless, an exercise more for my benefit than for the cat’s. We naked walking monkeys are kind of insecure in our position that way.) One might push the cat off the desk sternly. (And really, I didn’t have the heart to, because I dote on the cat so. A pushover, I am.)

Or, if one’s inclination runs that way, one might sit back and ponder the surprising degree of cognitive prowess the cat possesses.

I mean, seriously, think about it.

The cat recognized that I was eating something. We take that for granted, but there’s a lot of intellectual horsepower being brought to bear on a task of that sort. First, it means that he was able to map a projection of himself onto a projection of me well enough to determine what kind of activity I was engaged in, and to recognize that it’s an activity he also engages in, despite the great physical dissimilarities between us. That, at its foundation, means he was able to recognize the difference between himself and the rest of the world; to recognize that some things in the world are more like him than other things, and to recognize those things when he sees them; and to recognize patterns of behavior common to him and me even as he recognized that I am distinct from him. Human babies take rather a long time to sort all this out.

Then, he was able to make an inference–namely, that what I was eating might be something he would like to eat as well. He made this inference in the absence of other cues, such as smell; he is, after all, a carnivore, and he is uninterested in a tray of baked apples just sitting by itself. (I know; I tried. What can I say? I was curious, too. He probably thinks they smell like rotting plant matter.)

When he made this inference, he was able to formulate and then implement a plan of action, which shows at least a very limited ability to plan, even if only in a simple way.

When he obtained a piece of apple and decided it was just as revolting as it smells, he was then faced with a conundrum: this stuff was revolting, but clearly I was eating it (and with great gusto and no small amount of satisfaction, I might add). So he was willing to re-evaluate his original decision and put it to the test again–something, the cynic in me begs to point out, that appears beyond the cognitive grasp of many people I know.


A couple of weeks ago, in a repeat of the “I am not Sir Edmund fucking Hillary” debacle that left me stranded on the balcony with a rope in my hand, Shelly went onto the porch to do some tidying up and the door locked behind her, trapping her until I came home for lunch.

Liam, in another example of cognitive dexterity (the only kind he has, I fear, as he is a stunningly clumsy cat), recognized that she was trapped, and became highly distressed and agitated. That shows empathy–the ability to map himself onto her and to respond as if he were the one in the distressing situation. He also knew that the door’s latch was to blame, and pawed and batted at it in a charming but unsuccessful bid to release her. Lack of opposable thumbs, and all that.


A Blue Gene/L system has, at very rough estimation, approximately the same processing power as a human brain. The Blue Gene/P supercomputer, currently in development, will well and truly trounce human beings in terms of processing ability. However, the architecture is very, very different. Modern computers are just really big, really complex von Neumann machines, bound by the fact that the processing and memory are distinct entities which interact with one another in a series of discrete state changes.

A brain cell can roughly be mapped onto a transistor in the sense that it has only two discrete states, “firing” and “not firing,” but the architectural similarities pretty much end there.

Still, they are both finite state machines with memory, handwaving and nattering of Roger Penrose aside. And it is an axiom of state machines and formal language theory, which I will leave as an exercise to the reader to explore further, that any universal Turing machine–which is a finite state machine with memory–can, given sufficient memory, emulate any other Turing machine.
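That axiom is easier to believe with a toy in hand. Here’s a minimal sketch in Python of the “finite state machine with memory” model–an illustrative simulator for one particular machine, not a universal one, with a rule format invented just for this example:

```python
from collections import defaultdict

def run_tm(rules, tape, state="start", accept="halt", max_steps=10_000):
    """Simulate a Turing machine.

    rules maps (state, symbol) -> (next_state, symbol_to_write, head_move),
    where head_move is "L" or "R". Blank cells read as "_".
    """
    cells = defaultdict(lambda: "_", enumerate(tape))  # the machine's memory
    head = 0
    for _ in range(max_steps):
        if state == accept:
            break
        state, cells[head], move = rules[(state, cells[head])]
        head += 1 if move == "R" else -1
    out = "".join(cells[i] for i in range(min(cells), max(cells) + 1))
    return out.strip("_")

# A one-state machine that flips every bit on the tape, then halts at the blank.
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
print(run_tm(flip, "10110"))  # prints 01001
```

The point of the axiom is that this same dumb loop, fed a suitable rule table, could itself be the machine another such loop simulates; nothing about the substrate matters.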

Which means that, given sufficient cleverness on our parts, it should be possible to take these wonderful brains of ours and emulate them in these crude computers of ours, without loss of fidelity.

Handwaving and nattering of Roger Penrose aside. (“Look! Consciousness is a quantum phenomenon! I don’t know anything about quantum physics, neurophysiology, consciousness, or cognitive science, but consciousness is a quantum phenomenon! I have no proof of this, so watch as I wave my hands!” But I digress.)

And, of course, when you emulate one kind of machine (yes, I said it, brains are machines, deal with it) on another kind of machine, if the host machine is sufficiently faster than the machine being emulated, the emulation runs faster than the real thing.

Chew on that for a while.


I love Liam. He’s very sweet, and he is a constant little reminder in my life of figment_j. I continue to be impressed by the range of cognitive flexibility we take for granted, even in relatively unsophisticated animals, and I can hardly wait until we start building machines which can exhibit the same kind of cognitive skills.

We’re not there yet, but we will be soon. When IBM makes a supercomputer that has Liam’s level of cognitive prowess, the Singularity will well and truly be nigh.

56 thoughts on “Your daily dose of teh ky00t”

  1. This all of course suggests to me that if we do in fact have the processing power, then our computer software sucks. It sucks hard and long in comparison to Liam’s software. Though personally, I’ve always wondered what all the rest of the nerves in our body are doing. I know some reflexes are encoded in extremities; I wonder how this distribution of other tasks and hardwired responses plays into our overall thinking–i.e., I wonder if Liam has a bit more processing power than we think. Perhaps only 4.5 of him would be necessary to equal Blue Gene.

    • Our software sucks in part, I think, because it’s tied to our computer architecture. Liam’s brain isn’t a von Neumann machine, and the software we write is the simplest and most logical kind of software for von Neumann machines. A great deal of intelligent behavior is, I think, emergent, and we don’t design the kinds of systems from which it is likely to emerge. And because it is emergent behavior, attempts so far to emulate it directly by a top-down approach to machine intelligence haven’t been outstanding in their success.

  2. Cats continually amaze me with how smart they are for such small creatures. We had beautiful weather the other day and I opened all the windows. One of my cats, after sitting in the window for a few hours, decided that she would rather be in the Outside World and clawed through the screen. Of course it is too bad that she is terrified of the outside and merely huddled under the bushes until we opened the front door, but still she recognized that there was something between her and the Outside and figured out how to get through it. Curiosity is both a wonderful and dangerous thing.

    • Curiosity is an amazing cognitive tool, though it seems more useful for predators than for prey animals. It’s pretty uncommon to see monkey curiosity in a prey animal–partly, I think, because the unknown to a prey animal is often accompanied by fangs and claws.

  3. It’s been said that the higher animals, especially those that have been socialized with humans, have similar thought processes to human children at various developmental stages. I have yet to see research supporting this assertion, but it makes a decent working hypothesis–it describes and it predicts with better than average accuracy.
    Only, humans grow out of being children, whereas animals seem to maintain some of the mental characteristics that are for us hallmarks of incomplete development. Not being able to reason beyond their own experience, mostly, or make logical inductions or deductions.
    Only, on third thought, quite a lot of humans don’t have that ability either. And they’re much less curious (not to mention adorable) than Liam.

    Tell me again why so many are convinced our whole species is superior?

    I wonder what sort of personalities the first AIs will develop.

    • It’s been said that the higher animals, especially those that have been socialized with humans, have similar thought processes to human children at various developmental stages. I have yet to see research supporting this assertion, but it makes a decent working hypothesis–it describes and it predicts with better than average accuracy.

      Of course, it makes sense that successfully domesticated animals, especially pets, would adopt behavioral traits we associate with our young. Those animals which have those traits are more likely to evoke a protective response in us, so it’s easy to see why domestication might apply very heavy adaptive pressure in favor of behavioral and cognitive traits similar to those of our children.

  4. > A Blue Gene/L system has, at very rough estimation, approximately the same processing power as a human brain.

    How exactly did they measure that?

    Even given that the computer itself has the same number of transistors as a human brain has neurons, that’s not exactly a meaningful statement because we still don’t really know how our neurons work. (I’m pretty sure they work a hell of a lot better than transistors, though.)

    And as you so eloquently demonstrated, your cat is smarter for all practical intents and purposes than any computer we know how to make. In fact, I would bet hard cash that in hand-to-hand combat the cat would chew through a power cable before the machine could code itself a cat-killing program. Much of that is a software problem, I suppose, although again here the metaphor breaks down.

    Whatever “quantum” means this week, consciousness certainly seems to be an emergent phenomenon — like life itself (surprise, surprise). I’d hesitate to rely on a backup of my brain that anyone made without knowing exactly what it emerges from.

    ~r

    • The specs I’ve seen are based on (very rough) estimates of the total number of calculations per second of a human brain. Brains have more neurons, but they operate about nine orders of magnitude more slowly, so a simple comparison of the number of components isn’t terribly useful.

      The thing that matters, though, is what those components are used for. Computer architectures are so vastly different from organic brains that it’s really tough to compare the two.

      Regardless of whether or not a brain and a Blue Gene/L have the same processing power, though (which does seem a reasonable claim, I think), the fact remains that soon we will reach the point where the processing power of a computer vastly exceeds that of a brain; Blue Gene/P will be coming online soon, and it most certainly can out-compute our gray wetware.

      That suggests there must be some point at which a conventional computer can, if we program it correctly, emulate the functioning of a brain in real time. I’m a mechanist; everything the brain does, it does in a physical way. I do not believe that the presence of a mystical “soul” is necessary for consciousness; rather, as you say, consciousness is an emergent phenomenon. Emulate a human brain, down to the neuronal level, with sufficient fidelity, and I believe human consciousness will emerge from that emulation.
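      For what it’s worth, those orders-of-magnitude claims pencil out even on the back of an envelope. Every figure below is a rough assumption chosen for illustration, not a measurement:

```python
# Rough, assumed figures -- illustrative only.
neuron_rate_hz = 100.0       # a neuron fires at most a few hundred times/sec
transistor_rate_hz = 1e9     # a circa-2007 transistor switches around 1 GHz

print(f"per-component speed gap: ~{transistor_rate_hz / neuron_rate_hz:.0e}x")

# Sheer component count is where the brain claws much of that back:
neurons = 1e11               # order-of-magnitude estimate for a human brain
synapses_per_neuron = 1e3    # rough average
synaptic_events_per_sec = neurons * synapses_per_neuron * neuron_rate_hz
print(f"brain: ~{synaptic_events_per_sec:.0e} synaptic events/sec")
```

      Depending on whether you count a neuron at 1 Hz or 1 kHz and a transistor at 1 GHz or faster, the per-component gap lands anywhere from six to ten orders of magnitude, which is the spirit of the “about nine” figure above.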

  5. I recall a situation when my cat Wolfie (in my icon) was a kitten- she loves anything white & creamy looking- she thinks it’s her right & demands it- yogurt, ice cream, milk….
    mashed potatoes! *lol*…
    so when she was 4 months old I was eating a lean cuisine for lunch that had a circle of mashed potatoes…
    she took one look & aggressively jumped! into my very hot, microwaved mashed potatoes.. and ran very quickly across the rest of the room, yelping inconsolably…
    not being the brightest of cats she could still sense pain of the highest degree…
    she officially decided not to harass my lunches that are housed in black plastic bins…
    ever again!

    • Heh. So cats, like people, can make cognitive errors in their generalization. Makes sense; there’s definitely a survival value to over-generalization in the case of harmful or dangerous things.

  6. “He also knew that the door’s latch was to blame, and pawed and batted at it in a charming but unsuccessful bid to release her.”

    This scenario brought a tear to my eye. I swear!

    Sometimes, pets do the things for us that we want the most. 🙂

    Other times, they just yack on the rug.

  7. It pleases me more than I care to admit to know that I’m not the only one who thinks about these kinds of things. I look at APAR in exactly the same way.

    I was marvelling at the Blue Gene/L just last week, and had considered making a post about it. Its sustained processing power is the equivalent of every man, woman, and child on the planet performing 60,000 flawless calculations every second, and its intra-node communication network has sufficient bandwidth to support 150 simultaneous phone conversations for every person in the US. Truly awesome when you think about it like that!

    As others have pointed out, the trick now (from a general AI perspective) is getting software complex enough to instantiate what we would recognize as general intelligence, flexible enough to learn, modify and enhance itself, and robust enough to do this all without crashing. A non-trivial task, but absolutely within the realm of the possible, and maybe even within the time frame that Ray Kurzweil has predicted (by 2020).

    Then of course there’s the minor issue of ensuring that its actions are within the realm of what we would consider benevolent, but that’s a secondary detail. 😉
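    As a quick sanity check on that comparison (the per-person rate is the commenter’s figure; the world population is an assumed circa-2006 value):

```python
# Sanity check of the "everyone computing at once" comparison above.
world_population = 6.5e9            # assumed circa-2006 figure
calcs_per_person_per_sec = 60_000   # from the comparison above
total = world_population * calcs_per_person_per_sec
print(f"{total:.1e} calc/sec, i.e. ~{total / 1e12:.0f} teraflops")
```

    A few hundred sustained teraflops is indeed the ballpark in which Blue Gene/L’s performance was reported, so the comparison holds up.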

  8. That is a truly amazing and very well-told entry. I feel the only appropriate way to respond is with my own tale. I’ve told it several times, and I’m not sure anyone I’ve told believes it, but I assure you it’s true. I was a student at Northeastern University for a couple of years in Boston, a fairly cramped campus with a fairly sizeable pigeon population. One time when I was walking towards one of the sciences buildings I saw a pigeon pecking at a crumb on the curb of the sidewalk when it accidentally knocked it off the curb and down onto the sidewalk. What made me stop and watch was that the pigeon didn’t peck at the crumb when it hopped down next to where it landed. It didn’t even touch it with its beak, much less try to pick it up. It hopped to a place behind the crumb and looked down at it, then back up to where it was on the curb. It hopped sideways a few times, first right, then back left, circling behind the crumb. Then it hopped up to the curb, and looked down at the crumb. A few more times it hopped down and looked up, then hopped up and looked down… finally it hopped down, got right behind the crumb, and with a single peck knocked the crumb right back up to the curb right where it had originally fallen from.

    I was astounded. Rather than just continue with its practical function of eating, it stopped, examined the situation, and decided to try something unnecessary and impractical, apparently just to challenge itself. That was no lucky poke – it examined the lie and lined up its shot! Watching that bird play golf with its snack was, and remains to this day, the most human-seeming thing I’ve ever seen a non-human creature do. It was illuminating.

    “Must be a physics major!”, I had said aloud to the bird after the performance. I admit it, I spoke to it – I couldn’t help it. I couldn’t just walk away and say nothing. It did look up at me for a second or two in response, as if to at least acknowledge my appreciation of its feat, but it quickly returned to its meal and I returned to my walking. – ZM

    • Where avian intelligence seems to fall down is in generalized problem-solving. Birds are smart, but I recall talking about avian intelligence experiments in one of my cognitive science classes back in my misspent college days. Birds seem capable of single-step problem solving, but incapable generally of multistep problem-solving (first do this, then do that), possibly as a result of an inability to make concrete predictions about the future and then to make decisions based on those predictions.

      Which, to be fair, is a pretty tough thing to do, and the fact that we and other animals do it so effortlessly really says something about how amazing and sophisticated our cognitive abilities are.

  9. It’s not clear to me that Blue Gene has anything like the computational power that Liam does. One doesn’t need to get quantum goofy like Penrose to acknowledge that it’s very hard to get a handle on just what the computational abilities of organic brains are. You can make the case that a neuron is a lot more like a CPU than like a transistor, in which case the whole damn Internet is worth about one kitty brain.

    If we knew enough, we could probably build a distributed application like SETI@Home which if sufficiently popular could simulate Liam, though not in real time. But it wouldn’t be as cute.

    • Sufficiently popular? Just call it “Pussy@Home” and you’ll have more clients than a Russian bot net!

      As far as neurons go, I suspect that the reality is somewhere between a transistor and a CPU. In many neurons there’s certainly more going on than a single switch, but I suspect that a CPU, even something like an 8080, might be pushing it. I’d suggest an integrated circuit as being roughly analogous to a neuron, which still makes Liam’s little bitty kitty brain fairly impressive.

    • It’s going a bit far, I think, to say a neuron is an analog of a processor.

      A neuron is more sophisticated than a transistor in that a neuron, even though it has only two states, changes states based on more than one input, whereas a transistor (typically) changes states based on a single input. Neurons have a number of dendrites, and may fire based on input from multiple dendrites at once (spatial summing) or based on repeated input from a single dendrite over a short span of time (temporal summing), so it’s actually performing a “calculation” in the sense that the threshold of activation is a function of the summation of multiple inputs. But it still has only a single output and only a couple of finite states.

      A similar type of system can be built from multiple transistors, and the advantage of a transistor network is that it is much more efficient; not only are neurons painfully slow, but each state change is accompanied by a refractory period during which the neuron cannot change states again even if the correct input condition is reached. Because transistors operate more efficiently, and about nine orders of magnitude more quickly, it’s not too much of a stretch to say that a smaller number of transistors is, in theory, sufficient to emulate the behavior of a much larger number of neurons. The details, of course, are in the implementation.
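      Here is a minimal sketch of that summing-plus-refractory behavior–a toy threshold unit with every parameter invented for illustration, not a biophysical model:

```python
# Toy threshold neuron: fires when summed input crosses a threshold,
# then ignores all input for a refractory period. Parameters invented.

def simulate(inputs, threshold=3, leak=1, refractory=2):
    """inputs: total dendritic input per timestep. Returns the spike train."""
    potential, cooldown, spikes = 0, 0, []
    for x in inputs:
        if cooldown > 0:                    # refractory: can't fire again yet
            cooldown -= 1
            spikes.append(0)
            continue
        potential = max(0, potential - leak) + x   # temporal summation + decay
        if potential >= threshold:
            spikes.append(1)
            potential, cooldown = 0, refractory
        else:
            spikes.append(0)
    return spikes

# Two weak inputs arriving back-to-back sum past the threshold and fire;
# the strong input at step 2 falls in the refractory window and is ignored.
print(simulate([2, 2, 4, 0, 0, 0, 0, 4]))  # [0, 1, 0, 0, 0, 0, 0, 1]
```

      Note that the spike at step 1 comes from two sub-threshold inputs summed over time, while the input of 4 at step 2 produces nothing at all–exactly the two behaviors (temporal summation and the refractory period) described above.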

      • At one level of abstraction you can think of neurons as doing a weighted combination of inputs and occasionally firing. But that’s a pretty high level of abstraction. A single neuron (like any cell) is a complex structured system changing in response to, and putting out, a variety of chemicals which in turn affect neighboring (and distant) cells, on a range of time scales.

        Would you have to simulate every molecule at the quantum level to get an accurate approximation to brain function? Almost certainly not. Could you do it by simulating an electrical network of bistable linear threshold units? I very much doubt it, but you’ve got me curious about how much that level of abstraction still is used in neuroscience.

        • Would you have to simulate every molecule at the quantum level to get an accurate approximation to brain function? Almost certainly not. Could you do it by simulating an electrical network of bistable linear threshold units? I very much doubt it, but you’ve got me curious about how much that level of abstraction still is used in neuroscience.

          I can’t speak for modern models; my last formal classwork in neurophysiology was in 1988, at which time the bistable model was very much in vogue. Since then, it’s been discovered that glial cells (specifically, astrocytes), once thought to be support cells uninvolved in the actual operation of the brain save for nourishing and holding neurons, may play a role in promoting and directing new patterns of connections.

          Right now, the strong AI community (that is, the folks working on ‘true,’ general purpose machine intelligence, as opposed to the weak AI community, who work on expert systems, chess playing programs, and so on) is divided loosely into two camps–the “bottom up” and “top down” contingents.

          The bottom-up camp, which I personally believe is the approach more likely to succeed, says “Well, we already have examples of general-purpose intelligent systems. If we model these systems with sufficient fidelity, the model should exhibit the same traits, even if we don’t fully understand how the systems work.”

          The top-down guys say “It’s not the hardware that matters; it’s the behavior. If we extract and define precisely enough the behaviors that make up what we call ‘intelligence,’ then we can implement those behaviors without caring about the details of the implementation.”

          Now, I personally have a strong suspicion that general intelligence is an emergent phenomenon, and that the top-down guys are unlikely to get very far in trying to break it down into discrete processes. However, the top-down guys do make one very strong point, and that is that it is not practical (or indeed even possible) to emulate a brain all the way down to a quantum level. Of necessity, our models are simplified; they do not include every single detail of the real brain. But if we do not know for certain which attributes of a brain are most important for intelligence, we cannot be sure that by simplifying the model, we’re not omitting some element vital to intelligence. (This point is made very strongly by the fact that glial cells, which until recently have not been thought essential and have not been part of most attempts to model a brain, may play a crucial role in the formation of long-term memory.)

          Still, as you say, I don’t think it’s necessary to model the brain clear down to the subatomic level. And if the model itself behaves as it should, then it doesn’t matter that the substrate of the model is running on hardware composed of bistable transistors. Brain cells may be more sophisticated than this, but if they, and their patterns of interconnections, are still Turing machines, then it doesn’t matter.

  18. It’s not clear to me that Blue Gene has anything like the computational power that Liam does. One doesn’t need to get quantum goofy like Penrose to acknowledge that it’s very hard to get a handle on just what the computational abilities of organic brains are. You can make the case that a neuron is a lot more like a CPU than like a transistor, in which case the whole damn Internet is worth about one kitty brain.

    If we knew enough, we could probably build a distributed application like SETI@Home which, if sufficiently popular, could simulate Liam, though not in real time. But it wouldn’t be as cute.

  19. Sufficiently popular? Just call it “Pussy@Home” and you’ll have more clients than a Russian bot net!

    As far as neurons go, I suspect that the reality is somewhere between a transistor and a CPU. In many neurons there’s certainly more going on than a single switch, but I suspect that a CPU, even something like an 8080, might be pushing it. I’d suggest an integrated circuit as being roughly analogous to a neuron, which still makes Liam’s little bitty kitty brain fairly impressive.

  20. I just read this fine little essay aloud to Sparkler lying here in bed. She was much pleased, but, as the biologist, she commented, “Recognizing that another creature is eating something is right up there at the top of the league standings — along with recognizing that another creature wants to kill you — as something that brains have been selected for.”

    Cheers,

    Alan M.

    • “Recognizing that another creature is eating something is right up there at the top of the league standings — along with recognizing that another creature wants to kill you — as something that brains have been selected for.”

      True dat. Though I think many prey animals have taken a shortcut on the latter by assuming that everything is trying to kill them, and behaving accordingly. Cuts down on the processing cycles, y’know.

  22. Our software sucks in part, I think, because it’s tied to our computer architecture. Liam’s brain isn’t a von Neumann machine, and the software we write is the simplest and most logical kind of software for von Neumann machines. A great deal of intelligent behavior is, I think, emergent, and we don’t design systems from which it is likely to emerge. And because it is emergent behavior, attempts so far to emulate it directly by a top-down approach to machine intelligence haven’t met with outstanding success.

  23. Curiosity is an amazing cognitive tool, though it seems more useful for predators than for prey animals. It’s pretty uncommon to see monkey curiosity in a prey animal–partly, I think, because the unknown to a prey animal is often accompanied by fangs and claws.

  24. It’s been said that the higher animals, especially those that have been socialized with humans, have similar thought processes to human children at various developmental stages. I have yet to see research supporting this assertion, but it makes a decent working hypothesis–it describes and it predicts with better than average accuracy.

    Of course, it makes sense that successfully domesticated animals, especially pets, would adopt behavioral traits we associate with our young. Those animals which have those traits are more likely to evoke a protective response in us, so it’s easy to see why domestication might apply very heavy adaptive pressure in favor of behavioral and cognitive traits similar to those of our children.

  25. The specs I’ve seen are based on (very rough) estimates of the total number of calculations per second of a human brain. Brains have more neurons, but they operate about nine orders of magnitude more slowly, so a simple comparison of the number of components isn’t terribly useful.

    The thing that matters, though, is what those components are used for. Computer architectures are so vastly different from organic brains that it’s really tough to compare the two.

    Regardless of whether or not a brain and a Blue Gene/L have the same processing power, though (which does seem a reasonable claim, I think), the fact remains that soon we will reach the point where the processing power of a computer vastly exceeds that of a brain; Blue Gene/P will be coming online soon, and it most certainly can out-compute our gray wetware.

    That suggests there must be some point at which a conventional computer can, if we program it correctly, emulate the functioning of a brain in real time. I’m a mechanist; everything the brain does, it does in a physical way. I do not believe that the presence of a mystical “soul” is necessary for consciousness; rather, as you say, consciousness is an emergent phenomenon. Emulate a human brain, down to the neuronal level, with sufficient fidelity, and I believe human consciousness will emerge from that emulation.
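    To make the hand-waving above concrete, here’s a back-of-envelope sketch in Python. Every number in it is a rough, often-cited ballpark assumption (neuron count, synapses per neuron, sustained firing rate, Blue Gene/L peak FLOPS), not a measurement:

```python
# Back-of-envelope comparison of brain vs. machine "operations per second."
# All figures below are order-of-magnitude assumptions, not measurements.

human_neurons = 8.6e10      # often-cited estimate: ~86 billion neurons
synapses_per_neuron = 1e3   # rough average; assumption
firing_rate_hz = 10         # typical sustained firing rate; assumption

# Crudest possible simplification: count each synaptic event as one "op."
brain_ops_per_sec = human_neurons * synapses_per_neuron * firing_rate_hz

blue_gene_l_flops = 2.8e14  # ~280 teraflops peak; ballpark figure

print(f"Brain (crude estimate): {brain_ops_per_sec:.1e} ops/s")
print(f"Blue Gene/L (peak):     {blue_gene_l_flops:.1e} FLOPS")
print(f"Ratio:                  {brain_ops_per_sec / blue_gene_l_flops:.1f}x")
```

    The point of the exercise is how soft it is: nudge any one of those assumptions by an order of magnitude — say, count each synapse as a multiply-accumulate rather than a bit flip — and the "winner" flips, which is exactly why these comparisons are so hard to pin down.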

  26. Heh. So cats, like people, can make cognitive errors in their generalization. Makes sense; there’s definitely a survival value to over-generalization in the case of harmful or dangerous things.

  27. Where avian intelligence seems to fall down is in generalized problem-solving. Birds are smart, but I recall talking about avian intelligence experiments in one of my cognitive science classes back in my misspent college days. Birds seem capable of single-step problem solving, but incapable generally of multistep problem-solving (first do this, then do that), possibly as a result of an inability to make concrete predictions about the future and then to make decisions based on those predictions.

    Which, to be fair, is a pretty tough thing to do, and the fact that we and other animals do it so effortlessly really says something about how amazing and sophisticated our cognitive abilities are.

  28. It’s going a bit far, I think, to say a neuron is an analog of a processor.

    A neuron is more sophisticated than a transistor in that a neuron, even though it has only two states, changes states based on more than one input, whereas a transistor (typically) changes states based on a single input. Neurons have a number of dendrites, and may fire based on input from multiple dendrites at once (spatial summing) or based on repeated input from a single dendrite over a short span of time (temporal summing), so it’s actually performing a “calculation” in the sense that the threshold of activation is a function of the summation of multiple inputs. But it still has only a single output and only two states.

    A similar type of system can be built from multiple transistors, and the advantage of a transistor network is that it is much more efficient; not only are neurons painfully slow, but each state change is accompanied by a refractory period during which the neuron cannot change states again even if the correct input condition is reached. Because transistors operate more efficiently, and about nine orders of magnitude more quickly, it’s not too much of a stretch to say that a smaller number of transistors is, in theory, sufficient to emulate the behavior of a much larger number of neurons. The details, of course, are in the implementation.
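    The spatial summing, temporal summing, and refractory period described above can be sketched as a toy integrate-and-fire unit. All the parameters here are illustrative placeholders, not physiological values:

```python
# Toy integrate-and-fire neuron: fires when summed dendritic input crosses
# a threshold, then sits out a refractory period. Parameters are invented
# for illustration only.

class ToyNeuron:
    def __init__(self, threshold=3.0, decay=0.5, refractory_steps=2):
        self.threshold = threshold          # potential needed to fire
        self.decay = decay                  # leftover charge leaks away each step
        self.refractory = refractory_steps  # steps it must rest after firing
        self.potential = 0.0
        self.rest = 0

    def step(self, dendrite_inputs):
        """Advance one time step; return True if the neuron fires."""
        if self.rest > 0:                   # refractory: all input is ignored
            self.rest -= 1
            return False
        self.potential = self.potential * self.decay + sum(dendrite_inputs)
        if self.potential >= self.threshold:
            self.potential = 0.0
            self.rest = self.refractory
            return True
        return False

n = ToyNeuron()
print(n.step([1.0, 1.0, 1.5]))  # spatial summing: 3.5 >= 3.0, fires
print(n.step([5.0]))            # refractory: can't fire even on big input

m = ToyNeuron(refractory_steps=0)
m.step([2.0])                   # 2.0, below threshold
print(m.step([2.0]))            # temporal summing: 0.5*2.0 + 2.0 = 3.0, fires
```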

  30. At one level of abstraction you can think of neurons as doing a weighted combination of inputs and occasionally firing. But that’s a pretty high level of abstraction. A single neuron (like any cell) is a complex structured system changing in response to, and putting out, a variety of chemicals which in turn affect neighboring (and distant) cells, on a range of time scales.

    Would you have to simulate every molecule at the quantum level to get an accurate approximation to brain function? Almost certainly not. Could you do it by simulating an electrical network of bistable linear threshold units? I very much doubt it, but you’ve got me curious about how much that level of abstraction is still used in neuroscience.
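    For what it’s worth, a “bistable linear threshold unit” is almost a one-liner in code: output 1 if a weighted sum of inputs meets a threshold, else 0. Here’s a sketch with hand-picked weights wired into a tiny XOR network — just to show the abstraction can compute something, not that real neurons work this way:

```python
# McCulloch-Pitts-style linear threshold unit: fires (outputs 1) when the
# weighted sum of its inputs reaches its threshold. Weights and thresholds
# below are chosen by hand to compute XOR.

def unit(inputs, weights, threshold):
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

def xor(a, b):
    h1 = unit([a, b], [1, 1], 1)       # fires on "a OR b"
    h2 = unit([a, b], [1, 1], 2)       # fires on "a AND b"
    return unit([h1, h2], [1, -1], 1)  # fires on "OR but not AND"

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor(a, b))
```

    The catch, as the comment above says, is that this is a very high level of abstraction: everything chemical, spatial, and slow-timescale about a real neuron has been thrown away.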

  31. i have a similar story, but with a bit of a twist as the food in question was something the kitty learned she very much wanted. my sister and i were eating strawberries which we were dipping in a tub of cool whip…mmm. the kitten in question snuck up and put a paw into the cool whip, ran away on three paws, and tasted it. she was most delighted with the taste and wanted more. my sister, horrified that cat paw had been dipped into the tub, was guarding the tasty treat from further kitten sampling. the kitty tried from different angles, but was finding (from my observation) that to run up, stop, pull up a paw and dunk, took entirely too long. to my glee and great amusement the kitty came up with a clever solution to take away as much of the whipped cream as quickly as possible. the kitty took another quick run at the tub, but instead of stopping and lifting the paw, she slammed her face into the container and then ran. i fell over with laughter for the kitty’s ingenuity and my sister’s expression of complete horror.

      • yeah, at one moment you’re “that was so funny” then “wow, it slammed its face in the cool whip?” then “well, it was the best way to take away as much as possible” then “omg it slammed its face in the cool whip!” then eyeing the cool whip container that has been violated…

        • You know, I think that may be the first time I have ever heard “cool whip” and “violated” used in the same sentence, which, given my particular tastes and kinks, is a bit surprising. I’ll need to do something about that oversight.

          • that is surprising. one word of advice: frozen is no good. especially with a muff diver. it is certainly not a good substitution for whipped cream in that scenario.
