Update 9 on the Bionic Dildo: Lots of progress!

A few folks have been wondering where we’re at on the Bionic Dildo, as we’ve taken to calling it.

We’ve made a lot of progress in the last few months, starting with setting up a workspace for research, development, and testing. We’ve moved into the new space, where we have a lot of resources we didn’t have before.

The first few prototypes were put together by modifying existing sex toys. This crude approach was good enough to show us that the basic technology is sound, but the prototypes we built this way were limited, fragile, and rather uncomfortable to wear.

Since then, we’ve acquired a 3D printer and facilities for making ceramic molds to cast silicone. This allows us to create custom-designed silicone with electronics, sensors, and electrodes cast right in.

From 3D rendering to printed positive that we use to make a mold.
And yes, those are Lego bricks we’re using as a mold box!

We’ve 3D printed and made silicone test casts of the insertable part of the device. Here’s a test cast of the insertable with electrodes directly embedded in the cast, a huge improvement over our first few prototypes:

Right now, we’re moving into a development phase aimed at answering questions like:

  • How many sensors and electrodes do we need?
  • What’s the neural density of the inside wall of the vagina?
  • How much variability is there in sensitivity between different people, and between different parts of the inner anatomy of the same person?
  • What’s the best way to modulate the signal in response to pressure on the sensors?
  • What’s the maximum perceptual spatial resolution of the inner anatomy?

The first-generation prototype had three sensors and three electrodes, and the insertable part was rigid plastic, which as you can imagine was not terribly comfortable and certainly not workable for long-term use. The prototype we’re working on now is an enormous improvement: fifteen sensors and fifteen electrodes, embedded in custom silicone that’s far more comfortable.

We’re excited about the progress that we’ve made, and looking forward to what we can learn in 2017.

Want to keep up with developments? Here’s a handy list of blog posts about it:
First post
Update 1
Update 2
Update 3
Update 4
Update 5
Update 6
Update 7
Update 8
Update 9

Learning to be a Human

I don’t live in my body.

I was 48 years old before I discovered this. Now, such a basic fact, you might think, would be intuitively obvious much earlier. But I’ve only (to my knowledge) been alive this once, and I haven’t had the experience of living as anyone else, so I think I might be forgiven for not fully understanding the extent to which my experience of the world is not everyone’s experience of the world.

Ah, if only we could climb behind someone else’s eyes and feel the world the way they do.

Anyway, I do not live in my body. My perception of my self—my core essence, if you will—is a ball that floats somewhere behind my eyes, and is carried about by my body.

Oh, I feel my body. It relays sensory information to me. I am aware of hot and cold (especially cold; more on that in a bit), soft and hard, rough and smooth. I feel the weight of myself pressing down on my feet. I am aware of the fact that I occupy space, and of my position in space. (Well, at least to some extent. My sense of direction is a bit rubbish, as anyone who’s known me for more than a few months can attest.)

But I don’t live in my body. It’s an apparatus, a biological machine that carries me around. “Me” is the sphere floating just behind my eyes.

And as I said, I didn’t even know this until I was 48.

This is not, as it turns out, my only perceptual anomaly.

I also perceive cold as pain.

When I say this, a lot of folks don’t really understand what I mean. I do not mean that cold is uncomfortable. I mean that cold is painful. An ice cube on my bare skin hurts. A lot. A cold shower is excruciating agony, and I’m not being hyperbolic when I say this. (Being wet is unpleasant under the best of circumstances. Cold water is pure agony. Worse than stubbing a toe, almost on par with touching a hot burner.)

I’ve always more or less assumed that other people perceive cold more or less the same way I do. There’s a trope that cold showers are an antidote to unwanted sexual arousal; I’d always thought that was because the pain shocks you out of any kind of sexy head space. And swimming in ice water? That was something that a certain breed of hard-core masochist did. Some folks like flesh hook suspension; some folks swim in ice water. Same basic thing.

I’ve only recently become aware that there’s actually a medical term for this latter condition: congenital thermal allodynia. It’s an abnormal coding of pain, and it is, I think, related to the not-living-in-my-body thing.

I probably would have discovered all of this much sooner if I’d been interested in recreational drug use as a youth. And it appears there may be a common factor in both of these atypical ways I perceive the world.

Ladies and gentlebeings, I present to you: TRPA1.

This is TRPA1. It’s a complex protein that acts as a receptor in nerve and other cells. It responds to cold and to the presence of certain chemicals (menthol feels cold because it activates this receptor). Variations on the structure of TRPA1 are implicated in a range of abnormal perception of pain; there’s a single nucleotide polymorphism in the gene that codes for TRPA1, for instance, that results in a medical condition called “hereditary episodic pain syndrome,” whose unfortunate sufferers are wracked by intermittent spasms of agonizing and debilitating pain, often triggered by…cold.

I’ve lived this way my entire life, completely unaware that it’s not the way most folks experience the world. It wasn’t until I started my first tentative explorations down the path of recreational pharmaceuticals that I discovered there was any other way to be.

For nearly all of my life, I’ve never had the slightest interest in recreational drug use, despite what certain of my relatives believed when I was a teenager. Aside from alcohol, I had zero experience with recreational pharmaceuticals until I was in my late 40s.

The first recreational drug I ever tried was psilocybin mushrooms. I’ve had several experiences with them now, which have universally been quite pleasant and agreeable.

But it’s the aftereffects of a mushroom trip that are, for me, the really interesting part.

The second time I tried psilocybin mushrooms, about an hour or so after the comedown from the mushroom trip, I had the sudden and quite marked experience of completely inhabiting my body. For the first time in my entire life, I wasn’t a ball of self being carried around by this complex meat machine; I was living inside my body, head to toe.

The effect of being-in-my-bodyness persisted for a couple of hours after all the other traces of the drug trip had gone, and for a person who’s spent an entire lifetime being carried about by a body but not really being in that body, I gotta say, man, it was amazing.

So I did what I always do: went on Google Scholar and started reading neurobiology papers.

My first hypothesis, born of vaguely remembered classes in neurobiology many years ago and general folk wisdom about psilocybin and other hallucinogens, was that the psilocybin (well, technically, psilocin, a metabolite of psilocybin) acted as a particularly potent serotonin agonist, dramatically increasing brain activity, particularly in the pyramidal cells in layer 5 of the cortex. If psilocybin lowered the activation threshold of these cells, reasoned I, then perhaps I became more aware of my body because I was better able to process existing sensory stimulation from the peripheral nervous system, and/or better able to integrate my somatosensory perception. It sounds plausible, right? Right?

Alas, some time on Google Scholar deflated that hypothesis. It turns out that the conventional wisdom about how hallucinogens work is quite likely wrong.

Conventional wisdom is that hallucinogens promote neural activity in cells that express serotonin receptors by mimicking the action of serotonin, causing the cells to fire. Hallucinogens aren’t well understood, but it’s looking like this model is probably not correct.

Oh, don’t get me wrong, psilocybin is a serotonin agonist and it does lower activation threshold of pyramidal cells, oh yes.

The fly in the ointment is that evidence from fMRI and BOLD studies shows an overall inhibition of brain activity resulting from psilocybin. Psilocybin promotes activation of excitatory pyramidal cells, sure, but it also promotes activation of inhibitory GABAergic neurons, resulting in overall decreased activity in several other parts of the brain. Further, this activity in the pyramidal cells produces less overall cohesion of brain activity, as this paper from the Proceedings of the National Academy of Sciences explains. (It’s a really interesting article. Go read it!)

My hypothesis that psilocybin promotes the subjective experience of greater somatosensory integration by lowering activation threshold of pyramidal cells, therefore, seems suspect, unless perhaps we were to further hypothesize that this lowered activation threshold persisted after the mushroom trip was over, an assertion for which I can find no support in the literature.

So lately I’ve been thinking about TRPA1.

I drink a lot of tea. Not as much, perhaps, as my sweetie, but a lot nonetheless.

Something I learned a long time ago is that the sensation of being wet is extremely unpleasant, but it’s more tolerable after I’ve had my morning tea. I chalked that up to it being more unpleasant when I was sleepy than when I was awake.

It turns out caffeine is a mild TRPA1 inhibitor. That leads to the hypothesis that for all these years, I may have been self-medicating with caffeine without being aware of it. If TRPA1 is implicated in the more unpleasant somatosensory bits of being me, then caffeine may jam up the gubbins and let me function in a way that’s a closer approximation to the way other folks perceive the world. (Insert witty quip about not being fully human before my morning tea here.)

So then I started to wonder, what if psilocybin is connecting me with my body by influencing TRPA1 activity? Could that explain the aftereffects of a mushroom trip? When I’m in my body, I feel warm and, for lack of a better word, glowy. My sense of self extends downward and outward until it fills up the entire biological machine in which I live. Would TRPA1 inhibition explain that?

Google Scholar offers exactly fuckall on the effects of psilocybin on TRPA1. So I turned to other searches, trying to find other drugs or substances that promoted a subjective experience of greater connection with one’s own body.

I found anecdotal reports of what I was after from people who used N-phenylacetyl-L-prolylglycine ethyl ester, a supplement developed in Russia and sold as a cognitive enhancer under the Russian name Ноопепт and the English name Noopept. It’s widely sold as a nootropic. New Agers and the fringier elements of the transhumanist movement, two groups I tend not to put a lot of faith in, tout it as a brain booster.

Still, noopept is cheap and easily available, and I figured as long as I was experimenting with my brain’s biochemistry, it was worth a shot.

To hear tell, this stuff will do everything from making you smarter to preventing Alzheimer’s. Real evidence that it does much of anything is thin on the ground, with animal models showing some protective effect against some forms of brain trauma but human trials being generally small and unpersuasive.

I started taking it, and noticed absolutely no difference at all. Still, animal models suggest it takes quite a long time to have maximum effect, so I kept taking it.

About 40 days after I started, I woke up with the feeling of being completely in my body. It didn’t last long, but over the next few weeks, it came and went several times, typically for no more than an hour or two at a time.

But oh, what an hour. When you’ve lived your whole life as a ball being carted around balanced atop a bipedal biological machine, feeling like you inhabit your body is amazing.

The last time it happened, I was in the Adventure Van driving toward the cabin where I am currently writing not one, not two, but three books (a nonfiction followup to More Than Two titled Love More, Be Awesome, and two fiction books set in a common world, called Black Iron and Gold Gold Gold!). We were listening to music, as we often do when we travel, and I…felt the music. In my body.

I’d always more or less assumed that people who talk about “feeling music” were being metaphorical, not literal. Imagine my surprise.

I also noticed something intriguing: Feeling cold will, when I’m in my body, push me right back out again. Hence my hypothesis that not being connected with my body might in some way be related to TRPA1.

The connection with my body, intermittent and tenuous for the past few weeks, has disappeared again. I’m still taking noopept, but I haven’t felt like I’m inhabiting my body for the past couple of weeks. That leads to one of two suppositions: the noopept is not really doing anything at all, which is quite likely, or I’m developing a tolerance for noopept, which seems less likely but I suppose is possible. Noopept is a racetam-like peptide; like members of the racetam class, it is an acetylcholine agonist, and while I can’t find anything in the literature about noopept tolerance, tolerance of other acetylcholine agonists (though not, as near as I can tell, racetam-like acetylcholine agonists) has been observed in animal models.

So there’s that.

The literature on all of this has been decidedly unhelpful. I like the experience of completely inhabiting my body, and would love to find a way to do this all the time.

I’m currently pondering two experiments. First, the next time I take mushrooms (and my experiences with mushrooms, limited though they are, have universally been incredibly positive; while I have no desire to take them regularly, I probably will take them again at some point in the future), I am planning to set up experiments after the comedown where I expose myself to water and cold sensations to see if the pain is reduced or eliminated in the phase during which I’m connected to my body.

Second, I’m planning to discontinue noopept for a month or so, then resume it to see if the problem is tolerance.

I’m fifty years old and I’m still learning how to be a human being. Life is a remarkable thing.

Call to the Interwebs: Looking for experts!

Most of the folks reading my blog are probably familiar with the high tech sex toy my partner Eve and I are working on. Essentially, we’re making a strap-on covered with sensors that uses direct neural stimulation to allow the wearer to feel touch and pressure on the strap-on.

We’ve built several prototypes that validate the basic idea, and we’re excited to move into the next phase of development.

To that end, we need your help! We’re looking for two things:

1. A person skilled with molding silicone who is willing to work with us to do one-off and two-off custom castings that integrate sensors, electrodes, and electronics into the casting.

This person will know a great deal about custom-molding silicone and be willing to work with us on some fairly exotic requirements, like molding silicone with electrodes embedded in the surface.

2. A skilled electronics person with knowledge of RF analog electronics. I know digital electronics, and so far, the prototypes we’ve built have used electronics and firmware I’ve written. But I’m a bit rubbish with the analog stuff. Specifically, what we need is someone who can design circuitry that can be controlled by an embedded microcontroller and can modulate the amplitude of an analog signal based on input from pressure sensors. Imagine a signal generator that produces a signal something like this:

What we’re looking for is someone who can design a circuit that will modulate the amplitude of this signal in proportion to the input from pressure sensors…but, naturally, the human body being what it is, the correspondence is logarithmic, not linear (hence a programmable microcontroller doing the work of figuring out how strong the signal needs to be).
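To make the requirement a little more concrete, here’s a rough sketch in Python of the mapping half of the problem. It’s purely illustrative: the constants and names are made up, the specific log curve is an assumption, and in the real device this logic would live in microcontroller firmware feeding the analog circuitry.

```python
import math

# Illustrative constants only; the real values would come out of testing.
ADC_MAX = 1023          # pretend 10-bit pressure sensor reading
AMPLITUDE_MAX = 1.0     # normalized output amplitude, 0..1

def pressure_to_amplitude(adc_reading: int) -> float:
    """Map a raw pressure reading to a stimulation amplitude.

    Perceived intensity tends to track the logarithm of the stimulus,
    so the sensor range is compressed logarithmically rather than
    scaled linearly.
    """
    if adc_reading <= 0:
        return 0.0
    # log1p keeps the curve well behaved near zero pressure.
    return AMPLITUDE_MAX * math.log1p(adc_reading) / math.log1p(ADC_MAX)

def modulate(carrier_sample: float, adc_reading: int) -> float:
    """Scale one sample of the carrier waveform by the mapped amplitude."""
    return carrier_sample * pressure_to_amplitude(adc_reading)

print(pressure_to_amplitude(512))   # roughly 0.9 of full amplitude
```

The analog design work we need help with is the other half: taking an amplitude value like this and cleanly scaling the actual stimulation waveform.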

We do have a budget for accomplishing these tasks. It’s not a huge budget, mind you; we’re a small startup, and that’s how it goes with small startups.

If you are interested or know anyone who might be, please let me know! You can reach me at franklin (at) tacitpleasures (dot) com.

Want to keep up with developments? Here’s a handy list of blog posts about it:
First post
Update 1
Update 2
Update 3
Update 4
Update 5
Update 6
Update 7
Update 8

#WLAMF no. 16: Lego brains

The brain is a fiendishly complicated thing. Not so much because all its constituent parts are complicated (though they can be), but because it’s a network of billions of components wired together with trillions of connections. Well, at least your brain is.

There are other brains that are a lot simpler. When I was taking classes in neurobiology, back in my misspent college days, we used to talk a lot about the species of worm called C. elegans.

Back then, researchers were just beginning to map its brain. The brains of C. elegans are isomorphic, meaning they’re all the same. (That’s not true of more sophisticated animals; our brains grow organically, with neurons wiring up to other neurons in a dynamic process that means even identical twins don’t have the same brains.) They’re small (about 300 neurons and around 7,000 connections). They’re easy to understand, at least for folks who find neurobiology “easy.”

And now they’ve been replicated in a Lego scooter that, well…behaves a lot like C. elegans without being explicitly programmed to. The robot has no pre-programmed behaviors; it acts like a roundworm because, in a sense, it has the brain of a roundworm.
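If you’re curious what “having the brain of a roundworm” means in practice, here’s a toy sketch of the idea. The neuron names, weights, and threshold below are invented, and the real project maps the actual connectome rather than three made-up cells, but the principle is the same: sensor readings excite sensory neurons, activation flows along the wired connections, and whatever reaches the motor neurons drives the wheels.

```python
# Toy connectome: neurons are nodes, synapses are weighted edges.  The real
# C. elegans connectome has roughly 300 neurons and 7,000 connections; the
# three neurons, weights, and threshold here are invented for illustration.
CONNECTOME = {
    "nose_touch_sensor": {"interneuron": 1.0},
    "interneuron": {"motor_backward": 0.8},
    "motor_backward": {},
}
THRESHOLD = 0.5

def step(activation: dict, sensor_input: dict) -> dict:
    """Advance one tick: any neuron at or above threshold passes weighted
    activation to the neurons it connects to."""
    nxt = {name: 0.0 for name in CONNECTOME}
    for neuron, level in {**activation, **sensor_input}.items():
        if level >= THRESHOLD:
            for target, weight in CONNECTOME[neuron].items():
                nxt[target] += level * weight
    return nxt

# Touch the "nose" and, two ticks later, the backward-motor neuron fires,
# so a robot wired this way would reverse away from the obstacle.
state = {name: 0.0 for name in CONNECTOME}
state = step(state, {"nose_touch_sensor": 1.0})
state = step(state, {})
print(state["motor_backward"] >= THRESHOLD)   # True
```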

And I think that’s really cool.


I’m writing one blog post for every contribution to our crowdfunding we receive between now and the end of the campaign. Help support indie publishing! We’re publishing five new books on polyamory in 2015: https://www.indiegogo.com/projects/thorntree-press-three-new-polyamory-books-in-2015/x/1603977

Some thoughts on machine learning: context-based approaches

A nontrivial problem with machine learning is organization of new information and recollection of appropriate information in a given circumstance. Simple storing of information (cats are furry, balls bounce, water is wet) is relatively straightforward, and one common approach to doing this is simply to define the individual pieces of knowledge as objects which contain things (water, cats, balls) and descriptors (water is wet, water flows, water is necessary for life; cats are furry, cats meow, cats are egocentric little psychopaths).

This presents a problem with information storage and retrieval. Some information systems that have a specific function, such as expert systems that diagnose illness or identify animals, solve this problem by representing the information hierarchically as a tree, with the individual units of information at the tree’s leaves and a series of questions representing paths through the tree. For instance, an expert system that identifies animals might start with the question “is this animal a mammal?” A “yes” starts down one side of the tree, and a “no” starts down the other. At each node in the tree, another question identifies which branch to take—”Is the animal four-legged?” “Does the animal eat meat?” “Does the animal have hooves?” Each path through the tree is a series of questions that leads ultimately to a single leaf.
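Here’s a minimal sketch of that kind of decision tree in Python. The questions and species are invented for illustration; the point is the structure, with questions at the internal nodes and identifications at the leaves.

```python
# A minimal animal-identification expert system: internal nodes are yes/no
# questions, leaves are answers.  Questions and species are invented.
TREE = (
    "Is the animal a mammal?",
    (
        "Does the animal eat meat?",
        ("Does the animal have retractable claws?", "cat", "wolf"),
        ("Does the animal have hooves?", "horse", "rabbit"),
    ),
    (
        "Does the animal have feathers?",
        ("Can the animal fly?", "sparrow", "penguin"),
        "lizard",
    ),
)

def identify(node, answer):
    """Walk the tree by asking each question; answer(question) returns True/False."""
    if isinstance(node, str):        # leaf: we've reached an identification
        return node
    question, yes_branch, no_branch = node
    return identify(yes_branch if answer(question) else no_branch, answer)

# Example: answering "yes" to every question leads to "cat".
print(identify(TREE, lambda question: True))
```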

This is one of the earliest approaches to expert systems, and it’s quite successful for representing hierarchical knowledge and for performing certain tasks like identifying animals. Some of these expert systems are superior to humans at the same tasks. But the domain of cognitive tasks that can be represented by this variety of expert system is limited. Organic brains do not really seem to organize knowledge this way.

Instead, we can think of the organization of information in an organic brain as a series of individual facts that are context dependent. In this view, a “context” represents a particular domain of knowledge—how to build a model, say, or change a diaper. There may be thousands, tens of thousands, or millions of contexts a person can move within, and a particular piece of information might belong to many contexts.

What is a context?

A context might be thought of as a set of pieces of information organized into a domain in which those pieces of information are relevant to each other. Contexts may be procedural (the set of pieces of information organized into necessary steps for baking a loaf of bread), taxonomic (a set of related pieces of information arranged into a hierarchy, such as knowledge of the various birds of North America), hierarchical (the set of information necessary for diagnosing an illness), or simply related to one another experientially (the set of information we associate with “visiting grandmother at the beach”).

Contexts overlap and have fuzzy boundaries. In organic brains, even hierarchical or procedural contexts will have extensive overlap with experiential contexts—the context of “how to bake bread” will overlap with the smell of baking bread, our memories of the time we learned to bake bread, and so on. It’s probably very, very rare in an organic brain that any particular piece of information belongs to only one context.

In a machine, we might represent this by creating a structure of contexts CX (1,2,3,4,5,…n) where each piece of information is tagged with the contexts it belongs to. For instance, “water” might appear in many contexts: a context called “boating,” a context called “drinking,” a context called “wet,” a context called “transparent,” a context called “things that can kill me,” a context called “going to the beach,” and a context called “diving.” In each of these contexts, “water” may be assigned different attributes, whose relevance is assigned different weights based on the context. “Water might cause me to drown” has a low relevance in the context of “drinking” or “making bread,” and a high relevance in the context of “swimming.”
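As a rough sketch of what that might look like in code (the contexts, attributes, and weights here are all invented), each fact can carry a per-context set of weighted attributes:

```python
# Sketch of context-dependent knowledge: each fact carries attributes whose
# relevance weight depends on the active context.  All values are invented.
KNOWLEDGE = {
    "water": {
        "drinking":     {"is wet": 0.3, "necessary for life": 0.9, "can drown me": 0.05},
        "swimming":     {"is wet": 0.8, "necessary for life": 0.1, "can drown me": 0.9},
        "baking bread": {"is wet": 0.5, "necessary for life": 0.2, "can drown me": 0.01},
    },
}

def relevant_attributes(fact: str, context: str, cutoff: float = 0.5):
    """Return the attributes of a fact that matter in the given context."""
    weights = KNOWLEDGE.get(fact, {}).get(context, {})
    return [attr for attr, weight in weights.items() if weight >= cutoff]

print(relevant_attributes("water", "swimming"))   # ['is wet', 'can drown me']
print(relevant_attributes("water", "drinking"))   # ['necessary for life']
```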

In a contextually based information storage system, new knowledge is gained by taking new information and assigning it correctly to relevant contexts, or creating new contexts. Contexts themselves may be arranged as expert systems or not, depending on the nature of the context. A human doctor diagnosing illness might have, for instance, a diagnostic context that behaves similarly in some ways to the way a diagnostic expert system does; a doctor might ask a patient questions about his symptoms, and arrive at her conclusion by following the answers down the tree to a single possible diagnosis. This process might be informed by past contexts, though; if she has just seen a dozen patients with norovirus, her knowledge of those past diagnoses, her understanding of how contagious norovirus is, and her observation of the similarity of this new patient’s symptoms to those previous patients’ symptoms might allow her to bypass a large part of the decision tree. Indeed, it is possible that a great deal of what we call “intuition” is actually the ability to make observations and use heuristics that allow us to bypass parts of an expert system tree and arrive at a leaf very quickly.

But not all types of cognitive tasks can be represented as traditional expert systems. Tasks that require things like creativity, for example, might not be well represented by highly static decision trees.

When we navigate the world around us, we’re called on to perform large numbers of cognitive tasks seamlessly and to be able to switch between them effortlessly. A large part of this process might be thought of as context switching. A context represents a domain of knowledge and information—how to drive a car or prepare a meal—and organic brains show a remarkable flexibility in changing contexts. Even in the course of a conversation over a dinner table, we might change contexts dozens of times.

A flexible machine learning system needs to be able to switch contexts easily as well, and deal with context changes resiliently. Consider a dinner conversation that moves from art history to the destruction of Pompeii to a vacation that involved climbing mountains in Hawaii to a grandparent who lived on the beach. Each of these represents a different context, but the changes between contexts aren’t arbitrary. If we follow the normal course of conversations, there are usually trains of thought that lead from one subject to the next; and these trains of thought might be represented as information stored in multiple contexts. Art history and Pompeii are two contexts that share specific pieces of information (famous paintings) in common. Pompeii and Hawaii are contexts that share volcanoes in common. Understanding the organization of individual pieces of information into different contexts is vital to understanding the shifts in an ordinary human conversation; where we lack information—for example, if we don’t know that Pompeii was destroyed by a volcano—the conversation appears arbitrary and unconnected.

There is a danger in a system being too prone to context shifts; it meanders endlessly, unable to stay on a particular cognitive task. A system that changes contexts only with difficulty, on the other hand, appears rigid, even stubborn. We might represent focus, then, in terms of how strongly (or not) we cling to whatever context we’re in. Dustin Hoffman’s character in Rain Man possessed a cognitive system that clung very tightly to the context he was in!
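One toy way to picture that trade-off in code: score each candidate context by how well it overlaps the incoming cues, and only switch when the best candidate beats the current context by more than that context’s “stickiness.” The contexts, cues, and numbers below are invented.

```python
# Toy context switcher: stay in the current context unless another context
# overlaps the incoming cues strongly enough to overcome its stickiness.
CONTEXTS = {
    "driving":      {"cues": {"road", "traffic", "steering"}, "stickiness": 3.0},
    "conversation": {"cues": {"question", "reply", "topic"},  "stickiness": 1.0},
    "cooking":      {"cues": {"recipe", "oven", "flour"},     "stickiness": 1.5},
}

def next_context(current: str, cues: set) -> str:
    """Switch only if a candidate's overlap beats the current context's
    overlap plus its stickiness; otherwise stay focused."""
    scores = {name: len(spec["cues"] & cues) for name, spec in CONTEXTS.items()}
    best = max(scores, key=scores.get)
    if best != current and scores[best] > scores[current] + CONTEXTS[current]["stickiness"]:
        return best
    return current

print(next_context("driving", {"question", "reply"}))             # stays "driving"
print(next_context("conversation", {"recipe", "oven", "flour"}))  # switches to "cooking"
```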

Other properties of organic brains and human knowledge might also be represented in terms of information organized into contexts. Creativity is the ability to find connections between pieces of information that normally exist in different contexts, and to find commonalities of contextual overlap between them. Perception is the ability to assign new information to relevant contexts easily.

Representing contexts in a machine learning system is a nontrivial challenge. It is difficult, to begin with, to determine how many contexts might exist. As a machine entity gains new information and learns to perform new cognitive tasks, the number of contexts in which it can operate might increase indefinitely, and the system must be able to assign old information to new contexts as it encounters them. If we think of each new task we might want the machine learning system to be able to perform as a context, we need to devise mechanisms by which old information can be assigned to these new contexts.

Organic brains, of course, don’t represent information the way computers do. Organic brains represent information as neural traces—specific activation pathways among collections of neurons.

These pathways become biased toward activation when we are in situations similar to those where they were first formed, or similar to situations in which they have been previously activated. For example, when we talk about Pompeii, if we’re aware that it was destroyed by a volcano, other pathways pertaining to our experiences with or understanding of volcanoes become biased toward activation—and so, for example, our vacation climbing the volcanoes in Hawaii comes to mind. When others share these same pieces of information, their pathways similarly become biased toward activation, and so they can follow the transition from talking about Pompeii to talking about Hawaii.

This method of encoding and recalling information makes organic brains very good at tasks like pattern recognition and associating new information with old information. In the process of recalling memories or performing tasks, we also rewrite those memories, so the process of assigning old information to new contexts is transparent and seamless. (A downside of this approach is information reliability; the more often we access a particular memory, the more often we rewrite it, so paradoxically, the memories we recall most often tend to be the least reliable.)

Machine learning systems need a system for tagging individual units of information with contexts. This becomes complex from an implementation perspective when we recall that simply storing a bit of information with descriptors (such as water is wet, water is necessary for life, and so on) is not sufficient; each of those descriptors has a value that changes depending on context. Representing contexts as a simple array CX (1,2,3,4,…n) and assigning individual facts to contexts (water belongs to contexts 2, 17, 43, 156, 287, and 344) is not sufficient. The properties associated with water will have different weights—different relevancies—depending on the context.

Machine learning systems also need a mechanism for recognizing contexts (it would not do for a general purpose machine learning system to respond to a fire alarm by beginning to bake bread) and for following changes in context without becoming confused. Additionally, contexts themselves are hierarchical; if a person is driving a car, that cognitive task will tend to override other cognitive tasks, like preparing notes for a lecture. Attempting to switch contexts in the middle of driving can be problematic. Some contexts, therefore, are more “sticky” than others, more resistant to switching out of.

A context-based machine learning system, then, must be able to recognize context and prioritize contexts. Context recognition is itself a nontrivial problem, based on recognition of input the system is provided with, assignment of that input to contexts, and seeking the most relevant context (which may in most situations be the context with greatest overlap with all the relevant input). Assigning some cognitive tasks, such as diagnosing an illness, to a context is easy; assigning other tasks, such as natural language recognition, processing, and generation in a conversation, to a context is more difficult to do. (We can view engaging in natural conversation as one context, with the topics of the conversation belonging to sub-contexts. This is a different approach from the one taken by many machine conversation systems, such as Markov chains, which can be viewed as memoryless state machines. Each state, which may correspond for example to a word being generated in a sentence, can be represented by S(n), and the transition from S(n) to S(n+1) is completely independent of S(n-1); previous parts of the conversation are not relevant to future parts. This creates limitations, as human conversations do not progress this way; previous parts of a conversation may influence future parts.)
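For contrast, here’s a minimal sketch of the memoryless Markov approach described above. The training sentence is invented; the point is simply that each generated word depends only on the word immediately before it, never on anything earlier in the “conversation.”

```python
import random
from collections import defaultdict

def train(text: str):
    """Build a first-order Markov model: word -> list of possible next words."""
    words = text.split()
    model = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)
    return model

def generate(model, start: str, length: int = 10) -> str:
    """Generate text where each word depends only on the previous word;
    nothing earlier in the output is ever consulted."""
    out = [start]
    for _ in range(length):
        choices = model.get(out[-1])
        if not choices:
            break
        out.append(random.choice(choices))
    return " ".join(out)

model = train("the volcano destroyed pompeii and the volcano in hawaii erupted")
print(generate(model, "the"))
```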

Context seems to be an important part of flexibility in cognitive tasks, and thinking of information in terms not just of object/descriptor or decision trees but also in terms of context may be an important part of the next generation of machine learning systems.

Sex tech: Update on the dildo you can feel

A few months back, I wrote a blog post about a brain hack that might create a dildo the wearer can actually feel. The idea came to me in the shower. I’d been thinking about the brain’s plasticity, and about how it might be possible to trick the brain into internalizing a somatosensory perception that a strap-on dildo is a real part of the body, by using sensors along the dildo connected to tiny electrical stimulation pads worn inside the vagina.

It’s an interesting idea, I think. So I blogged about it. I didn’t expect the response I got.

I’ve received a bunch of emails about it, and had a bunch of people tell me “OMG this is the most amazing thing ever! Make it happen!”

So I have, between work on getting the book More Than Two out the door and preparing for the book tour, been chugging away at this idea. Here’s an update:

1. I’ve filed for a patent on the idea. I’ve received confirmation that the application has been accepted and the process is started.

2. I’ve talked to an electronics prototyping firm about developing a prototype. Based on feedback from the prototyping firm, I’ve modified the initial design extensively. The first version I’d thought about was based on the same principle as the Feeldoe; the redesign uses a separate dildo and harness, with an external computer to receive signals from the sensors in the dildo and transmit them to the vaginal insert. The new design looks, and works, something like this. (Apologies for the horrible animated GIF; art isn’t really my specialty.)

3. The prototyping firm has outlined a multi-step process to develop a workable, manufacturable device. The process would go something like:

Phase 1: Research and proof of concept. This would include researching designs for the sensors on the dildo and the electrodes on the vaginal insert. It would also include a crude proof-of-concept device that would essentially be nothing more than the vaginal insert connected to a computer programmed to simulate the rest of the device.

The intent at this stage is to see if the idea is even workable. What kind of electrodes could be used? Would they produce the right kind of stimulation? How densely arranged could they be? How small could they be? Would the brain actually be able to interpret sensations produced by the electrodes in a way that would trick the wearer into thinking the dildo was a part of the body? If so, how long would that somatosensory rewiring take?

Phase 2: Assuming the initial research showed the idea to be viable, the next step would be to figure out a sensor design, fabricate a microcontroller to connect the sensors to the electrodes, and experiment with sensor design and fabrication. Would a single sensor provide adequate range of tactile feedback, or would it be necessary to multiplex several sensors (some designed to respond to light touch, others to a heavier touch) together in order to provide a good dynamic range? What mechanical properties would the sensors need to have? How would they be built? (We talked about several potential designs, including piezoelectric, resistive polymer, and fluid-filled devices.) How would the sensors be placed along the dildo?

Phase 3: Once a working prototype is developed, the next step is detail design and engineering. This is essentially the process of taking a working prototype and producing a manufacturable product from it. This includes everything from engineering drawings for fabrication to choosing materials to developing the final version of the software.

So. That’s where the project is right now.

The up side? I think this thing could actually work. The down side? It’s going to be expensive.

I have already started investigating ways to make it happen. If we incorporate in Canada, we may be eligible for Canadian financial incentives designed to spur tech research and development.

The fabricating company seems to think the first phase would most likely cost somewhere around $5,000-10,000. Depending on what’s learned during that phase, the development of a fully functional prototype might run anywhere from $50,000 to $100,000, a lot of which hinges on design of the sensors, which will likely be the most challenging bit of engineering. They didn’t even want to speculate about the cost of going from working prototype to manufacturable product; too many unknowns.

I’m discussing the possibility of doing crowdfunding to get from phase 2 to 3, and possibly from phase 1 to 2. It’s not likely that crowdfunding is appropriate for the first phase, because I won’t have anything tangible to offer backers. Indeed, it’s possible that I might spend the initial money and discover the idea isn’t workable.

Ideally, I’d like to find people who think this idea is worth investigating who can afford to invest in the first phase. If you know anybody who might be interested in this project, let me know!

Also, one of the people at the prototyping company suggested the name “Hapdick.” I’m still not sure how I feel about that, but I do have to admit it’s clever.

Want to keep up with developments? Here’s a handy list of blog posts about it:
First post
Update 1
Update 2
Update 3
Update 4
Update 5
Update 6
Update 7
Update 8
Update 9

Sex Tech: Adopting the Brain’s Plasticity

Some while ago, I read an article about a gizmo made of a black and white video camera attached to a grid of electrodes. The idea is that you wear the electrodes on your tongue. Images from the video camera are converted into patterns of electric signals on the electrode, so you “see”–with your tongue–what the camera sees.

Early users of the prototype gizmo would wear a blindfold and then try to navigate around just by the electrical impulses on their tongues. What’s most interesting is not only were they able to do this, but they reported that, after a while, their memories were not of sensations on their tongues, but of seeing a fuzzy, black and white image.

The brain is wonderfully plastic, able to interpret new kinds of sensory input in amazing ways. It can rewire itself to accommodate the new input; in fact, the tongue-electrode thing is being commercialized as a device for the blind.

As I always do, when I first heard about this, I naturally thought “how can this be used for sex?” And I think it has fantastic potential.


Imagine, if you will, a wearable dildo, rather like the Feeldoe, that’s designed to have one end inserted in the vagina. Only imagine that we take the same kind of electrodes used in the tongue-camera device, and send signals to the electrodes not from a video camera, but from small touch sensitive sensors mounted just below the skin of the dildo.

These sensors would be mapped onto the electrodes so that when something touches the sensor, you’d feel a corresponding signal from the corresponding electrode.

I’m not an artist, but I made a couple of crude animations to illustrate the idea:

What would happen?

I believe that after a period of adjustment, this dildo would be incorporated into the brain’s somatosensory perception. The brain would, in essence, modify its model of the body to accommodate the dildo–it would, rather quickly I suspect, cease to be perceived as a thing and become perceived as a part of the body. Stimulation of the dildo would begin to feel like stimulation of yourself.

And isn’t that an interesting idea.

The neural density in the walls of the vagina isn’t as great as the neural density of the tongue. I don’t think that’s a problem, though; the neural density of the shaft of the penis isn’t as great, either.

One potentially interesting twist on this notion is to map the most sensitive part of the penis, the underside just below the glans, onto the most sensitive part of the body–the clitoris. The sensors along the shaft would map onto electrodes in the bulb worn inside the vagina, except for the sensor just below the glans, which would map onto the clitoris–mirroring the sensitivity of a natural penis.

Another potentially interesting thing to do is to make the sensors on the dildo pressure sensitive, with firmer touches creating stronger impulses from the electrodes.

Now, there’s a lot of experimentation between this idea and a real device. I don’t know the neural density in the walls of the vagina, but it would impose a limit on how many electrodes could be placed on the dildo. Would there be sufficient density to be able to create a fine tactile sense? I think the answer is probably “yes,” but I’m not sure.

I’m also not sure how much processing would be required. I’m guessing not much; certainly much less than is required with the vision sense. The tongue-vision thing is trying to do something far more complicated; it’s trying to register sufficient information to allow you to navigate a three-dimensional world. A circle seen by the camera might be a lollipop right in front of your face or a billboard far away; because the tongue has no way to represent stereo imagery, there’s no way to tell. So the processor has to allow the operator to be able to zoom in and out, to give the user a sense of how far away things might be. It has to be able to adjust to different lighting conditions.

The dildo, by way of contrast, merely has to respond to physical touch, which maps much more easily onto the array of electrodes. It’s pretty straightforward; if something’s not touching a particular sensor, its electrode isn’t producing a signal. The amount of processing might be low enough to allow the processor to be housed inside the dildo, making the device compact, and not requiring it to be tethered to any electronics.
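To give a sense of just how little processing that might mean, here’s a hypothetical sketch of the core loop: read each pressure sensor, scale its reading, and drive the matching electrode. The channel count, bit depths, and I/O functions are placeholders, not a real design.

```python
# Hypothetical sketch of the core loop: each sensor maps one-to-one onto an
# electrode, so the "processing" is just read -> scale -> write.  The channel
# count, bit depths, and I/O functions are all placeholders, not a real design.
NUM_CHANNELS = 8           # one electrode per sensor (placeholder count)
SENSOR_MAX = 1023          # pretend 10-bit ADC reading
PULSE_MAX = 255            # pretend 8-bit stimulation intensity

def read_sensor(channel: int) -> int:
    """Stand-in for a real ADC read; returns a fake pressure value."""
    return (channel * 100) % (SENSOR_MAX + 1)

def drive_electrode(channel: int, intensity: int) -> None:
    """Stand-in for the real stimulation output."""
    print(f"electrode {channel}: intensity {intensity}")

def update_once() -> None:
    """One pass: sensor N's pressure directly sets electrode N's intensity.
    No zooming, no scene interpretation, no cross-channel computation."""
    for channel in range(NUM_CHANNELS):
        pressure = read_sensor(channel)
        drive_electrode(channel, pressure * PULSE_MAX // SENSOR_MAX)

update_once()
```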

I think this thing could be hella fun. It would allow people born with vaginas to have a remarkably good impression of what it’s like to be born with a penis.

In a world where I had infinite free time, I’d put together a crowdfunding campaign to try to build a working prototype. Even without infinite time, I’m considering doing this. Thoughts? Opinions?

Want to keep up with developments? Here’s a handy list of blog posts about it:
First post
Update 1
Update 2
Update 3

Some (More) Thoughts on Brain Modeling and the Coming Geek Rapture

The notion of “uploading”–analyzing a person’s brain and then modeling it, neuron by neuron, in a computer, thereby forever preserving that person’s knowledge and consciousness–is a fixture of transhumanist thought. In fact, self-described “futurists” like Ray Kurzweil will gladly expound at great length about how uploading and machine consciousness are right around the corner, and Any Day Now we will be able to live forever by copying ourselves into virtual worlds.

I’ve written extensively before about why I think that’s overly optimistic, and why Ray Kurzweil pisses me off. Our understanding of the brain is still remarkably poor–for example, we’re only just now learning how brain cells called “glial cells” are involved in the process of cognition–and even when we do understand the brain on a much deeper level, the tools for being able to map the connections between the cells in the brain are still a long way off.

In that particular post, I wrote that I still think brain modeling will happen; it’s just a long way off.

Now, however, I’m not sure it will ever happen at all.


I love cats.

Many people love cats, but I really love cats. It’s hard for me to see a cat when I’m out for a walk without wanting to make friends with it.

It’s possible that some of my love of cats isn’t an intrinsic part of my personality, in the sense that my personality may have been modified by a parasite commonly found in cats.

This is the parasite, in a color-enhanced scanning electron micrograph. Pretty, isn’t it? It’s called Toxoplasma gondii. It’s a single-celled organism that lives its life in two stages, growing to maturity inside the bodies of rats, and reproducing in the bodies of cats.

When a rat is infected, usually by coming into contact with cat droppings, the parasite grows but doesn’t reproduce. Its reproduction can only happen in a cat, which becomes infected when it eats an infected rat.

To help ensure its own survival, the parasite does something amazing. It controls the rat’s mind, exerting subtle changes to make the rat unafraid of cats. Healthy rats are terrified of cats; if they smell any sign of a cat, even a cat’s urine, they will leave an area and not come back. Infected rats lose that fear, which serves the parasite’s needs by making it more likely the rat will be eaten by a cat.

Humans can be infected by Toxoplasma gondii, but we’re a dead end for the parasite; it can’t reproduce in us.

It can, however, still work its mind-controlling magic. Infected humans show a range of behavioral changes, including becoming more generous and less bound by social mores and customs. They also appear to develop an affinity for cats.

There is a strong likelihood that I am a Toxoplasma gondii carrier. My parents have always owned cats, including outdoor cats quite likely to have been exposed to infected rats. So it is quite likely that my love for cats, and other, more subtle aspects of my personality (bunny ears, anyone?), have been shaped by the parasite.

So, here’s the first question: If some magical technology existed that could read the connections between all of my brain cells and copy them into a computer, would the resulting model act like me? If the model didn’t include the effects of Toxoplasma gondii infection, how different would that model be from who I am? Could you model me without modeling my parasites?


It gets worse.

The brain models we’ve built to date are all constructed from generic building blocks. We model neurons as though they are variations on a common theme, responding pretty much the same way. These models assume that the neurons in Alex’s head behave pretty much the same way as the neurons in Bill’s head.

To some extent, that’s true. But we’re learning that there can be subtle genetic differences in the way that neurons respond to different neurotransmitters, and these subtle differences can have very large effects on personality and behavior.

Consider this protein. It’s a model of a protein called AVPR-1a, which is used in brain cells as a receptor for the neurotransmitter called vasopressin.

Vasopressin serves a wide variety of different functions. In the body, it regulates water retention and blood pressure. In the brain, it regulates pair-bonding, stress, aggression, and social interaction.

A growing body of research shows that human beings naturally carry slightly different forms of the gene that produce this particular receptor, and that these tiny genetic differences result in tiny structural differences in the receptor which produce quite significant differences in behavior. For example, one subtle difference in the gene that produces this receptor changes the way that men bond to partners after sex; carriers of this particular genetic variation are less likely to experience intense pair-bonding, less likely to marry, and more likely to divorce if they do marry.

A different variation in this same gene produces a different AVPR-1a receptor that is strongly linked to altruistic behavior; people with that particular variant are far more likely to be generous and altruistic, and the amount of altruism varies directly with the number of copies of a particular nucleotide sequence within the gene.

So let’s say that we model a brain, and the model we use is built around a statistical computation for brain activation based on the most common form of the AVPR-1a gene. If we model the brain of a person with a different form of this gene, will the model really represent her? Will it behave the way she does?

The evidence suggests that, no, it won’t. Because subtle genetic variations can have significant behavioral consequences, it is not sufficient to upload a person using a generic model. We have to extend the model all the way down to the molecular level, modeling tiny variations in a person’s receptor molecules, if we wish to truly upload a person into a computer.

And that gives rise to a whole new layer of thorny moral issues.


There is a growing body of evidence that suggests that autism spectrum disorders are the result of genetic differences in neuron receptors, too. The same PDF I linked to above cites several studies that show a strong connection between various autism-spectrum disorders and differences in receptors for another neurotransmitter, oxytocin.

Vasopressin and oxytocin work together in complex ways to regulate social behavior. Subtle changes in production, uptake, and response to either or both can produce large, high-level changes in behavior, and specifically in interpersonal behavior–arguably a significant part of what we call a person’s “personality.”

So let’s assume a magic brain-scanning device able to read a person’s brain state and a magic computer able to model a person’s brain. Let’s say that we put a person with Asperger’s or full-blown autism under our magic scanner.

What do we do? Do we build the model with “normal” vasopressin and oxytocin receptors, thereby producing a model that doesn’t exhibit autism-spectrum behavior? If we do that, have we actually modeled that person, or have we created an entirely new entity that is some facsimile of what that person might be like without autism? Is that the same person? Do we have a moral imperative to model a person being uploaded as closely as possible, or is it more moral to “cure” the autism in the model?


In the previous essay, I outlined why I think we’re still a very long ways away from modeling a person in a computer–we lack the in-depth understanding of how the glial cells in the brain influence behavior and cognition, we lack the tools to be able to analyze and quantify the trillions of interconnections between neurons, and we lack the computational horsepower to be able to run such a simulation even if we could build it.

Those are technical objections. The issue of modeling a person all the way down to the level of genetic variation in neurotransmitter and receptor function, however, is something else.

Assuming we overcome the limitations of the first round of problems, we’re still left with the fact that there’s a lot more going on in the brain than generic, interchangeable neurons behaving in predictable ways. To actually copy a person, we need to be able to account for genetic differences in the structure of receptors in the brain…

…and even if we do that, we still haven’t accounted for the fact that organisms like Toxoplasma gondii can and do change the behavior of the brain to suit their own ends. (I would argue that a model of me that was faithful clear down to the molecular level probably wouldn’t be a very good copy if it didn’t include the effects that the parasite has had on my personality–effects that we still have no way to quantify.)

Sorry, Mr. Kurzweil, we’re not there yet, and we’re not likely to be any time soon. Modeling a specific person’s brain in a computer is orders of magnitude harder than you think it is. At this point, I can’t even say with certainty that I think it will ever happen.

Why We’re All Idiots: Credulity, Framing, and the Entrenchment Effect

The United States is unusual among First World nations in the sense that we only have two political parties.

Well, technically, I suppose we have more, but only two that matter: Democrats and Republicans. They are popularly portrayed in American mass media as “liberals” and “conservatives,” though that’s not really true; in world terms, they’re actually “moderate conservatives” and “reactionaries.” A serious liberal political party doesn’t exist; when you compare the Democratic and Republican parties, you see a lot of across-the-board agreement on things like drug prohibition (both parties largely agree that recreational drug use should be outlawed), the use of American military might abroad, and so on.

A lot of folks mistakenly believe that this means there are no real differences between the two parties. This is nonsense, of course; there are significant differences, primarily in areas like religion (where the Democrats would, on a European scale, be called “conservatives” and the Republicans would be called “radicals”); social issues like sex and relationships (where the Democrats tend to be moderates and the Republicans tend to be far right); and economic policy (where Democrats tend to be center-right and Republicans tend to be so far right they can’t tie their left shoe).

Wherever you find people talking about politics, you find people calling the members of the opposing side “idiots.” Each side believes the other to be made up of morons and fools…and, to be fair, each side is right. We’re all idiots, and there are powerful psychological factors that make us idiots.


The fact that we think of Democrats as “liberal” and Republicans as “conservative” illustrates one area where Republicans are quite different from Democrats: their ability to frame issues.

The American political landscape for the last three years has been dominated by a great deal of shouting and screaming over health care reform.

And the sentence you just read shows how important framing is. Because, you see, we haven’t actually been discussing health care reform at all.

Despite all the screaming, and all the blogging, and all the hysterical foaming on talk radio, and all the arguments online, almost nobody has actually read the legislation that, after much wailing and gnashing of teeth, was signed into law by President Obama.

And if you do read it, there’s one thing about it that may jump to your attention: It isn’t about health care at all. It barely even talks about health care per se. It’s actually about health insurance. It provides a new framework for health insurance legislation, it restricts health insurance companies’ ability to deny coverage on the basis of pre-existing conditions, it seeks to make insurance more portable…in short, it is health insurance reform, not health care reform. The fact that everyone is talking about health care reform is a tribute to the power of framing.


In any discussion, the person who controls how the issue at question is shaped controls the debate. Control the framing and you can control how people think about it.

Talking about health care reform rather than health insurance reform leads to an image in people’s minds of the government going into a hospital operating room or a doctor’s exam room and telling the doctor what to do. Talking about health insurance reform gives rise to mental images of government beancounters arguing with health insurance beancounters about the proper way to notate an exemption to the requirements for filing a release of benefits form–a much less emotionally compelling image.

Simply by re-casting “health insurance reform” as “health care reform,” the Republicans created the emotional landscape on which the war would be fought. Middle-class working Americans would not swarm to the defense of the insurance industry and its über-rich executives. Recast it as government involvement between a doctor and a patient, however, and the tone changed.

Framing matters. Because people, by and large, vote their identity rather than their interests, if you can frame an issue in a way that appeals to a person’s sense of self, you can often get him to agree with you even if by agreeing with you he does harm to himself.

I know a woman who is an atheist, non-monogamous, bisexual single mom who supports gay marriage. In short, she hits just about every ticky-box in the list of things that “family values” Republicans hate. The current crop of Republican political candidates, all of them, have at one point or another voiced their opposition to each one of these things.

Yet she only votes Republican. Why? Because she says she believes, as the Republicans believe, that poor people should just get jobs instead of lazing about watching TV and sucking off hardworking taxpayers’ labor.

That’s the way we frame poverty in this country: poor people are poor because they are just too lazy to get a fucking job already.

That framing is extraordinarily powerful. It doesn’t matter that it has nothing to do with reality. According to the US Census Bureau, as of December 2011 46,200,000 Americans (or 15.1% of the total population) live in poverty. According to the US Department of Labor, 11.7% of the total US population had employment but were still poor. In other words, the vast majority of poor people have jobs–especially when you consider that some of the people included in the Census Bureau’s statistics are children, and therefore not part of the labor force.

Framing the issue of poverty as “lazy people who won’t get a job” helps deflect attention away from the real causes of poverty, and also serves as a technique to manipulate people into supporting positions and policies that act against their own interests.

But framing only works if you do it at the start. Revealing how someone has misleadingly framed a discussion after it has begun is not effective at changing people’s minds because of a cognitive bias called the entrenchment effect.


A recurring image in US politics is the notion of the “welfare queen”–a hypothetical person, invariably black, who becomes wealthy by living on government subsidies. The popular notion has this black woman driving around the low-rent neighborhood in a Cadillac, which she bought by having dozens and dozens of babies so that she could receive welfare checks for each one.

The notion largely traces back to Ronald Reagan, who during his campaign in 1976 talked over and over (and over and over and over and over) about a woman in Chicago who used various aliases to get rich by scamming huge amounts of welfare payments from the government.

The problem is, the woman Reagan described didn't exist; the real case his story seems to have been loosely based on was wildly exaggerated in the telling. The notion of a "welfare queen" doesn't even make sense: having a lot of children but subsisting only on welfare doesn't increase your standard of living, it lowers it. The extra benefits given to families with children do not entirely offset the costs of raising children.

Leaving aside the overt racism in the notion of the “welfare queen” (most welfare recipients are white, not black), a person who thinks of welfare recipients this way probably won’t change his mind no matter what the facts are. We all like to believe ourselves to be rational; we believe we have adopted our ideas because we’ve considered the available information rationally, and that if evidence that contradicts our ideas is presented, we will evaluate it rationally. But nothing could be further from the truth.

In studies conducted around 2006, political scientists Brendan Nyhan and Jason Reifler showed subjects mock news articles containing a politically charged misstatement. Some of the subjects then got a clear, factual correction showing that the claim was false.

The result: among the subjects most invested in the original belief, the correction backfired. They became even more convinced that the false claim was true; the stronger the evidence, the more insistently they clung to it.

This effect, now referred to as the "entrenchment effect" or the "backfire effect," is remarkably common. A person who holds a belief, when shown hard physical evidence that the belief is false, often comes away with an even stronger belief that it is true. The stronger the evidence, the more firmly the person holds on.

The entrenchment effect is a form of “motivated reasoning.” Generally speaking, what happens is that a person who is confronted with a piece of evidence showing that his beliefs are wrong will respond by mentally going through all the reasons he started holding that belief in the first place. The stronger the evidence, the more the person repeats his original line of reasoning. The more the person rehearses the original reasoning that led him to the incorrect belief, the more he believes it to be true.

This is especially true if the belief has some emotional vibrancy. There is a part of the brain called the amygdala which is, among other things, a kind of "emotional memory center." That's a bit oversimplified, but essentially true; when you recall a memory that has an emotional charge, the amygdala mediates your recall of the emotion that goes along with the memory, and you feel that emotion again. When you rehearse the reasons you first subscribed to your belief, you re-experience those emotions–reinforcing the belief and making it feel still more compelling.

This isn’t just a right/left thing, either.

Say, for example, you’re afraid of nuclear power. A lot of people, particularly self-identified liberals, are. If you are presented with evidence that shows that nuclear power, in terms of human deaths per terawatt-hour of power produced, is by far the safest of all forms of power generation, it is unlikely to change your mind about the dangers of nuclear power one bit.

The most dangerous form of power generation is coal. In addition to killing tens of thousands of people a year, mostly through air pollution, coal also releases quite a lot of radioactivity into the environment. Coal beds contain trace amounts of naturally occurring radioactive elements, chiefly uranium and thorium and their decay products. Some of that radioactivity, such as radon gas, goes straight out the smokestack; the rest ends up concentrated in the ash. Coal plants consume so much coal–huge freight trains of it–that the fly ash left over from burning those millions of tons delivers more radiation to the surrounding population than a normally operating nuclear plant producing the same amount of power. So many people die directly or indirectly as a result of coal-fired power generation that if we had a Chernobyl-sized meltdown every four years, it would STILL kill fewer people than coal.
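
That last claim is easy to sanity-check with round numbers. Here's a quick back-of-the-envelope calculation in Python; the Chernobyl figure is the WHO's commonly cited estimate of eventual deaths, and the coal figure is a deliberately low reading of "tens of thousands a year." Both are illustrative assumptions on my part, not numbers from any formal study:

```python
# Rough sanity check, using round numbers (assumptions, not measured data).
chernobyl_deaths_per_event = 4_000   # assumed WHO-style estimate of eventual deaths
years_between_meltdowns = 4          # the hypothetical in the paragraph above
coal_deaths_per_year = 20_000        # conservative reading of "tens of thousands"

meltdown_deaths_per_year = chernobyl_deaths_per_event / years_between_meltdowns
print(f"Meltdown every {years_between_meltdowns} years: ~{meltdown_deaths_per_year:,.0f} deaths/year")
print(f"Coal, low-end estimate: ~{coal_deaths_per_year:,} deaths/year")
print(f"Coal comes out roughly {coal_deaths_per_year / meltdown_deaths_per_year:.0f}x worse")
```

Even if you triple or quadruple the Chernobyl number, coal still wins by a wide margin.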

If you’re afraid of nuclear power, that argument didn’t make a dent in your beliefs. You mentally went back over the reasons you’re afraid of nuclear power, and your amygdala reactivated your fear…which in turn prevented you from seriously considering the idea that nuclear might not be as dangerous as you feel it is.

If you're afraid of socialism, then arguments about health reform won't affect you. It won't matter to you that health care reform is actually health insurance reform, or that the supposedly "liberal" health care reform law is built largely out of Republican ideas (many of the insurance reforms in the Federal package are modeled on the Massachusetts law signed by none other than Mitt Romney; the provisions expanding health coverage for children were championed by Senator Orrin Hatch (R-Utah); and the expansion of the Medicare drug program was shepherded through the House under Speaker Dennis Hastert (R-Illinois)), or that it's about as socialist as Goldman Sachs (the law does not nationalize hospitals, make doctors into government employees, or in any other way socialize the health care infrastructure). You will see this information, you will think about the things that originally led you to see the Republican-flavored health-insurance reform law as "socialized Obamacare," and you'll remember your emotional reaction while you do it.

Same goes for just about any argument with an emotional component–gun control, abortion, you name it.

This is why folks on both sides of the political divide think of one another as "idiots." That person who opposes nuclear power? Obviously an idiot; only an idiot could so blindly ignore hard, solid evidence about the safety of nuclear power compared to any other form of power generation. Those people who hate Obamacare? Clearly they're morons; how else could they cling to such nonsense as the idea that it was written by Democrats for the purpose of socializing medicine?

Clever framing allows us to be led to beliefs that we would otherwise not hold; once there, the entrenchment effect keeps us there. In that way, we are all idiots. Yes, even me. And you.

Transhumanism, Technology, and the da Vinci Effect

[Note: There is a followup to this essay here]

Ray Kurzweil pisses me off.

His name came up last night at Science Pub, a regular event, hosted by a friend of mine, that brings guest speakers on a wide range of science- and technology-related topics to talk in front of an audience at a large pub. There's beer and pizza and really smart scientists talking about things they're really passionate about, and if you live in Portland, Oregon (or Eugene or Hillsboro; my friend is branching out), I can't recommend them enough.

Before I can talk about why Ray Kurzweil pisses me off–or, more precisely, before I can talk about some of the reasons Ray Kurzweil pisses me off, as an exhaustive list would most surely strain my patience to write and your patience to read–it is first necessary to talk about what I call the “da Vinci effect.”


Leonardo da Vinci is, in my opinion, one of the greatest human beings who has ever lived. He embodies the best in our desire to learn; he was interested in painting and sculpture and anatomy and engineering and just about every other thing worth knowing about, and he took time off of creating some of the most incredible works of art the human species has yet created to invent the helicopter, the armored personnel carrier, the barrel spring, the Gatling gun, and the automated artillery fuze…pausing along the way to record innovations in geography, hydraulics, music, and a whole lot of other stuff.

However, most of his inventions, while sound in principle, were crippled by the fact that he could not conceive of any power source other than muscle power. The steam engine was still more than two and a half centuries away; the internal combustion engine, another half-century or so after that.

da Vinci had the ability to anticipate the broad outlines of some really amazing things, but he could not build them, because he lacked one essential element whose design and operation were way beyond him or the society he lived in, both in theory and in practice.

I tend to call this the “da Vinci effect”–the ability to see how something might be possible, but to be missing one key component that’s so far ahead of the technology of the day that it’s not possible even to hypothesize, except perhaps in broad, general terms, how it might work, and not possible even to anticipate with any kind of accuracy how long it might take before the thing becomes reachable.


Charles Babbage's computing engines–the Difference Engine and, especially, the programmable Analytical Engine–are another example of an idea whose realization was held back by the da Vinci effect.

Babbage reasoned–quite accurately–that it was possible to build a machine capable of mathematical computation. He also reasoned that it would be possible to construct such a machine in such a way that it could be fed a program–a sequence of logical steps, each representing some operation to carry out–and that on the conclusion of such a program, the machine would have solved a problem. This last bit, the design he called the Analytical Engine, differentiated his conception of a computational engine from devices (such as the Antikythera mechanism) that were built to solve one particular problem and could not be programmed.
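
To make that concrete: the Difference Engine was built to grind through the "method of differences," which tabulates a polynomial using nothing but repeated addition–exactly the kind of fixed sequence of operations you can embody in gear wheels. Here's a minimal sketch in Python (the engine itself, of course, was decimal gears and a hand crank, not code):

```python
# Tabulate f(x) = 2x^2 + 3x + 5 for x = 0..9 using only addition,
# the way Babbage's Difference Engine would have.
f  = 5   # f(0)
d1 = 5   # first difference: f(1) - f(0)
d2 = 4   # second difference: constant for any quadratic

for x in range(10):
    print(x, f)
    f += d1    # add the running first difference to get the next value
    d1 += d2   # add the constant second difference to update the first
```

The Analytical Engine went further: instead of one fixed routine like this baked into the hardware, the sequence of operations would be supplied on punched cards.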

The manufacturing technology of the time–in particular, the precision machining of thousands of nearly identical metal parts–meant that his designs for mechanical computers were never realized in his lifetime. Today, devices operating on principles he imagined are things we use every day, but they aren't mechanical; in place of gears and levers, they use gates that control the flow of electrons–something he could never have envisioned given the understanding of his time.


One of the speakers at last night’s Science Pub was Dr. Larry Sherman, a neurobiologist and musician who runs a research lab here in Oregon that’s currently doing a lot of cutting-edge work in neurobiology. He’s one of my heroes1; I’ve seen him present several times now, and he’s a fantastic speaker.

Now, when I was in school studying neurobiology, things were very simple. You had two kinds of cells in your brain: neurons, which did all the heavy lifting involved in the process of cognition and behavior, and glial cells, which provided support for the neurons, nourished them, repaired damage, and cleaned up the debris from injury or dead cells.

There are several broad classes of glial cells; the two that matter here are astrocytes and microglia. Astrocytes, shown in green in this picture, provide a physical scaffold that holds neurons (in blue) in place. They wrap around synapses and blood vessels, absorbing nutrients and oxygen from the blood and passing them on to the neurons. Microglia are cells that behave rather like little amoebas; they swim around in your brain locating dead or dying cells, pathogens, and other forms of debris, and eating them.

So that’s the background.


Ray Kurzweil is a self-styled "futurist," transhumanist, and author. He's also a Pollyanna with little real rubber-on-road understanding of the challenges that nanotechnology and biotechnology face. He talks a great deal about AI, human/machine interfaces, and uploading–the process of modeling a brain in a computer such that the computer is conscious and aware, with all the knowledge and personality of the person being modeled.

He gets a lot of it wrong, but it’s the last bit he gets really wrong. Not the general outlines, mind you, but certainly the timetable. He’s the guy who looks at da Vinci’s notebook and says “Wow, a flying machine? That’s awesome! Look how detailed these drawings are. I bet we could build one of these by next spring!”

Anyway, his name came up during the Q&A at Science Pub, and I kind of groaned. Not as much as I did when Dr. Sherman suggested that a person whose neurons had been replaced with mechanical analogues wouldn’t be a person any more, but I groaned nonetheless.

Afterward, I had a chance to talk to Dr. Sherman briefly. The conversation was short; only just long enough for him to completely blow my mind, to convince me that a lot of ideas about uploading are limited by the da Vinci effect, and to suggest that much of the brain-modeling research currently going on is (in his words) "totally wrong."


It turns out that most of what I was taught about neurobiology was utterly wrong. Our understanding of the brain has exploded in the last few decades. We’ve learned that people can and do grow new brain cells all the time, throughout their lives. And we’ve learned that the glial cells do a whole lot more than we thought they did.

Astrocytes, long believed to be nothing but scaffolding and cafeteria workers, turn out to be strongly implicated in learning and cognition. They not only support the neurons in your brain, they guide the formation of new neural connections–the process by which memory and learning work. They promote the growth of new neural pathways, and they also determine, at least to some degree, how and where those new pathways form.

In fact, human beings have more distinct types of astrocytes than other vertebrates do. According to my brief conversation with Dr. Sherman, researchers have taken human astrocytes, implanted them in developing mice, and observed an apparent increase in those mice's cognitive function, even though the neurons themselves were no different.

And, more recently, it turns out that microglia–the garbage collectors and scavengers of the brain–can influence high-order behavior as well.

The last bit is really important, and it involves hox genes.


A quick overview of hox genes. These are genes that control the expression of other genes, and that are involved in determining how an organism's body develops. You (and monkeys and mice and fruit flies and earthworms) have hox genes–pretty much the same hox genes, in fact–that encode an overall body plan. They do things like say "Ah, this bit will become a torso, so I will switch on the genes that correspond to forming arms and legs here, and switch off the genes responsible for making eyeballs or toes." Or "This bit is the head, so I will switch on the eyeball-forming genes and the mouth-forming genes, and switch off the leg-forming genes."

Mutations to hox genes generally cause gross physical abnormalities. In fruit flies, incorrect hox gene expression can cause the fly to sprout legs instead of antennae, or to grow wings from strange parts of its body. In humans, hox gene malfunctions can cause a number of really bizarre and usually fatal birth defects–growing tiny limbs out of eye sockets, that sort of thing.

And it appears that a hox gene mutation can result in obsessive-compulsive disorder.

And more bizarrely than that, this hox gene mutation affects the way microglia form.


Think about how bizarre that is for a minute. The genes responsible for regulating overall body plan can cause changes in microglia–little amoeba scavengers that roam around in the brain. And that change to those scavengers can result in gross high-level behavioral differences.

Not only are we not in Kansas any more, we’re not even on the same continent. This is absolutely not what anyone would expect, given our knowledge of the brain even twenty years ago.

Which brings us back ’round to da Vinci.


Right now, most attempts to model the brain look only at the neurons, and disregard the glial cells. Now, there's value to this. The brain is really (really really really) complex, and just developing tools able to model tens of billions of cells and hundreds of trillions of interconnections is really, really hard. We're laying the foundation, even with simple models, for the computational and informatics tools needed to handle a problem of mind-boggling scope.

But there’s still a critical bit missing. Or critical bits, really. We’re missing the computational bits that would allow us to model a system of this size and scope, or even to be able to map out such a system for the purpose of modeling it. A lot of folks blithely assume Moore’s Law will take care of that for us, but I’m not so sure. Even assuming a computer of infinite power and capability, if you want to upload a person, you still have the task of being able to read the states and connection pathways of many billions of very small cells, and I’m not convinced we even know quite what those tools look like yet.
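
To get a feel for the scale involved, here's a crude back-of-the-envelope estimate. The neuron count is the commonly cited figure of roughly 86 billion; the other numbers are assumptions I've picked purely for illustration:

```python
# Back-of-the-envelope bookkeeping for a bare-bones connectome snapshot.
# Every figure here is a rough assumption chosen for scale, not a measurement.
neurons             = 86e9    # commonly cited estimate for a human brain
synapses_per_neuron = 7_000   # assumed average; real counts vary enormously
bytes_per_synapse   = 8       # one 8-byte number per connection, nothing else

synapses = neurons * synapses_per_neuron
petabytes = synapses * bytes_per_synapse / 1e15
print(f"~{synapses:.1e} synapses")
print(f"~{petabytes:,.0f} petabytes just to store one number per synapse")
```

That's several petabytes for a single static snapshot–no geometry, no receptor types, no dynamics, and, per the point above, no glia at all.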

But on top of that, when you consider that we’re missing a big part of the picture of how cognition happens–we’re looking at only one part of the system, and the mechanism by which glial cells promote, regulate, and influence high-level cognitive tasks is astonishingly poorly understood–it becomes clear (at least to me, anyway) that uploading is something that isn’t going to happen soon.

We can, like da Vinci, sketch out the principles by which it might work. There is nothing in the laws of physics that suggests it can't be done, and in fact I believe that it absolutely can and will, eventually, be done.

But the more I look at the problem, the more it seems to me that there's a key bit missing. And I don't think we're in a position yet even to figure out what that key bit looks like, much less how it can be built. It may well be that when we do model brains, the model won't look anything like what we think of as a conventional computer–much as, when we finally built general-purpose programmable devices, they looked nothing like Babbage's engines.


1 Or would be, if it weren’t for the fact that he rejects personhood theory, which is something I’m still a bit surprised by. If I ever have the opportunity to talk with him over dinner, I want to discuss personhood theory with him, oh yes.