Fear on the Left and the Right

“If you’re conservative, you’re fearful. Socially conservative ideas are driven by fear.”

This is the conclusion of social psychology, backed by peer-reviewed, published studies and fMRI research. Neuroscientists can tell you with a high degree of probability whether a person is liberal or conservative just by looking at brain scans.[1] Conservatives tend to have a larger amygdala, which mediates threat and fear, and a smaller anterior cingulate cortex, a part of the brain responsible for resolving conflict and detecting deviances between what you expect to see and what you actually see.[2]

That’s pretty well established in the neurobiology community, but…

I would like to propose it’s oversimplified. In my experience and observation, liberals and conservatives both tend to be fearful, with political ideologies driven by fear; it’s just that conservatives are frightened of people, and liberals are frightened of things.

First, a bit of background.

The amygdala is a small structure in the brain. It’s occasionally described as a “memory center” of the brain, but that’s not really true. It regulates emotional association. If you’re near a cave, and a leopard springs out of the cave and devours your friend in front of you, your memories of that cave will be associated with fear. That’s the job (simplifying a bit) of the amygdala.

Image: RobinH at en.wikibooks from Commons, cropped and resaved in PNG format, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=5228021

PTSD is essentially the amygdala doing what it’s designed to do. If your friend gets devoured by a leopard that springs from a cave, you should be afraid of that cave. That fear has survival value. Our ancestors who weren’t afraid didn’t survive.

The amygdala in conservatives tends to be larger than that of liberals, suggesting greater propensity to recall emotional associations of memories. The notion that liberals are emotional and conservatives are rational is not supported by science; reality seems to be quite the opposite.

Anyway, fMRI studies suggest that social conservatives experience greater amygdala activation in social situations, are more sensitive to potential threats,[3] and have greater in-group/out-group sensitivity than liberals. Conservatives are more likely to see people different from themselves as frightening and more likely to see the world in tribal, us-vs-them terms.

The conclusion from these studies is “conservatives are more fearful.” And if you look at racism, sexism, homophobia, transphobia, and so on, all of which are more prevalent on the American right than the left, that makes sense.

But there’s more to fear than just fear of people.

Something I haven’t seen, but I’d love to, is fMRI scans and brain studies of liberals and conservatives when shown things rather than people that evoke fear. It’s easy to say that conservatives are hypersensitive to fearful stimuli when they’re shown pictures of people, but what explains the political divide when it comes to fear of, for example, nuclear power?

Nuclear power is one of the safest forms of large-scale power generation known to man, with a human-deaths-per-terawatt-hour-of-energy record that puts it well ahead of almost everything else. The safest forms of power generation are nuclear, wind, and solar, with nuclear power thousands of times safer than fossil fuel power generation.[4]

If you read that and the first thing you think is “But waste! But Chernobyl! But radiation!”, then you are rehearsing, a mechanism by which the brain clings to ideas you believe are true in the face of evidence to the contrary. Rehearsing is the core mechanism of the “entrenchment effect” or “backfire effect,” in which a person who sees evidence that something they believe is wrong comes to believe the wrong idea even more strongly…and the stronger the evidence against the idea, the more firmly the belief becomes entrenched in the believer’s mind.

If you’re a liberal reading this, and you sneer at conservatives who continue to insist that Donald Trump is not an abuser or sexual assaulter in spite of the reams of evidence in the Epstein Files, while at the same time clinging to fear of nuclear power, well, maybe you have a better understanding of what those conservatives are going through, because you’re doing it too.

The point here isn’t to talk about nuclear power, but to say that there’s more to irrational fear responses than fear of people. Brain studies that conclude conservatives are more fearful than liberals tend to look at threats from people; I think there might be something to the idea that liberals and conservatives are both fearful, and their fear responses might originate in structural differences in the brain, but they are afraid of different things.

Liberals and conservatives are also, I think, highly susceptible to propaganda that reinforces their fears. Conservatives respond strongly to propaganda that reflects vertical hierarchies (“The Haitians are coming to eat your dogs and cats! Mexicans are rapists and murderers!”), while liberals are more receptive to propaganda that emphasizes outside forces attempting to dominate or control society or impose hierarchy or power (“Big Pharma is taking away your access to natural cures!” “Agricultural businesses are using plant patents to control your food supply!”).

I’d love to see more research on this; “conservatives are fearful and liberals are not” seems too pat to me, and doesn’t match my observations.


[1] Scientific American, Conservative and Liberal Brains Might Have Some Real Differences

[2] Political Orientations Are Correlated with Brain Structure in Young Adults

[3] Red brain, blue brain: evaluative processes differ in Democrats and Republicans

[4] Earth.org: Nuclear & the Rest: Which Is the Safest Energy Source?

Let’s Dance! Some Thoughts on Being Embodied

If you could move inside my head, you’d…well, honestly, you’d probably find the experience a little disconcerting, because who does that? Moving into someone else would likely be unsettling no matter who you did it to, unless they were, like, an identical twin or something.

But if you could move inside my head, you’d probably find it especially unsettling, because I don’t live in my body. People assume that a body is something you live in, but actually, from an entirely subjective viewpoint, my sense of self is more a big ball of wibbly-wobbly…stuff. I am, most of the time, a ball that floats behind my eyes and operates my body like one of those mecha things in a certain genre of Japanese science fiction. A meat mecha. A meat mecha made of flesh and bone and bizarre squishy biology.

But this isn’t an essay about that. It’s an essay about dancing.

I like dancing. I enjoy dancing. Some years ago, I started getting into partner dancing. My wife and my crush are both avid, skilled, talented dancers, so they were, as you might imagine, thrilled at the idea I might extend my repertoire beyond goth/industrial dancing at a certain flavor of loud, frenetic nightclub.

There is, however, as you might imagine, a difficulty that comes from not living in one’s body. Learning to dance is a bit like learning to make a marionette dance; when you’re operating a meat mecha made of biology and fluids, getting it to do exactly what you want it to do is a bit of a challenge.

I learned through a rather strange set of circumstances some time ago that psilocybin mushrooms can, for brief moments, make me inhabit my body. The first time that happened, it was…um, startling. When you’re accustomed to living life as an invisible ball floating somewhere behind your eyes, operating a meat mecha by remote control, the sensation that you reach alllll the way to the ground is jarring.

Then, when I burned my foot and learned that opiate painkillers do nothing but make me puke profusely and exuberantly, but cannabis edibles actually work for pain management, I discovered that edibles also put me into my body, which was wonderful because, you know, inhabiting one’s body without hallucinating is a marvelous thing.

So it came to pass that Joreth offered to take me swing dancing a few nights back, and I thought, hey, I wonder if it will be easier to learn a new dance if I’m inhabiting my body?

Morgan Freeman voice: “It was, in fact, easier to learn a new dance when he was inhabiting his body.”

The entire experience was, for lack of a better word, extraordinary. It’s far easier, as it turns out, to learn how to move one’s feet when one’s sense of self extends all the way to the floor. I don’t think I’ve ever caught on to something new that quickly…well, ever.

I mean, don’t get me wrong, it helps that Joreth is the best teacher I’ve ever had. But still, never underestimate the power of living entirely within your body, rather than operating your body the way you might a particularly fiddly meat-robot.

Interestingly, when the edible started to wear off and I shrank back into that ball behind my eyes, she could tell immediately. (Her, mid-dance: “You’re becoming a ball again, aren’t you?”)

Anyway, the whole experiment turned out to be a resounding success, one I definitely hope to continue exploring again in the future.

Update 9 on the Bionic Dildo: Lots of progress!

A few folks have been wondering where we’re at on the Bionic Dildo, as we’ve taken to calling it.

We’ve made a lot of progress in the last few months, starting with setting up a workspace for research, development, and testing. We’ve moved into the new space, where we have a lot of resources we didn’t have before.

The first few prototypes were put together by modifying existing sex toys. This crude approach was good enough to show us that the basic technology is sound, but the prototypes we built this way were limited, fragile, and rather uncomfortable to wear.

Since then, we’ve acquired a 3D printer and facilities for making ceramic molds to cast silicone. This allows us to create custom-designed silicone with electronics, sensors, and electrodes cast right in.

From 3D rendering to printed positive that we use to make a mold.
And yes, those are Lego bricks we’re using as a mold box!

We’ve 3D printed and made silicone test casts of the insertable part of the device. Here’s a test cast of the insertable with electrodes directly embedded in the cast, a huge improvement over our first few prototypes:

Right now, we’re moving into a development phase aimed at answering questions like:

  • How many sensors and electrodes do we need?
  • What’s the neural density of the inside wall of the vagina?
  • How much variability is there in sensitivity between different people, and between different parts of the inner anatomy of the same person?
  • What’s the best way to modulate the signal in response to pressure on the sensors?
  • What’s the maximum perceptual spatial resolution of the inner anatomy?

The first-generation prototype had three sensors and three electrodes, and the insertable part was rigid plastic, which as you can imagine was not terribly comfortable and certainly not workable for long-term use. The prototype we’re working on now is an enormous improvement: fifteen sensors and fifteen electrodes, embedded in custom silicone that’s far more comfortable.

We’re excited with the progress that we’ve made, and looking forward to what we can learn in 2017.

Want to keep up with developments? Here’s a handy list of blog posts about it:
First post
Update 1
Update 2
Update 3
Update 4
Update 5
Update 6
Update 7
Update 8
Update 9

Learning to be a Human

I don’t live in my body.

I was 48 years old before I discovered this. Now, such a basic fact, you might think, would be intuitively obvious much earlier. But I’ve only (to my knowledge) been alive this once, and I haven’t had the experience of living as anyone else, so I think I might be forgiven for not fully understanding the extent to which my experience of the world is not everyone’s experience of the world.

Ah, if only we could climb behind someone else’s eyes and feel the world the way they do.

Anyway, I do not live in my body. My perception of my self—my core essence, if you will—is a ball that floats somewhere behind my eyes, and is carried about by my body.

Oh, I feel my body. It relays sensory information to me. I am aware of hot and cold (especially cold; more on that in a bit), soft and hard, rough and smooth. I feel the weight of myself pressing down on my feet. I am aware of the fact that I occupy space, and of my position in space. (Well, at least to some extent. My sense of direction is a bit rubbish, as anyone who’s known me for more than a few months can attest.)

But I don’t live in my body. It’s an apparatus, a biological machine that carries me around. “Me” is the sphere floating just behind my eyes.

And as I said, I didn’t even know this until I was 48.

This is not, as it turns out, my only perceptual anomaly.

I also perceive cold as pain.

When I say this, a lot of folks don’t really understand what I mean. I do not mean that cold is uncomfortable. I mean that cold is painful. An ice cube on my bare skin hurts. A lot. A cold shower is excruciating agony, and I’m not being hyperbolic when I say this. (Being wet is unpleasant under the best of circumstances. Cold water is pure agony. Worse than stubbing a toe, almost on par with touching a hot burner.)

I’ve always more or less assumed that other people perceive cold more or less the same way I do. There’s a trope that cold showers are an antidote to unwanted sexual arousal; I’d always thought that was because the pain shocks you out of any kind of sexy head space. And swimming in ice water? That was something that a certain breed of hard-core masochist did. Some folks like flesh hook suspension; some folks swim in ice water. Same basic thing.

I’ve only recently become aware that there’s actually a medical term for this latter condition: congenital thermal allodynia. It’s an abnormal coding of pain, and it is, I think, related to the not-living-in-my-body thing.

I probably would have discovered all of this much sooner if I’d been interested in recreational drug use as a youth. And it appears there may be a common factor in both of these atypical ways I perceive the world.

Ladies and gentlebeings, I present to you: TRPA1.

This is TRPA1. It’s a complex protein that acts as a receptor in nerve and other cells. It responds to cold and to the presence of certain chemicals (the pungent burn of mustard oil and wasabi comes from compounds that activate this receptor). Variations in the structure of TRPA1 are implicated in a range of abnormal perceptions of pain; there’s a single nucleotide polymorphism in the gene that codes for TRPA1, for instance, that results in a medical condition called “hereditary episodic pain syndrome,” whose unfortunate sufferers are wracked by intermittent spasms of agonizing and debilitating pain, often triggered by…cold.

I’ve lived this way my entire life, completely unaware that it’s not the way most folks experience the world. It wasn’t until I started my first tentative explorations down the path of recreational pharmaceuticals that I discovered there was any other way to be.

For nearly all of my life, I’ve never had the slightest interest in recreational drug use, despite what certain of my relatives believed when I was a teenager. Aside from alcohol, I had zero experience with recreational pharmaceuticals until I was in my late 40s.

The first recreational drug I ever tried was psilocybin mushrooms. I’ve had several experiences with them now, which have universally been quite pleasant and agreeable.

But it’s the aftereffects of a mushroom trip that are, for me, the really interesting part.

The second time I tried psilocybin mushrooms, about an hour or so after the comedown from the mushroom trip, I had the sudden and quite marked experience of completely inhabiting my body. For the first time in my entire life, I wasn’t a ball of self being carried around by this complex meat machine; I was living inside my body, head to toe.

The effect of being-in-my-bodyness persisted for a couple of hours after all the other traces of the drug trip had gone, and for a person who’s spent an entire lifetime being carried about by a body but not really being in that body, I gotta say, man, it was amazing.

So I did what I always do: went on Google Scholar and started reading neurobiology papers.

My first hypothesis, born of vaguely remembered classes in neurobiology many years ago and general folk wisdom about psilocybin and other hallucinogens, was that the psilocybin (well, technically, psilocin, a metabolite of psilocybin) acted as a particularly potent serotonin agonist, dramatically increasing brain activity, particularly in the pyramidal cells in layer 5 of the cortex. If psilocybin lowered the activation threshold of these cells, reasoned I, then perhaps I became more aware of my body because I was better able to process existing sensory stimulation from the peripheral nervous system, and/or better able to integrate my somatosensory perception. It sounds plausible, right? Right?

Alas, some time on Google Scholar deflated that hypothesis. It turns out that the conventional wisdom about how hallucinogens work is quite likely wrong.

Conventional wisdom is that hallucinogens promote neural activity in cells that express serotonin receptors by mimicking the action of serotonin, causing the cells to fire. Hallucinogens aren’t well understood, but it’s looking like this model is probably not correct.

Oh, don’t get me wrong, psilocybin is a serotonin agonist and it does lower activation threshold of pyramidal cells, oh yes.

The fly in the ointment is that evidence from fMRI and BOLD studies shows an overall inhibition of brain activity resulting from psilocybin. Psilocybin promotes activation of excitatory pyramidal cells, sure, but it also promotes activation of inhibitory GABAergic neurons, resulting in overall decreased activity in several other parts of the brain. Further, this activity in the pyramidal cells produces less overall cohesion of brain activity, as this paper from the Proceedings of the National Academy of Sciences explains. (It’s a really interesting article. Go read it!)

My hypothesis that psilocybin promotes the subjective experience of greater somatosensory integration by lowering activation threshold of pyramidal cells, therefore, seems suspect, unless perhaps we were to further hypothesize that this lowered activation threshold persisted after the mushroom trip was over, an assertion for which I can find no support in the literature.

So lately I’ve been thinking about TRPA1.

I drink a lot of tea. Not as much, perhaps, as my sweetie, but a lot nonetheless.

Something I learned a long time ago is that the sensation of being wet is extremely unpleasant, but it’s more tolerable after I’ve had my morning tea. I chalked that down to it being more unpleasant when I was sleepy than when I was awake.

It turns out caffeine is a mild TRPA1 inhibitor. That leads to the hypothesis that for all these years, I may have been self-medicating with caffeine without being aware of it. If TRPA1 is implicated in the more unpleasant somatosensory bits of being me, then caffeine may jam up the gubbins and let me function in a way that’s a closer approximation to the way other folks perceive the world. (Insert witty quip about not being fully human before my morning tea here.)

So then I started to wonder, what if psilocybin is connecting me with my body by influencing TRPA1 activity? Could that explain the aftereffects of a mushroom trip? When I’m in my body, I feel warm and, for lack of a better word, glowy. My sense of self extends downward and outward until it fills up the entire biological machine in which I live. Would TRPA1 inhibition explain that?

Google Scholar offers exactly fuckall on the effects of psilocybin on TRPA1. So I turned to other searches, trying to find other drugs or substances that promoted a subjective experience of greater connection with one’s own body.

I found anecdotal reports of what I was after from people who used N-phenylacetyl-L-prolylglycine ethyl ester, a supplement developed in Russia and sold as a cognitive enhancer under the Russian name Ноопепт and the English name Noopept. It’s widely sold as a nootropic. New Agers and the fringier elements of the transhumanist movement, two groups I tend not to put a lot of faith in, tout it as a brain booster.

Still, noopept is cheap and easily available, and I figured as long as I was experimenting with my brain’s biochemistry, it was worth a shot.

To hear tell, this stuff will do everything from making you smarter to preventing Alzheimer’s. Real evidence that it does much of anything is thin on the ground, with animal models showing some protective effect against some forms of brain trauma but human trials being generally small and unpersuasive.

I started taking it, and noticed absolutely no difference at all. Still, animal models suggest it takes quite a long time to have maximum effect, so I kept taking it.

About 40 days after I started, I woke up with the feeling of being completely in my body. It didn’t last long, but over the next few weeks, it came and went several times, typically for no more than an hour or two at a time.

But oh, what an hour. When you’ve lived your whole life as a ball being carted around balanced atop a bipedal biological machine, feeling like you inhabit your body is amazing.

The last time it happened, I was in the Adventure Van driving toward the cabin where I am currently writing not one, not two, but three books (a nonfiction followup to More Than Two titled Love More, Be Awesome, and two fiction books set in a common world, called Black Iron and Gold Gold Gold!). We were listening to music, as we often do when we travel, and I…felt the music. In my body.

I’d always more or less assumed that people who talk about “feeling music” were being metaphorical, not literal. Imagine my surprise.

I also noticed something intriguing: Feeling cold will, when I’m in my body, push me right back out again. Hence my hypothesis that not being connected with my body might in some way be related to TRPA1.

The connection with my body, intermittent and tenuous for the past few weeks, has disappeared again. I’m still taking noopept, but I haven’t felt like I’m inhabiting my body for the past couple of weeks. That leads to one of two suppositions: the noopept is not really doing anything at all, which is quite likely, or I’m developing a tolerance for noopept, which seems less likely but I suppose is possible. Noopept is a racetam-like peptide; like members of the racetam class, it is an acetylcholine agonist, and while I can’t find anything in the literature about noopept tolerance, tolerance of other acetylcholine agonists (though not, as near as I can tell, racetam-like acetylcholine agonists) has been observed in animal models.

So there’s that.

The literature on all of this has been decidedly unhelpful. I like the experience of completely inhabiting my body, and would love to find a way to do this all the time.

I’m currently pondering two experiments. First, the next time I take mushrooms (my experiences with mushrooms, limited though they are, have been universally positive; while I have no desire to take them regularly, I probably will take them again at some point in the future), I plan to set up experiments after the comedown in which I expose myself to water and cold sensations, to see whether the pain is reduced or eliminated during the phase when I’m connected to my body.

Second, I’m planning to discontinue noopept for a month or so, then resume it to see if the problem is tolerance.

I’m fifty years old and I’m still learning how to be a human being. Life is a remarkable thing.

Call to the Interwebs: Looking for experts!

Most of the folks reading my blog are probably familiar with the high tech sex toy my partner Eve and I are working on. Essentially, we’re making a strap-on covered with sensors, that uses direct neural stimulation to allow the wearer to feel touch and pressure on the strap-on.

We’ve built several prototypes that validate the basic idea, and we’re excited to move into the next phase of development.

To that end, we need your help! We’re looking for two things:

1. A person skilled with molding silicone who is willing to work with us to do one-off and two-off custom castings that integrate sensors, electrodes, and electronics into the casting.

This person will know a great deal about custom-molding silicone and be willing to work with us on some fairly exotic requirements, like molding silicone with electrodes embedded in the surface.

2. A skilled electronics person with knowledge of RF analog electronics. I know digital electronics, and so far, the prototypes we’ve built have used electronics and firmware I’ve written. But I’m a bit rubbish with the analog stuff. Specifically, what we need is someone who can design circuitry that can be controlled by an embedded microcontroller and can modulate the amplitude of an analog signal based on input from pressure sensors. Imagine a signal generator that produces a signal something like this:

What we’re looking for is someone who can design a circuit that will modulate the amplitude of this signal in proportion to the input from pressure sensors…but, naturally, the human body being what it is, the correspondence is logarithmic, not linear (hence a programmable microcontroller doing the work of figuring out how strong the signal needs to be).
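For illustration only — the real mapping would live in microcontroller firmware, and the function name, sensor range, and constants below are all made up — the logarithmic pressure-to-amplitude relationship might be sketched in Python like this:

```python
import math

def pressure_to_amplitude(pressure, p_min=1.0, p_max=1023.0,
                          a_min=0.0, a_max=1.0):
    """Map a raw pressure reading (hypothetical 10-bit sensor range)
    to a signal amplitude on a logarithmic curve: equal ratios of
    pressure produce equal steps of amplitude."""
    # Clamp the reading into the sensor's usable range.
    p = max(p_min, min(p_max, pressure))
    # Fraction of the way from p_min to p_max, measured logarithmically.
    fraction = math.log(p / p_min) / math.log(p_max / p_min)
    return a_min + fraction * (a_max - a_min)
```

The microcontroller would compute something like this and hand the result to the analog circuitry as a control level; the circuit itself then only needs to scale the signal linearly with that control input.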

We do have a budget for accomplishing these tasks. It’s not a huge budget, mind you; we’re a small startup, and that’s how it goes with small startups.

If you are interested or know anyone who might be, please let me know! You can reach me at franklin (at) tacitpleasures (dot) com.

Want to keep up with developments? Here’s a handy list of blog posts about it:
First post
Update 1
Update 2
Update 3
Update 4
Update 5
Update 6
Update 7
Update 8

#WLAMF no. 16: Lego brains

The brain is a fiendishly complicated thing. Not so much because all its constituent parts are complicated (though they can be), but because it’s a network of billions of components wired together with trillions of connections. Well, at least your brain is.

There are other brains that are a lot simpler. When I was taking classes in neurobiology, back in my misspent college days, we used to talk a lot about the species of worm called C. elegans.

Back then, researchers were just beginning to map its brain. The brains of C. elegans are isomorphic, meaning they’re all the same. (That’s not true of more sophisticated animals; our brains grow organically, with neurons wiring up to other neurons in a dynamic process that means even identical twins don’t have the same brains.) They’re small (about 300 neurons and around 7,000 connections). They’re easy to understand, at least for folks who find neurobiology “easy.”

And now they’ve been replicated in a Lego robot that, well…behaves a lot like C. elegans without being explicitly programmed to. The robot has no pre-programmed behaviors; it acts like a roundworm because, in a sense, it has the brain of a roundworm.

And I think that’s really cool.


I’m writing one blog post for every contribution to our crowdfunding we receive between now and the end of the campaign. Help support indie publishing! We’re publishing five new books on polyamory in 2015: https://www.indiegogo.com/projects/thorntree-press-three-new-polyamory-books-in-2015/x/1603977

Some thoughts on machine learning: context-based approaches

A nontrivial problem with machine learning is organization of new information and recollection of appropriate information in a given circumstance. Simple storing of information (cats are furry, balls bounce, water is wet) is relatively straightforward, and one common approach to doing this is simply to define the individual pieces of knowledge as objects which contain things (water, cats, balls) and descriptors (water is wet, water flows, water is necessary for life; cats are furry, cats meow, cats are egocentric little psychopaths).

This presents a problem with information storage and retrieval. Some information systems that have a specific function, such as expert systems that diagnose illness or identify animals, solve this problem by representing the information hierarchically as a tree, with the individual units of information at the tree’s leaves and a series of questions representing paths through the tree. For instance, if an expert system identifies an animal, it might start with the question “Is this animal a mammal?” A “yes” starts down one side of the tree, and a “no” starts down the other. At each node in the tree, another question identifies which branch to take—“Is the animal four-legged?” “Does the animal eat meat?” “Does the animal have hooves?” Each path through the tree is a series of questions that leads ultimately to a single leaf.
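A toy sketch of that kind of tree in Python (the questions and animals here are invented, and a real expert system would be far larger):

```python
# Internal nodes are dicts holding a yes/no question and two branches;
# leaves are strings naming the identified animal.
tree = {
    "question": "Is the animal a mammal?",
    "yes": {
        "question": "Does the animal eat meat?",
        "yes": "cat",
        "no": {
            "question": "Does the animal have hooves?",
            "yes": "horse",
            "no": "rabbit",
        },
    },
    "no": {
        "question": "Does the animal have feathers?",
        "yes": "bird",
        "no": "lizard",
    },
}

def identify(node, answer):
    """Walk from the root to a leaf. `answer` is a function that
    takes a question and returns True (yes) or False (no)."""
    while isinstance(node, dict):
        node = node["yes"] if answer(node["question"]) else node["no"]
    return node
```

Each call to identify follows exactly one path of questions down to a single leaf, which is precisely the property described above.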

This is one of the earliest approaches to expert systems, and it’s quite successful for representing hierarchical knowledge and for performing certain tasks like identifying animals. Some of these expert systems are superior to humans at the same tasks. But the domain of cognitive tasks that can be represented by this variety of expert system is limited. Organic brains do not really seem to organize knowledge this way.

Instead, we can think of the organization of information in an organic brain as a series of individual facts that are context dependent. In this view, a “context” represents a particular domain of knowledge—how to build a model, say, or change a diaper. There may be thousands, tens of thousands, or millions of contexts a person can move within, and a particular piece of information might belong to many contexts.

What is a context?

A context might be thought of as a set of pieces of information organized into a domain in which those pieces of information are relevant to each other. Contexts may be procedural (the set of pieces of information organized into the necessary steps for baking a loaf of bread), taxonomic (a set of related pieces of information arranged into a hierarchy, such as knowledge of the various birds of North America), hierarchical (the set of information necessary for diagnosing an illness), or simply related to one another experientially (the set of information we associate with “visiting grandmother at the beach”).

Contexts overlap and have fuzzy boundaries. In organic brains, even hierarchical or procedural contexts will have extensive overlap with experiential contexts—the context of “how to bake bread” will overlap with the smell of baking bread, our memories of the time we learned to bake bread, and so on. It’s probably very, very rare in an organic brain that any particular piece of information belongs to only one context.

In a machine, we might represent this by creating a set of contexts CX(1, 2, 3, …, n), where each piece of information is tagged with the contexts it belongs to. For instance, “water” might appear in many contexts: a context called “boating,” a context called “drinking,” a context called “wet,” a context called “transparent,” a context called “things that can kill me,” a context called “going to the beach,” and a context called “diving.” In each of these contexts, “water” may be assigned different attributes, whose relevance is assigned different weights based on the context. “Water might cause me to drown” has a low relevance in the context of “drinking” or “making bread,” and a high relevance in the context of “swimming.”
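One minimal way to sketch that tagging scheme in Python — the facts, contexts, and weights below are invented purely for illustration:

```python
# Each piece of information maps each of its contexts to a dict of
# attributes, with a per-context relevance weight for each attribute.
knowledge = {
    "water": {
        "drinking": {"is wet": 0.3, "might cause drowning": 0.05},
        "swimming": {"is wet": 0.6, "might cause drowning": 0.9},
    },
}

def relevant_attributes(thing, context, threshold=0.5):
    """Return the attributes of `thing` whose weight in the given
    context meets the threshold."""
    attrs = knowledge.get(thing, {}).get(context, {})
    return [a for a, w in attrs.items() if w >= threshold]
```

With this toy store, “might cause drowning” surfaces when the active context is “swimming” but stays below threshold in “drinking,” matching the weighting described above.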

In a contextually based information storage system, new knowledge is gained by taking new information and assigning it correctly to relevant contexts, or creating new contexts. Contexts themselves may be arranged as expert systems or not, depending on the nature of the context. A human doctor diagnosing illness might have, for instance, a diagnostic context that behaves in some ways like a diagnostic expert system; she might ask a patient questions about his symptoms and arrive at her conclusion by following the answers down to a single possible diagnosis. This process might be informed by past contexts, though; if she has just seen a dozen patients with norovirus, her knowledge of those past diagnoses, her understanding of how contagious norovirus is, and her observation of the similarity of this new patient’s symptoms to those previous patients’ symptoms might allow her to bypass a large part of the decision tree. Indeed, it is possible that a great deal of what we call “intuition” is actually the ability to make observations and use heuristics that allow us to bypass parts of an expert-system tree and arrive at a leaf very quickly.
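That “intuition as tree bypass” idea can be sketched, again purely hypothetically, as a cache of recent conclusions consulted before walking the full tree:

```python
from collections import Counter

class Diagnostician:
    """Toy model: recently seen diagnoses act as a heuristic that can
    short-circuit a full walk through the decision tree."""

    def __init__(self, full_workup):
        self.full_workup = full_workup  # function: symptoms -> diagnosis
        self.recent = Counter()         # tally of recent diagnoses

    def diagnose(self, symptoms, known_presentations):
        # Heuristic bypass: if these symptoms match the classic
        # presentation of a diagnosis we've seen a lot lately,
        # jump straight to the leaf.
        for diagnosis, count in self.recent.most_common():
            if count >= 3 and symptoms == known_presentations.get(diagnosis):
                return diagnosis
        # Otherwise fall back to the full expert-system walk.
        diagnosis = self.full_workup(symptoms)
        self.recent[diagnosis] += 1
        return diagnosis
```

After a run of similar cases, the expensive full workup is skipped entirely — the heuristic jumps straight to a leaf, which is the bypass the norovirus example describes.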

But not all types of cognitive tasks can be represented as traditional expert systems. Tasks that require things like creativity, for example, might not be well represented by highly static decision trees.

When we navigate the world around us, we’re called on to perform large numbers of cognitive tasks seamlessly and to be able to switch between them effortlessly. A large part of this process might be thought of as context switching. A context represents a domain of knowledge and information—how to drive a car or prepare a meal—and organic brains show a remarkable flexibility in changing contexts. Even in the course of a conversation over a dinner table, we might change contexts dozens of times.

A flexible machine learning system needs to be able to switch contexts easily as well, and deal with context changes resiliently. Consider a dinner conversation that moves from art history to the destruction of Pompeii to a vacation that involved climbing mountains in Hawaii to a grandparent who lived on the beach. Each of these represents a different context, but the changes between contexts aren’t arbitrary. If we follow the normal course of conversations, there are usually trains of thought that lead from one subject to the next; and these trains of thought might be represented as information stored in multiple contexts. Art history and Pompeii are two contexts that share specific pieces of information (famous paintings) in common. Pompeii and Hawaii are contexts that share volcanoes in common. Understanding the organization of individual pieces of information into different contexts is vital to understanding the shifts in an ordinary human conversation; where we lack information—for example, if we don’t know that Pompeii was destroyed by a volcano—the conversation appears arbitrary and unconnected.

There is a danger in a system being too prone to context shifts; it meanders endlessly, unable to stay on a particular cognitive task. A system that changes contexts only with difficulty, on the other hand, appears rigid, even stubborn. We might represent focus, then, in terms of how strongly (or not) we cling to whatever context we’re in. Dustin Hoffman’s character in Rain Man possesses a cognitive system that clung very tightly to the context he was in!

Other properties of organic brains and human knowledge might also be represented in terms of information organized into contexts. Creativity is the ability to find connections between pieces of information that normally exist in different contexts, and to find commonalities of contextual overlap between them. Perception is the ability to assign new information to relevant contexts easily.
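Framed this way, a crude first pass at "creativity" is just set intersection: find the pieces of information two normally separate contexts hold in common. The context contents below are invented for the example.

```python
# Sketch: "creativity" as finding items shared by normally separate
# contexts. Context contents are illustrative.

contexts = {
    "art history": {"frescoes", "pigments", "Pompeii"},
    "volcanology": {"Pompeii", "lava", "ash"},
    "cooking": {"ash", "salt", "heat"},
}

def bridges(a, b):
    """Pieces of information the two contexts hold in common."""
    return contexts[a] & contexts[b]
```

Here `bridges("art history", "volcanology")` surfaces Pompeii as the connection between two domains that otherwise share nothing, which is exactly the kind of cross-context link a conversational leap rides on.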

Representing contexts in a machine learning system is a nontrivial challenge. It is difficult, to begin with, to determine how many contexts might exist. As a machine entity gains new information and learns to perform new cognitive tasks, the number of contexts in which it can operate might increase indefinitely, and the system must be able to assign old information to new contexts as it encounters them. If we think of each new task we might want the machine learning system to be able to perform as a context, we need to devise mechanisms by which old information can be assigned to these new contexts.

Organic brains, of course, don’t represent information the way computers do. Organic brains represent information as neural traces—specific activation pathways among collections of neurons.

These pathways become biased toward activation when we are in situations similar to those where they were first formed, or similar to situations in which they have been previously activated. For example, when we talk about Pompeii, if we’re aware that it was destroyed by a volcano, other pathways pertaining to our experiences with or understanding of volcanoes become biased toward activation—and so, for example, our vacation climbing the volcanoes in Hawaii comes to mind. When others share these same pieces of information, their pathways similarly become biased toward activation, and so they can follow the transition from talking about Pompeii to talking about Hawaii.
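A machine analogue of this biasing might look like the sketch below: mentioning one topic bumps the activation of every topic that shares a context with it, so related topics "come to mind." The topics and their context assignments are invented for the example.

```python
# Sketch of activation biasing: mentioning a topic raises the activation
# of everything that shares a context with it. Assignments are illustrative.

topic_contexts = {
    "Pompeii": {"art history", "volcanoes"},
    "Hawaii": {"volcanoes", "beaches"},
    "grandparent": {"beaches", "family"},
}

def bias_activation(mentioned, activation=None):
    """Bump activation for every topic sharing a context with `mentioned`."""
    activation = dict(activation or {})
    for topic, ctxs in topic_contexts.items():
        shared = topic_contexts[mentioned] & ctxs
        if topic != mentioned and shared:
            activation[topic] = activation.get(topic, 0.0) + len(shared)
    return activation
```

Mentioning Pompeii raises Hawaii (shared "volcanoes" context) but leaves "grandparent" untouched; mentioning Hawaii then raises "grandparent" via the shared "beaches" context, reproducing the dinner-table drift described above.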

This method of encoding and recalling information makes organic brains very good at tasks like pattern recognition and associating new information with old information. In the process of recalling memories or performing tasks, we also rewrite those memories, so the process of assigning old information to new contexts is transparent and seamless. (A downside of this approach is information reliability; the more often we access a particular memory, the more often we rewrite it, so paradoxically, the memories we recall most often tend to be the least reliable.)

Machine learning systems need a system for tagging individual units of information with contexts. This becomes complex from an implementation perspective when we recall that simply storing a bit of information with descriptors (such as water is wet, water is necessary for life, and so on) is not sufficient; each of those descriptors has a value that changes depending on context. Representing contexts as a simple array CX (1,2,3,4,…n) and assigning individual facts to contexts (water belongs to contexts 2, 17, 43, 156, 287, and 344) is not sufficient. The properties associated with water will have different weights—different relevancies—depending on the context.

Machine learning systems also need a mechanism for recognizing contexts (it would not do for a general purpose machine learning system to respond to a fire alarm by beginning to bake bread) and for following changes in context without becoming confused. Additionally, contexts themselves are hierarchical; if a person is driving a car, that cognitive task will tend to override other cognitive tasks, like preparing notes for a lecture. Attempting to switch contexts in the middle of driving can be problematic. Some contexts, therefore, are more “sticky” than others, more resistant to switching out of.

A context-based machine learning system, then, must be able to recognize contexts and prioritize them. Context recognition is itself a nontrivial problem, based on recognition of the input the system is provided with, assignment of that input to contexts, and seeking the most relevant context (which may in most situations be the context with greatest overlap with all the relevant input). Assigning some cognitive tasks, such as diagnosing an illness, to a context is easy; assigning other tasks, such as natural language recognition, processing, and generation in a conversation, to a context is more difficult. (We can view engaging in natural conversation as one context, with the topics of the conversation belonging to sub-contexts. This differs from the approach taken by many machine conversation systems, such as Markov chains, which can be viewed as memoryless state machines. Each state, which may correspond, for example, to a word being generated in a sentence, can be represented by S(n), and the transition from S(n) to S(n+1) is completely independent of S(n-1); previous parts of the conversation are not relevant to future parts. This creates limitations, because human conversations do not progress this way; previous parts of a conversation may influence future parts.)
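Putting recognition and stickiness together, a toy recognizer might score each known context by its overlap with the current input, plus a stickiness bonus for the context the system is already in. The contexts, their contents, and the stickiness values below are all assumptions for illustration.

```python
# Sketch of context recognition with "stickiness": score each context by
# overlap with current input, with a bonus for the currently active context.
# Contexts and stickiness values are illustrative.

contexts = {
    "baking bread": ({"flour", "oven", "yeast"}, 0.5),
    "fire emergency": ({"alarm", "smoke", "exit"}, 0.0),
}

def recognize(inputs, current=None):
    def score(name):
        items, stickiness = contexts[name]
        bonus = stickiness if name == current else 0.0
        return len(inputs & items) + bonus
    return max(contexts, key=score)
```

A fire alarm going off mid-bake ({"alarm", "smoke"}) overwhelms the stickiness bonus and switches the context to "fire emergency," which is exactly the behavior we want: the system won't respond to a fire alarm by continuing to bake bread, but neither will it abandon the task over a single ambiguous cue.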

Context seems to be an important part of flexibility in cognitive tasks, and thinking of information in terms not just of object/descriptor or decision trees but also in terms of context may be an important part of the next generation of machine learning systems.

Sex tech: Update on the dildo you can feel

A few months back, I wrote a blog post about a brain hack that might create a dildo the wearer can actually feel. The idea came to me in the shower. I’d been thinking about the brain’s plasticity, and about how it might be possible to trick the brain into internalizing a somatosensory perception that a strap-on dildo is a real part of the body, by using sensors along the dildo connected to tiny electrical stimulation pads worn inside the vagina.

It’s an interesting idea, I think. So I blogged about it. I didn’t expect the response I got.

I’ve received a bunch of emails about it, and had a bunch of people tell me “OMG this is the most amazing thing ever! Make it happen!”

So I have, between work on getting the book More Than Two out the door and preparing for the book tour, been chugging away at this idea. Here’s an update:

1. I’ve filed for a patent on the idea. I’ve received confirmation that the application has been accepted and the process has started.

2. I’ve talked to an electronics prototyping firm about developing a prototype. Based on feedback from the prototyping firm, I’ve modified the initial design extensively. The first version I’d thought about was based on the same principle as the Feeldoe; the redesign uses a separate dildo and harness, with an external computer to receive signals from the sensors in the dildo and transmit them to the vaginal insert. The new design looks, and works, something like this. (Apologies for the horrible animated GIF; art isn’t really my specialty.)

3. The prototyping firm has outlined a multi-step process to develop a workable, manufacturable device. The process would go something like:

Phase 1: Research and proof of concept. This would include researching designs for the sensors on the dildo and the electrodes on the vaginal insert. It would also include a crude proof-of-concept device that would essentially be nothing more than the vaginal insert connected to a computer programmed to simulate the rest of the device.

The intent at this stage is to see if the idea is even workable. What kind of electrodes could be used? Would they produce the right kind of stimulation? How densely arranged could they be? How small could they be? Would the brain actually be able to interpret sensations produced by the electrodes in a way that would trick the wearer into thinking the dildo was a part of the body? If so, how long would that somatosensory rewiring take?

Phase 2: Assuming the initial research showed the idea to be viable, the next step would be to figure out a sensor design, fabricate a microcontroller to connect the sensors to the electrodes, and experiment with sensor design and fabrication. Would a single sensor provide adequate range of tactile feedback, or would it be necessary to multiplex several sensors (some designed to respond to light touch, others to a heavier touch) together in order to provide a good dynamic range? What mechanical properties would the sensors need to have? How would they be built? (We talked about several potential designs, including piezoelectric, resistive polymer, and fluid-filled devices.) How would the sensors be placed along the dildo?

Phase 3: Once a working prototype is developed, the next step is detail design and engineering. This is essentially the process of taking a working prototype and producing a manufacturable product from it. This includes everything from engineering drawings for fabrication to choosing materials to developing the final version of the software.

So. That’s where the project is right now.

The up side? I think this thing could actually work. The down side? It’s going to be expensive.

I have already started investigating ways to make it happen. If we incorporate in Canada, we may be eligible for Canadian financial incentives designed to spur tech research and development.

The fabricating company seems to think the first phase would most likely cost somewhere around $5,000-10,000. Depending on what’s learned during that phase, the development of a fully functional prototype might run anywhere from $50,000 to $100,000, a lot of which hinges on design of the sensors, which will likely be the most challenging bit of engineering. They didn’t even want to speculate about the cost of going from working prototype to manufacturable product; too many unknowns.

I’m discussing the possibility of doing crowdfunding to get from phase 2 to 3, and possibly from phase 1 to 2. It’s not likely that crowdfunding is appropriate for the first phase, because I won’t have anything tangible to offer backers. Indeed, it’s possible that I might spend the initial money and discover the idea isn’t workable.

Ideally, I’d like to find people who think this idea is worth investigating who can afford to invest in the first phase. If you know anybody who might be interested in this project, let me know!

Also, one of the people at the prototyping company suggested the name “Hapdick.” I’m still not sure how I feel about that, but I do have to admit it’s clever.

Want to keep up with developments? Here’s a handy list of blog posts about it:
First post
Update 1
Update 2
Update 3
Update 4
Update 5
Update 6
Update 7
Update 8
Update 9

Sex Tech: Adopting the Brain’s Plasticity

Some while ago, I read an article about a gizmo made of a black and white video camera attached to a grid of electrodes. The idea is that you wear the electrodes on your tongue. Images from the video camera are converted into patterns of electric signals on the electrode, so you “see”–with your tongue–what the camera sees.

Early users of the prototype gizmo would wear a blindfold and then try to navigate around just by the electrical impulses on their tongues. What’s most interesting is that not only were they able to do this, but they reported that, after a while, their memories were not of sensations on their tongues, but of seeing a fuzzy, black and white image.

The brain is wonderfully plastic, able to interpret new kinds of sensory input in amazing ways. It can rewire itself to accommodate the new input; in fact, the tongue-electrode thing is being commercialized as a device for the blind.

As I do, when I first heard about this, I naturally thought “how can this be used for sex?” And I think it has fantastic potential.


Imagine, if you will, a wearable dildo, rather like the Feeldoe, that’s designed to have one end inserted in the vagina. Only imagine that we take the same kind of electrodes used in the tongue-camera device, and send signals to the electrodes not from a video camera, but from small touch sensitive sensors mounted just below the skin of the dildo.

These sensors would be mapped onto the electrodes so that when something touches the sensor, you’d feel a corresponding signal from the corresponding electrode.

I’m not an artist, but I made a couple of crude animations to illustrate the idea:

What would happen?

I believe that after a period of adjustment, this dildo would be incorporated into the brain’s somatosensory perception. The brain would, in essence, modify its model of the body to accommodate the dildo–it would, rather quickly I suspect, cease to be perceived as a thing and become perceived as a part of the body. Stimulation of the dildo would begin to feel like stimulation of yourself.

And isn’t that an interesting idea.

The neural density in the walls of the vagina isn’t as great as the neural density of the tongue. I don’t think that’s a problem, though; the neural density of the shaft of the penis isn’t as great, either.

One potentially interesting twist on this notion is to map the most sensitive part of the penis, the underside just below the glans, onto the most sensitive part of the wearer’s body: the clitoris. Most of the sensors along the shaft would map onto electrodes in the bulb worn inside the vagina, but the sensors in this region would instead map onto an electrode over the clitoris, mirroring the sensitivity of a natural penis.

Another potentially interesting thing to do is to make the sensors on the dildo pressure sensitive, with firmer touches creating stronger impulses from the electrodes.
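The sensor-to-electrode mapping described above is conceptually simple. Here's a minimal sketch; the sensor reading range, the scaling, and the drive-level range are assumptions for illustration, not specs from any actual prototype.

```python
# Sketch of the one-to-one sensor-to-electrode mapping: each pressure
# sensor drives its corresponding electrode, with firmer touches producing
# stronger impulses. Value ranges here are illustrative assumptions.

def map_sensors_to_electrodes(sensor_readings, max_intensity=255):
    """Map raw pressure readings (0.0 to 1.0) to electrode drive levels.

    A sensor that isn't being touched (reading 0.0) leaves its electrode
    silent; harder pressure drives the corresponding electrode harder.
    """
    return [int(round(r * max_intensity)) for r in sensor_readings]
```

So a frame of readings like `[0.0, 0.5, 1.0]` becomes drive levels `[0, 128, 255]`: untouched sensors stay silent, and the mapping preserves the dynamic range of the touch.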

Now, there’s a lot of experimentation between this idea and a real device. I don’t know the neural density in the walls of the vagina, but it would impose a limit on how many electrodes could be placed on the dildo. Would there be sufficient density to be able to create a fine tactile sense? I think the answer is probably “yes,” but I’m not sure.

I’m also not sure how much processing would be required. I’m guessing not much; certainly much less than is required with the vision sense. The tongue-vision thing is trying to do something far more complicated; it’s trying to register sufficient information to allow you to navigate a three-dimensional world. A circle seen by the camera might be a lollipop right in front of your face or a billboard far away; because the tongue has no way to represent stereo imagery, there’s no way to tell. So the processor has to allow the operator to be able to zoom in and out, to give the user a sense of how far away things might be. It has to be able to adjust to different lighting conditions.

The dildo, by way of contrast, merely has to respond to physical touch, which maps much more easily onto the array of electrodes. It’s pretty straightforward; if something’s not touching a particular sensor, its electrode isn’t producing a signal. The amount of processing might be low enough to allow the processor to be housed inside the dildo, making the device compact, and not requiring it to be tethered to any electronics.

I think this thing could be hella fun. It would allow people born with vaginas to have a remarkably good impression of what it’s like to be born with a penis.

In a world where I had infinite free time, I’d put together a crowdfunding campaign to try to build a working prototype. Even without infinite time, I’m considering doing this. Thoughts? Opinions?

Want to keep up with developments? Here’s a handy list of blog posts about it:
First post
Update 1
Update 2
Update 3

Some (More) Thoughts on Brain Modeling and the Coming Geek Rapture

The notion of “uploading”–analyzing a person’s brain and then modeling it, neuron by neuron, in a computer, thereby forever preserving that person’s knowledge and consciousness–is a fixture of transhumanist thought. In fact, self-described “futurists” like Ray Kurzweil will gladly expound at great length about how uploading and machine consciousness are right around the corner, and Any Day Now we will be able to live forever by copying ourselves into virtual worlds.

I’ve written extensively before about why I think that’s overly optimistic, and why Ray Kurzweil pisses me off. Our understanding of the brain is still remarkably poor–for example, we’re only just now learning how brain cells called “glial cells” are involved in the process of cognition–and even when we do understand the brain on a much deeper level, the tools for being able to map the connections between the cells in the brain are still a long way off.

In that particular post, I wrote that I still think brain modeling will happen; it’s just a long way off.

Now, however, I’m not sure it will ever happen at all.


I love cats.

Many people love cats, but I really love cats. It’s hard for me to see a cat when I’m out for a walk without wanting to make friends with it.

It’s possible that some of my love of cats isn’t an intrinsic part of my personality, in the sense that my personality may have been modified by a parasite commonly found in cats.

This is the parasite, in a color-enhanced scanning electron micrograph. Pretty, isn’t it? It’s called Toxoplasma gondii. It’s a single-celled organism that lives its life in two stages, growing to maturity inside the bodies of rats, and reproducing in the bodies of cats.

When a rat is infected, usually by coming into contact with cat droppings, the parasite grows but doesn’t reproduce. Its reproduction can only happen in a cat, which becomes infected when it eats an infected rat.

To help ensure its own survival, the parasite does something amazing. It controls the rat’s mind, exerting subtle changes to make the rat unafraid of cats. Healthy rats are terrified of cats; if they smell any sign of a cat, even a cat’s urine, they will leave an area and not come back. Infected rats lose that fear, which serves the parasite’s needs by making it more likely the rat will be eaten by a cat.

Humans can be infected by Toxoplasma gondii, but we’re a dead end for the parasite; it can’t reproduce in us.

It can, however, still work its mind-controlling magic. Infected humans show a range of behavioral changes, including becoming more generous and less bound by social mores and customs. They also appear to develop an affinity for cats.

There is a strong likelihood that I am a Toxoplasma gondii carrier. My parents have always owned cats, including outdoor cats quite likely to have been exposed to infected rats. So it is quite likely that my love for cats, and other, more subtle aspects of my personality (bunny ears, anyone?), have been shaped by the parasite.

So, here’s the first question: If some magical technology existed that could read the connections between all of my brain cells and copy them into a computer, would the resulting model act like me? If the model didn’t include the effects of Toxoplasma gondii infection, how different would that model be from who I am? Could you model me without modeling my parasites?


It gets worse.

The brain models we’ve built to date are all constructed from generic building blocks. We model neurons as though they are variations on a common theme, responding pretty much the same way. These models assume that the neurons in Alex’s head behave pretty much the same way as the neurons in Bill’s head.

To some extent, that’s true. But we’re learning that there can be subtle genetic differences in the way that neurons respond to different neurotransmitters, and these subtle differences can have very large effects on personality and behavior.

Consider the protein AVPR-1a, which is used in brain cells as a receptor for the neurotransmitter called vasopressin.

Vasopressin serves a wide variety of different functions. In the body, it regulates water retention and blood pressure. In the brain, it regulates pair-bonding, stress, aggression, and social interaction.

A growing body of research shows that human beings naturally carry slightly different forms of the gene that produces this particular receptor, and that these tiny genetic differences result in tiny structural differences in the receptor which produce quite significant differences in behavior. For example, one subtle difference in the gene that produces this receptor changes the way that men bond to partners after sex; carriers of this particular genetic variation are less likely to experience intense pair-bonding, less likely to marry, and more likely to divorce if they do marry.

A different variation in this same gene produces a different AVPR-1a receptor that is strongly linked to altruistic behavior; people with that particular variant are far more likely to be generous and altruistic, and the amount of altruism varies directly with the number of copies of a particular nucleotide sequence within the gene.

So let’s say that we model a brain, and the model we use is built around a statistical computation for brain activation based on the most common form of the AVPR-1a gene. If we model the brain of a person with a different form of this gene, will the model really represent her? Will it behave the way she does?

The evidence suggests that, no, it won’t. Because subtle genetic variations can have significant behavioral consequences, it is not sufficient to upload a person using a generic model. We have to extend the model all the way down to the molecular level, modeling tiny variations in a person’s receptor molecules, if we wish to truly upload a person into a computer.

And that gives rise to a whole new layer of thorny moral issues.


There is a growing body of evidence suggesting that autism spectrum disorders are the result of genetic differences in neuron receptors, too. The same PDF I linked to above cites several studies that show a strong connection between various autism-spectrum disorders and differences in receptors for another neurotransmitter, oxytocin.

Vasopressin and oxytocin work together in complex ways to regulate social behavior. Subtle changes in production, uptake, and response to either or both can produce large, high-level changes in behavior, and specifically in interpersonal behavior–arguably a significant part of what we call a person’s “personality.”

So let’s assume a magic brain-scanning device able to read a person’s brain state and a magic computer able to model a person’s brain. Let’s say that we put a person with Asperger’s or full-blown autism under our magic scanner.

What do we do? Do we build the model with “normal” vasopressin and oxytocin receptors, thereby producing a model that doesn’t exhibit autism-spectrum behavior? If we do that, have we actually modeled that person, or have we created an entirely new entity that is some facsimile of what that person might be like without autism? Is that the same person? Do we have a moral imperative to model a person being uploaded as closely as possible, or is it more moral to “cure” the autism in the model?


In the previous essay, I outlined why I think we’re still a very long ways away from modeling a person in a computer–we lack the in-depth understanding of how the glial cells in the brain influence behavior and cognition, we lack the tools to be able to analyze and quantify the trillions of interconnections between neurons, and we lack the computational horsepower to be able to run such a simulation even if we could build it.

Those are technical objections. The issue of modeling a person all the way down to the level of genetic variation in neurotransmitter and receptor function, however, is something else.

Assuming we overcome the limitations of the first round of problems, we’re still left with the fact that there’s a lot more going on in the brain than generic, interchangeable neurons behaving in predictable ways. To actually copy a person, we need to be able to account for genetic differences in the structure of receptors in the brain…

…and even if we do that, we still haven’t accounted for the fact that organisms like Toxoplasma gondii can and do change the behavior of the brain to suit their own ends. (I would argue that a model of me that was faithful clear down to the molecular level probably wouldn’t be a very good copy if it didn’t include the effects that the parasite has had on my personality–effects that we still have no way to quantify.)

Sorry, Mr. Kurzweil, we’re not there yet, and we’re not likely to be any time soon. Modeling a specific person in a computer is orders of magnitude harder than you think it is. At this point, I can’t even say with certainty that I think it will ever happen.