Fragments of Dragon*Con: Saturn

One of the (few) panels I actually managed to drag myself to at Dragon*Con was a panel on the Cassini space probe currently poking around Saturn. The panel was hosted by Trina Ray, who works as a Science System Engineer for the Cassini program at NASA–which is a pretty damn cool job to have, if you ask me.

In all fairness, it wasn’t the panel I had wanted to see. The panel I’d intended to see, whose name I don’t even remember now, was full; the Cassini panel was next door, and relatively empty, and my feet hurt. So in we went.

It turned out to be one of the best panels of the con.

The Cassini mission was originally intended to explore Saturn and one of its moons, Titan. Along the way, it’s discovered some strange and interesting things, particularly with regards to another of Saturn’s moons, Enceladus.

Now, Enceladus doesn’t really seem, at first glance, like a terribly interesting body. It’s basically a ball of ice about the size of Arizona; cold, distant, orbiting around Saturn like…well, like a big lump of frozen water.

Ah, but the universe is a vast and surprising place, full of weirdnesses too countless to apprehend.

Cassini has, among other things, instruments capable of analyzing and determining the chemical makeup of the matter around it. When it comes to pass that those instruments, while the ship is passing near a giant ball of ice, suddenly register a great deal of water, and then just as suddenly register bupkis, one parsimonious explanation is that the instruments are on the fritz. Another explanation is that there’s a massive honking big jet of water spewing for hundreds of miles out of the big lump of frozen water, but that doesn’t make any sense, does it? Big, cold lumps of frozen water aren’t usually in the habit of spewing out gigantic jets of liquid water, much to the relief of folks who own freezers everywhere.

Now, if there is a big jet of water spewing out of a ball of ice, it’s the sort of thing you’d expect to be able to see, particularly if you arrange to look for it when it’s backlit by the sun. Some rejiggering of orbital mechanics and other rocket-science stuff later, Cassini was able to take a picture in just that sort of situation, and here’s what it saw:

Lookit that! A big honking jet of water.

Now, this isn’t the sort of thing you’d expect if you were talking about a ball of ice orbiting a distant gas giant. Enceladus is cold. It’s bright white, so it reflects most of what little sun is available from so far away. In fact, it’s actually, for the most part, the coldest object in orbit around Saturn, with surface temperatures near the equator of around -315 degrees Fahrenheit.

And yet, it’s spewing out jets of liquid water. Which is weird. It’s also hot at the poles. Which is weirder. And the heat is concentrated in weird stripes at the south pole, which is weirder still:

So what we’ve got here, basically, is a ball of ice that’s not really a ball of ice at all. It’s being heated by some internal process, it’s spewing out jets of water through fissures in the icy surface, these jets of water have all migrated (or possibly rotated the entire moon) so they’re exactly at the south pole, and…

Oh, wait, I forgot to mention something. It’s not just water. It’s also got organic molecules of various sorts in it.

What we’re left with, then, is a moon that’s got a crust of frozen water with a liquid core of molten water, in much the same way that the earth has a crust of solid rock with a liquid core of molten rock. The water within the moon spews out in huge plumes via a process called “cryovolcanism”–and how cool is that word, by the way? Cryovolcanism. The moon’s south pole is covered with cryovolcanoes.

And they spew out a lot of water. In fact, it looks like the largest ring around Saturn, the E-ring, is created by Enceladus. The ring is a vast structure of little tiny ice crystals, which come from these cryovolcanoes on the moon’s surface.

Now, let’s sit back and think about this for a bit.

We have heat. We have liquid water. We have organic molecules. We have, in Ms. Ray’s words, a compelling reason not to ditch the Cassini, when it reaches the end of its life, on Enceladus.

Because, you see, those are the basic ingredients necessary for life–heat, water, organic molecules.


Now, if you look at most conventional science fiction, you see that a great deal of it is concerned with life in outer space–something which has never been demonstrated, but which nevertheless seems rather likely. And the bulk of this kind of science fiction concerns itself with life as it might exist in places that are like earth.

Which shows, I think, a failure of imagination.

The human imagination, as I’ve often said, is surprisingly feeble. When given a stunningly vast universe filled with all manner of weirdness, we set our imaginary stories in places that look like Wyoming. When confronted with the breathtaking diversity of biology just here on earth, the best we can come up with is imaginary creatures like Bigfoot–half man, half ape, all lame. When we ask ourselves how such a marvelous, beautiful place as the universe could come to be, the best we come up with is a bearded old guy who created the earth (whose surface is seventy percent water) exclusively for man (who has no gills), and since that epochal moment of creation has largely confined himself to a near obsession with women’s clothing and the occasional vaguely Mary-shaped swirl in somebody’s French toast.


I came away from the panel impressed all over again with the majesty and incredible, mind-boggling wonder and beauty of the physical universe. This stuff is so incredible, so fantastical, so amazingly bizarre and splendid that it’s hard to understand how anyone, confronted with this, could not be awed by the complexity and surprises the universe has to offer.

After it was over, Shelly turned to me and said “How come more people know about Britney Spears’ sister than know about this?” And you know, I don’t have an answer.

Some thoughts on complexity and human consciousness

A couple weeks ago, I decided to take out the trash. On the way to the trash can, I thought, “I should clean out the kitty litter.” Started to clean the litterbox, and thought, “No, actually, I should completely change the litter.” Started changing the litter, then realized that the cat had dragged some of it out on the floor. “Ah, I should get out the vacuum,” thought I.

Next thing you know, I’m totally cleaning the apartment, one end to the other.

On my way out to the dumpster, I started thinking about hourglasses. And that’s really what this post is about.


If you have ever watched the sand falling in an hourglass, you know how it goes. The sand in the bottom of the hourglass builds up and up and up, then collapses into a lower, wider pile; then as more sand streams down, it builds up and up and up again until it collapses again.

I don’t think any reasonable person would say that a pile of sand has consciousness or free will. It is a deterministic system; its behavior is not random at all, but is strictly determined by the immutable actions of physical law.

Yet in spite of that, it is not predictable. We can not model the behavior of the sand streaming through the hourglass and predict exactly when each collapse will happen.

This illustrates a very interesting point; even the behavior of a simple system governed by only a few simple rules can be, at least to some extent, unpredictable. We can tell what the sand won’t do–it won’t suddenly start falling up, or invade France–but we can’t predict past a certain limit of resolution what it will do, in spite of the fact that everything it does is deterministic.
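This deterministic-but-unpredictable behavior is easy to reproduce in a few lines of code. The sketch below is a one-dimensional toy sandpile (the model is a standard textbook toy; the function name and parameters are my own choices for illustration): grains drop onto the center cell, and any cell holding too many grains topples, passing one grain to each neighbor. The rules are rigidly fixed, yet the sizes of the resulting collapses vary widely.

```python
# A toy pile of sand. Every rule below is fixed and deterministic,
# yet the sizes of the collapses (avalanches) vary unpredictably.

def drop_grains(n_cells=11, n_grains=200, threshold=2):
    pile = [0] * n_cells
    center = n_cells // 2
    avalanche_sizes = []
    for _ in range(n_grains):
        pile[center] += 1
        topples = 0
        # Relax the pile: any cell at or above the threshold topples,
        # sending one grain to each neighbour; grains at the edges fall off.
        unstable = True
        while unstable:
            unstable = False
            for i in range(n_cells):
                if pile[i] >= threshold:
                    pile[i] -= 2
                    if i > 0:
                        pile[i - 1] += 1
                    if i < n_cells - 1:
                        pile[i + 1] += 1
                    topples += 1
                    unstable = True
        avalanche_sizes.append(topples)
    return avalanche_sizes

sizes = drop_grains()
print("distinct avalanche sizes seen:", sorted(set(sizes)))
```

Run it twice and you get exactly the same sequence of avalanches–deterministic–but you would be hard pressed to guess the size of the next collapse without running the whole simulation.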

The cascading sequence of events that started with “I should take out the trash” and ended with cleaning the apartment felt like a sudden, unexpected collapse of my own internal motivational pile of sand. And that led, as I carried bags of trash out to the dumpster, to thoughts of unpredictable deterministic systems, and human behavior.


The sand pouring through the hourglass is an example of a chaotic system: one that’s completely deterministic, yet exhibits very complex behavior that is exquisitely sensitive to initial conditions. If you take just one of the grains of sand out of the pile forming in the bottom of the hourglass, flip it upside down, and put it back where it was, the sand will now have a different pattern of collapses. There’s absolutely no randomness to it, yet we can’t predict it because predicting it requires modeling every single action of every single individual grain, and if you change just one grain of sand just the tiniest bit, the entire system changes.
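That exquisite sensitivity shows up even in systems far simpler than a sandpile. The logistic map is a standard one-line example from chaos theory; this sketch (starting values and step count are my own choices) nudges the initial value by one part in a billion and watches the two trajectories part company.

```python
# Sensitivity to initial conditions in a completely deterministic system:
# the logistic map x -> r*x*(1-x) in its chaotic regime (r = 4).
# Two starting points differing by one part in a billion end up
# nowhere near each other after a few dozen iterations.

def iterate(x, r=4.0, steps=50):
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

a = iterate(0.2)
b = iterate(0.2 + 1e-9)   # the "one flipped grain of sand"
print(a, b, abs(a - b))
```

The rule never changes and involves no randomness at all; the only difference between the two runs is the ninth decimal place of the starting value.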

Now, the human brain is an extraordinarily complex system, much more complex both structurally and organizationally than a pile of sand, and subject to more complex laws. It’s also reflexive; a brain can store information, and its future behavior can be influenced not only by its state and the state of the environment it’s in, but also by the stored memories of past behavior.

So it’s no surprise that human behavior is complex and often unpredictable. But is it deterministic? Do we actually have free will, or is our behavior entirely determined by the operation of immutable natural law, with neither randomness nor deviance from the single path dictated by that immutable natural law?

We really like to believe that we have free will, and that our behavior is subject to personal choice. But is it?


In the past, some Protestant denominations believed in predestination, the notion that our lives and our choices were all determined in advance by an omniscient and omnipotent god, who made our decisions for us and then cast us into hell when those decisions were not the right ones. (The Calvinist joy in the notion that some folks were pre-destined to go to hell was somewhat tempered by their belief that some folks were destined to go to heaven, but on the whole they took great delight in the idea of a fiery pit awaiting the bulk of humanity.)

The kind of determinism I’m talking about here is very different. I’m not suggesting that our paths are laid out before us in advance, and certainly not that they are dictated by an outside supernatural agency; rather, what I’m saying is that we may be deterministic state machines. Fearsomely complicated, reflexive deterministic state machines that interact with the outside world and with each other in mind-bogglingly complex ways, and are influenced by the most subtle and tiny of conditions, but deterministic state machines nonetheless. We don’t actually make choices of free will; free will appears to emerge from our behavior because it is so complex and in many ways so unpredictable, but that apparent emergent behavior is not actually the truth.
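For a concrete, if toy-scale, picture of a deterministic state machine whose output looks anything but mechanical, consider an elementary cellular automaton such as Rule 30. The sketch below (cell count and step count are arbitrary choices of mine) applies a fixed eight-entry lookup table over and over; the resulting pattern is irregular enough that this automaton has actually been used as a pseudo-random number source.

```python
# A trivially simple deterministic state machine: the elementary
# cellular automaton "Rule 30". The next state of each cell depends
# only on itself and its two neighbours, via a fixed lookup table,
# yet the output pattern is strikingly irregular.

RULE = 30

def step(cells):
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

cells = [0] * 31
cells[15] = 1                      # a single "on" cell in the middle
for _ in range(15):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```

There are no choices anywhere in that loop, yet the triangle of chaos it prints looks, to the eye, like anything but the product of an eight-line rule.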

An uncomfortable idea, and one that many people will no doubt find quite difficult to swallow.

We feel like we have free will. We feel like we make choices. And more than that, we feel as if the central core of ourselves, our stream of consciousness, is not dependent on our physical bodies, but comes from somewhere outside ourselves–a feeling which is all the more seductive because it offers us a way to believe in our own immortality and calm the fear of death. And anything which does that is an attractive idea indeed.

But is it true?


Some folks try to develop a way to believe that our behavior is not deterministic without resorting to the external or the supernatural. Mathematician Roger Penrose, for example, argues that consciousness is inherently dependent on quantum mechanics, and quantum mechanics is inherently non-deterministic. (I personally believe that his arguments amount to little more than half-baked handwaving, and that he has utterly failed to make a convincing, or even a plausible, argument in favor of any mechanism whatsoever linking self-awareness to quantum mechanics. To me, his arguments seem to come down to “I really, really, really, really want to believe that human beings are not deterministic, but I don’t believe in souls. See! Look over there! Quantum mechanics! Quantum mechanics! Chewbacca is a Wookie!” But that’s neither here nor there.)

Am I saying that the whole of human behavior is absolutely deterministic? No; there’s not (yet) enough evidence to support such an absolute claim. I am, however, saying that one argument often used to support the existence of free will–the fact that human beings sometimes behave in surprising and unexpected ways that are not predictable–is not a valid argument. A system, even a simple system, can behave in surprising and unpredictable ways and still be entirely deterministic.


Ultimately, it does not really matter whether human behavior is deterministic or the result of free will. In many cases, humans seem to be happier, and certainly human society seems to function better, if we take the notion of free will for granted. In fact, an argument can be made that social systems depend for their effectiveness on the premise that human beings have free will; without that premise, ideas of legal accountability don’t make sense. So regardless of whether our behavior is deterministic or not, we need to believe that it is not deterministic in order for the legal systems we have made to be effective in influencing our behavior in ways that make our societies operate more smoothly.

But regardless of whether it’s important on a personal or a social level, I think the question is very interesting. And I do tend to believe that all the available evidence does point toward our behavior being deterministic.

And yes, this is the kind of shit that goes on in my head when I take out the trash. In fact, that’s a little taste of what it’s like to live inside my head all the time. I had a similar long chain of musings and introspections when I walked out to my car and saw it covered with pollen, which I will perhaps save for another post.

Fragments of the Weekend: Fractal Misery

On the flight back from Chicago yesterday, I sat next to a woman who might just be The Most Miserable Person in the World. And I say that without even having met all the people in the world.

She stayed on the phone from the moment we boarded to the moment we pushed off from the gate, and her entire conversation was a litany of her personal misery, described in the most minute detail imaginable. For nearly forty minutes, she shared her misery with whoever was on the other end of the phone, and me, and everyone else within earshot; we all learned of how unhappy she was on the trip to the airport, how bad the traffic was, how the bus arrived five minutes late, how heavy her suitcase was, how badly she needed to use the washroom on her trip through security. We learned how she did not like the man in front of her at the security checkpoint, how the employees of the airport would not help her take her shoes off, how difficult it was for her to find her ID in her purse.

And through it all, I learned many things about misery I’ve never before known. Her misery was fractal in nature; every part of her misery, when looked at in isolation, was just as bad as her misery taken as a whole.

Take her need to use the washroom, for instance. She zoomed in on that part of her misery, a trip as dizzying as any exploration of the Mandelbrot set. The urge began before she was even through security, making her misery at the whole miserable process just that much greater. And then, once past security, when she thought she would be able to do something about it, the man ahead of her dropped his boarding pass. She tried to tell him that he had dropped his boarding pass, but he would not listen to her; even while she chased after him, still he did not listen. And he moved away from the washrooms, increasing the time she had to travel to get there. And her shoes weren’t properly tied, so walking to the washroom was that much harder…especially in light of the carry-on bag she had to carry, which was heavy and tired her…

She relayed her tale in a voice clearly practiced, honed for the task like a sushi chef’s favorite knife, the tool fitted perfectly to the job to which it was put. Each vowel held just long enough to communicate the injustice of a cold world not appropriately aware of her needs, her suffering; consonants clipped in such a way as to express her contempt and disdain for the formless, faceless forces of malice arrayed against her.

It made me wonder if there is some quantum limit, some fundamental point past which no further resolution becomes possible. Her misery was tracked in such minute detail, and reported at such astonishingly high fidelity, that I thought perhaps not. Perhaps there is no point past which the form and shape of her misery becomes lost in the fog of quantum uncertainty; perhaps her internal model of her misery really is infinite in its detail, so that any magnification, any level of zoom reveals more edges and whorls, more information about the precise contours of her suffering.

She continued her phone call until the flight attendant made her turn off the phone, her last comment to her unseen companion a bitter complaint about being forced to hang up.

There is a lesson in here somewhere, which your humble scribe is not clever enough to tease out.

I did a BAD THING…

…I installed a copy of the old-school (circa 1999) real-time strategy game Age of Empires II on David’s computer, then networked his computer with mine.

I suspect neither of us will be sleeping tonight. “Now, watch! Watch as I smash your village with my siege onagers of DOOM! Hear the wailing of your women and children; they are as music to my ears!”


Quote of the day (via Shelly):

“You are entitled to your own opinion. You are not, however, entitled to your own facts.”


And finally, via physicsduck, recreation for people who think that base jumping is too boring and safe. Dear God. I can’t believe that this actually worked, and nobody died.

Teaching a Dog Calculus

This is actually a post about transhumanism and Outside Context Problems, and an epiphany I had last time I was in Chicago.

But first…

God damn did I wake up with a bad case of the hornies this morning. Jesus Christ in Heaven, I want to fuck. I want to feel soft skin against mine. I want to trace the curve of the neck with teeth and tongue. I want to hear the little intake of breath when I discover a sensitive spot. I want to rest my hand on the curve of the hip, I want to explore the roundness of breast with my fingertips. I want to run fingernails lightly up the back of the neck and see goosebumps form. Holy fuck it’s distracting.

Also, when I crawled out of bed and stumbled into the bathroom this morning, I was all like “Ow! Ow! Ouch! Ow! What the hell?” Some time last night, it seems, the cat had scoured the house for every smallish, vaguely cylindrical object he could find, and hidden them all underneath the rug in the bathroom. Pens, a plastic travel tube of Advil, a small bullet vibrator, an AA battery…it was like walking on marbles. WTF?

None of that is what I’m actually here to say.


I’ve been thinking a great deal these days about Outside Context Problems. Put briefly, an Outside Context Problem (the term comes from Iain M. Banks) is what happens when a group, society, or civilization encounters something so far outside its own context and understanding that it is not able even to understand the basic parameters of what it has encountered, much less deal with it successfully. Most civilizations encounter such a problem only once.

For example, you’re a Mayan king. Life is pretty good for you; you’ve built a civilization at the pinnacle of technological achievement, you’ve dominated and largely pacified any competition you might have, you’ve created many wondrous things, and life is pretty comfortable.

Then, all at once, out of the blue, some folks clad in strange, impervious silver armor show up at your doorstep. They carry long sticks that belch fire and kill from great distances; some of them appear to have four legs; they claim to come from a place that you have never in your entire life even conceived might exist…

Civilizations that encounter Outside Context Problems end. Even if some members of the civilization survive, the civilization itself is irrevocably changed beyond recognition. Nothing like the original Native American societies exists today in any form that the pre-Columbians would recognize.

Typically, we think of Outside Context Problems in terms of situations that arise when one society has contact with another society that’s radically different and technologically far more advanced. But I don’t think it necessarily has to be that way.


In a sense, we are, right now, hard at work building our own Outside Context Problem, and it’s going to be internal, not external.

Right now, as I type this, one of the hottest fields of biomedical research is brain mapping and modeling. I’ve mentioned several times in the past the research being done by a Swiss group to model a mammalian brain inside a supercomputer; such a model is essentially a neuron-by-neuron, connection-by-connection emulation of a brain in a computer. Such an emulation will, presumably, act exactly like its biological counterpart; it is the connections and patterns of information, not the physical wetware, that make a brain act like it does.

This group claims to be ten years from being able to model a human brain inside a computer. Ten years, and we may see the advent of true AI.


Let me backtrack a little. The field of AI has, so far, been disappointing. For decades, we have struggled to program computers to be smart. The problem is, we don’t really quite know what we mean by “smart.” Intelligence is not an easily defined thing; and it’s not like you can sit down and break up generalized, adaptive intelligence into a sequence of steps.

Oh, sure, we’ve produced expert systems that can design computer chips, simulate bridges, and play chess far better than a human can. In fact, we don’t even have grandmaster-level human/machine chess tournaments any more, because the machines always win. Always. Deep Blue, the supercomputer that beat human grandmaster Garry Kasparov in a much-publicized match, is by modern standards a cripple; ordinary desktop PCs today are more powerful.

But these are simple, iterative tasks. A chess-playing computer isn’t smart. It can’t do anything besides play chess, and it approaches chess as a simple iterative mathematical problem. That’s about where AI has been for the last four decades.
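To make “simple iterative mathematical problem” concrete, here is the same search idea a chess engine uses, shrunk to a toy game I’ve chosen purely for illustration: the “21 game,” in which players alternately take one to three objects from a pile, and whoever takes the last object wins. The program understands nothing; it just iterates exhaustively over possibilities.

```python
# Chess engines treat the game as a search problem. This is the same
# idea at miniature scale: exhaustive minimax for the subtraction game
# where players alternately take 1-3 objects and taking the last one wins.

from functools import lru_cache

@lru_cache(maxsize=None)
def wins(pile):
    """True if the player to move can force a win with `pile` objects left."""
    if pile == 0:
        return False          # the previous player took the last object and won
    # A position is winning if ANY move leads to a losing position
    # for the opponent.
    return any(not wins(pile - take) for take in (1, 2, 3) if take <= pile)

def best_move(pile):
    """Pick a move that leaves the opponent in a losing position, if one exists."""
    for take in (1, 2, 3):
        if take <= pile and not wins(pile - take):
            return take
    return 1                  # losing position: every move is equally bad

print([pile for pile in range(1, 13) if not wins(pile)])  # -> [4, 8, 12]
```

The search correctly discovers, by brute iteration alone, that the losing positions are the multiples of four; at no point does the program have any notion of strategy.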

New approaches, though, are not about programming computers to act smart. They are about taking systems which are smart–brains–and rebuilding them inside a computer. If this approach works, we will create our own Outside Context Problem.


Human brains are pretty pathetic, from a hardware standpoint. Our neurons are painfully, agonizingly slow. They are slow to respond, they are slow to fire, they are slow to reset after they have fired, and they are slow to form new connections. All these things limit our cognitive capabilities; they impose constraints on how adaptable our intelligence is, and how smart we can become.

Computers are fast. They encode new information rapidly and efficiently. Raw computing power available from a given square inch of silicon real estate doubles roughly every eighteen months. Modeling a brain in a computer removes many of the constraints; such a modeled brain can operate more quickly and more efficiently, and as more computer power becomes available, the complexity of the model–the number of neurons modeled, the richness of the interconnections between them–increases too.
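The arithmetic behind that doubling is worth spelling out. Assuming the eighteen-month doubling period quoted above, capacity grows by a factor of two raised to the number of elapsed months divided by eighteen:

```python
# Growth under an assumed 18-month doubling period: 2**(months / 18).
doubling_months = 18
for years in (5, 10, 20):
    factor = 2 ** (years * 12 / doubling_months)
    print(f"{years:2d} years -> roughly {factor:,.0f}x")
```

Five years buys a factor of about ten; ten years, about a hundred; twenty years, about ten thousand. A modeled brain rides that curve for free.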


We humans like to make believe that we are somehow the apex of creation–and not just of creation, but of all possible creation. It pleases us to imagine that we are created in the image of some divine heavenly architect–that the universe and everything in it was made by some sapient being, that that sapient being is recognizable to us, and that that sapient being is like us. We like to tell ourselves that there is no limit to human imagination, that human intellect can understand and achieve anything, and so on.

Now, all of this is really embarrassingly self-serving. It’s also easy enough to deflate. The human imagination is indeed limited, though by definition limitations in the things you can conceive of tend to be hard to see, because you…can not conceive of things you can not conceive of. (As one person once challenged me, without apparent irony: “Name something the human imagination can’t conceive of!”)

But it’s relatively easy to find some of the boundaries of human imagination. For example:

• Imagine one apple. Just an apple, floating alone on a plain white background. Easy to do, right?
Imagine three apples, perhaps arranged in a triangle, floating in stark white nothingness. Simple, yes? Four apples. Picture four apples in your head. Got it?

Now, picture 17,431 apples in your head, each unique. Visualize all of them together, and make your mental image contain each of those apples separately and distinctly. Got it? I didn’t think so.

• Imagine a cube in your head. Think of all the faces of the cube and how they fit together. Rotate the imaginary cube in your head. Got it going? Good.

Now imagine a seventeen-dimensional cube in your head. Picture what it would look like rotating through seventeen-dimensional space. Got it?

The first example indicates one particular kind of boundary on our imaginations: our limited resolving power when it comes to holding discrete images in our imagination. The second shows another boundary; our imaginations are circumscribed by the limitations of our experiences, as perceived and interpreted through finite (and, it must be said, quite limited) senses. Quantum mechanics and astrophysics often pose riddles whose math suggests behaviors we have a great deal of difficulty imagining, because our imaginations were formed through the experiences of a very limited slice of the universe: medium-sized, medium-density mass-bearing objects moving quite slowly with respect to one another. Go outside those constraints, and we may be able to understand the math, but the reality of the way these systems work is, at best, right at the threshold of the limitations of our imaginations.
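The seventeen-dimensional cube makes this gap vivid: the mathematics is trivial even though the mental image is impossible. In this sketch (the function names are mine), the vertices of an n-dimensional cube are just all the 0/1 coordinate tuples, and a rotation is an ordinary mixing of two coordinates, no matter how many other dimensions sit around it.

```python
# We cannot picture a seventeen-dimensional cube, but the mathematics
# of one is no harder than the three-dimensional case.
import math
from itertools import product

def n_cube_vertices(n):
    # Every vertex of an n-cube is a tuple of n coordinates, each 0 or 1.
    return list(product((0.0, 1.0), repeat=n))

def rotate(vertex, i, j, angle):
    """Rotate one vertex by `angle` in the plane of coordinates i and j."""
    v = list(vertex)
    c, s = math.cos(angle), math.sin(angle)
    v[i], v[j] = c * v[i] - s * v[j], s * v[i] + c * v[j]
    return tuple(v)

verts = n_cube_vertices(17)
print(len(verts))                      # 2**17 = 131072 vertices
spun = [rotate(v, 0, 16, math.pi / 6) for v in verts]
```

A computer churns through all 131,072 vertices without complaint; it is only our visual imagination, trained on three dimensions, that balks.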


Everyone who has ever owned a dog knows that dogs are capable of a surprisingly sophisticated sort of reasoning. Dogs understand that they are separate entities; they interact with other entities, such as other dogs and humans, in complex ways; they can differentiate between other living entities and non-living entities, for the most part (though I’ve seen dogs who are confused by television images); they have emotional responses that mirror, on a simple scale, human emotional responses; they are capable of planning, problem-solving, and analytical reasoning.

They can not, however, learn calculus.

No matter how smart your dog is, there are things it can not understand and will never understand because of the biological constraints on its brain. You will never teach a dog calculus; in fact, a dog is not capable of understanding what calculus is.

Yes, I know you think your dog is very smart. No, your dog can’t learn calculus. Yes, you can too, if you set your mind to it; the point here is that there are realms of knowledge unavailable to the entire species, because all dogs, no matter how smart they may be in comparison to other dogs, lack the necessary cognitive tools to get there.

The intelligence of every organism is circumscribed in part by that organism’s physical biology. And just as there are entire realms and categories of knowledge unavailable to a dog, so too are there realms of knowledge unavailable to us. What are they? I don’t know; I can’t see them. That’s exactly the point.


To get back to the idea of artificial intelligence: A generalized AI would in many ways not be subject to the same limitations we are. One nice thing about modeled brains that isn’t true of human brains is that we can easily tinker with them. The human brain is limited in the total number of neurons within it by the size and shape of the human pelvis; we can’t fit larger brains through the birth canal. We have, in essence, encountered a fundamental evolutionary barrier.

Similarly, we can’t easily make neurons faster; their speed is limited by the complex biochemical cascade of events which makes them fire (contrary to popular belief, neurons don’t communicate via electrical signals; they change state electrochemically, by the movement of charged ions across a membrane, and the speed with which a signal travels is dependent on the speed with which ions can propagate across the membrane and then be pumped back again). They are limited in how quickly they can learn new things by the speed with which neurons can grow new interconnections, which is pretty painful, really.

But a model of a brain? What if we double the number of neurons? Increase the speed at which they send signals? Increase the efficiency with which new connections form? These are all obvious and logical paths to explore.

And the thing about generalized AI is that it’s so goddamn useful. We want it, and we’re working very hard toward it, because there are just so many things that our current, primitive computers are poor at, that generalized AI would be good at.

And one of those things, as it happens, is likely to be improving itself.


The first generalized AI will be a watershed. Even if it isn’t very smart, it can easily be put to the task of making AIs that are smarter. And smarter still. Hell, just advances in the underlying processor power of the computer beneath it–whatever that computer may look like–will probably make it smarter. Able to think faster, hold more information, remember more…and able to have whatever senses we give it, including senses our own physiology doesn’t have.

The first generalized AI might not be smarter than us, but subsequent ones will, oh yes. You can bank on that. And that soon presents an Outside Context Problem.

Because how do we relate to a sapience that’s smarter than we are?

In transhumanist circles, this is called a singularity–a change so profound that the people before the singularity can not imagine what life after the singularity is like.

There have been many singularities throughout human history. The development of agriculture, the Iron Age, the development of industrialization–all of these created changes so profound that a person living in a time before these things could not imagine what life after these things is like. However, the advent of smart and rapidly-improving AI is different, because it presents a singularity and an Outside Context Problem all rolled up into one.

In past singularities, the fundamental nature of human beings and human intelligence have not changed. A Bronze Age human is not necessarily dumber than an Iron Age human. Less knowledgeable, perhaps, but not dumber. The Bronze Age human could not anticipate Iron Age technology, but if they meet, they will still recognize each other.

But a smarter-than-us AI is different, in the ways we are different from a dog. We would not–we cannot–understand the perception or experience of something smarter than we are, any more than a dog can understand what it means to be human. And that presents an interesting challenge indeed.

Civilizations tend not to survive contact with Outside Context Problems.


Which brings me, at last, to an epiphany that I had while I was walking with dayo in Chicago.

Transhumanism is the notion that human beings can become, with the application of intelligence and will, more than we are right now. I’ve talked about it a great deal in the past, and talked about some of the reasons I am a transhumanist.

But here’s a new one, and I think it’s important.

Strong AI is coming. It’s really only a matter of time. We are learning that our own intelligence is the result of physical processes within our brain, not the result of magical supernatural forces or spirits. We are working on applying the results of this knowledge to the problem of creating things that are not-us but that are smart like us.

Now, there are several ways we can approach this. One is by creating models of ourselves in computers; another is by using advances in nanotechnology and biomedical science to make ourselves smarter, and improve the capabilities of our wet and slow but still serviceable brains.

Or, we can create something not based on us at all; perhaps by using adaptive neural networks to model increasingly complex systems in a sort of artificial evolutionary system, trying things at random and choosing the smartest of those things until eventually we create something as smart as us, but self-improving and altogether different.
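For the curious, the “try things at random and keep the smartest” idea is basically an evolutionary algorithm, and a toy version fits in a few lines. This sketch is purely illustrative–the “fitness” here just counts bits, a stand-in invented for the example, with nothing to do with actual intelligence:

```python
import random

def evolve(length=32, population=50, generations=200, seed=0):
    """Toy evolutionary loop: keep the fittest bit strings, mutate copies."""
    rng = random.Random(seed)
    fitness = sum  # count the 1-bits; our stand-in for "smartness"
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(population)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: population // 2]      # selection: fitter half lives
        children = []
        for parent in survivors:                # variation: mutated copies
            child = parent[:]
            child[rng.randrange(length)] ^= 1   # flip one random bit
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(sum(best))  # after 200 generations, at or very near the maximum of 32
```

The point of the sketch is the shape of the loop–random variation plus selection–not the particular problem it solves.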

Regardless, we have a choice. We can make ourselves into this new whatever-it-is, or we can make something entirely independent from us.

However we make it, it will likely become our successor. Civilizations tend not to survive contact with Outside Context Problems.

If we are to be replaced–and I think, quite honestly, that that is only a matter of time as well–I would rather that we are replaced by us, by Humanity 2.0, than see us replaced by something that is entirely not-us. And I think transhumanism, refined down to its most simple essence, is the replacing of us by us, rather than by something that is not-us.

Some thoughts on computer security and credulity

So recently Business Week magazine ran an article about keylogger software being used in espionage. Essentially, defense contractors are being tricked into infecting their computers with keylogger malware, sent in targeted emails that appear to come from the Pentagon and other governmental sources.

The thing I find interesting about this, and also about things like the Storm and Kraken worms, is that they don’t take advantage of security flaws or vulnerabilities. They don’t attack holes in a computer’s operating system or applications, and they don’t rely on technical exploits of programming errors. These attacks all rely on tricking the victim into deliberately, intentionally infecting himself.

For that reason, I don’t think there’s a technological solution. The solution to a human gullibility problem isn’t in better programming or more elaborate firewalls; it’s in user education. No matter how sophisticated and bulletproof a security system is, there’s no defense against a person who deliberately chooses to permit someone through it.

But when it comes to the Intertubes, folks don’t get that.


If we had a situation where a criminal walked into a bank and, without weapons or violence, tricked a security guard into opening the vault for him and handing him all the money inside, we would not say “Oh, we need to build bigger vaults with thicker doors and more complicated locks!” It’s obvious to anyone who thinks about it for a moment that a bigger door or thicker walls won’t prevent someone from tricking a gullible guard into unlocking the door.

Yet with computer malware, we tend to jump on technological solutions. Someone in China tricks an American defense contractor into deliberately installing a keylogger on his computer, and everyone says “We need tighter computer security and more computer defenses.” Which is as pointless and ineffectual as saying “we need thicker bank vault walls” if someone persuades the guard to intentionally, deliberately unlock the vault door and hand him the money.

What we need isn’t better computer security; better computer security will not and can not address this kind of problem. What we need is less gullible people.


A few weeks back, someone posted an ad on Craigslist saying that they were moving suddenly and they needed to get rid of everything in their house, including their horse. They said that the house would be unlocked and anyone who wanted to could come and take anything they liked. Hundreds of people showed up and ransacked the house, even taking light fixtures and plumbing fixtures.

Needless to say, the Craigslist ad was bogus. Some people had robbed the house earlier, then posted the ad to conceal the evidence of their robbery.

Of course, the police showed up, but what was most interesting was how indignant the folks who ransacked the house were. They were angry and upset that the police tried to stop them. Many of them waved printouts of the Craigslist ad around, as if it justified what they were doing. They genuinely, sincerely believed that the ad on Craigslist meant they were doing nothing wrong.

That’s the mentality a lot of folks–including folks who ought to know better, including defense contractors–have. They truly believe that if an email says it is from someone they know and they should download and run the attached program, it must be OK to do. They sincerely think that if they see it in an email, it can not possibly be false. And that gullibility makes them easy to dupe.


These are not idiots. If a person walked up to them on a street and said “I live at 423 Main Street but I have to move in a hurry, so go into that house and take anything you like,” they’d be like “Yeah, right.” If someone walked into their office and said “I’m from the Pentagon, take this CD and run the program that’s on it,” they’d never in a million years do it.

But because it’s on the Intertubes, somehow it gets past their bullshit filters, and they suspend their ordinary skepticism. And I think that’s really, really interesting.


One of my all-time favorite books is Why People Believe Weird Things: Pseudoscience, Superstition, and Other Confusions of Our Time, by Michael Shermer, who’s one of my personal heroes. I met him briefly at a science fiction convention last October, and he’s just as amazing in person as he is in print.

One of the things he talks about, and one of the things I’ve written about as well, is the idea of the brain as a “belief engine,” a tool for forming beliefs about the physical world. As a tool for survival, the brain works amazingly well, but survival pressures have tended to shape and mold it in such a way that its default state is to accept ideas uncritically rather than reject them. For our early hunter-gatherer ancestors, the consequences of accepting a false belief (“keeping this magic stone in my pocket will help me ward off evil spirits”) were generally less dire than the consequences of rejecting true beliefs (“a leopard is dangerous to me,” “keeping upwind of my prey will cause my prey to escape more often”), and so we have developed these amazing brains that find it much easier to accept than to reject ideas.

On top of that, our brains are so highly optimized for efficient and rapid pattern recognition that they can tend to see patterns even where none exist (“when I updated to OS X 10.4.11, my hard drive failed; the update was responsible for the failure”).


I wrote an essay about the belief engine a while back. I think it applies to things like Internet hoaxes and Trojan-horse malware because we are wired by selective adaptation to accept ideas uncritically, and we are taught from a young age when that kind of uncritical acceptance is dangerous–but only in certain contexts.

Everyone (well, almost everyone) learns from an early age not to trust strangers. So if a stranger stopped us on the street and said “I live in the house at the end of the block but I have to leave, so walk on in and take whatever you like,” there’s no way we’d believe him. But we aren’t taught to distrust the Internet.


To make matters worse, I think the Internet confuses people by messing with the signs we have been taught to accept to mark trustworthy people and institutions. We are taught to separate folks within our sphere of trust from folks outside of it, but we are not taught that this trust doesn’t extend to the Internet.

So, for example, most of us trust our mothers. If we receive an email and it’s got Mom’s “from” address on it and claims to be a greeting card, we’ll likely download it and run it without a second thought, because we trust Mom. What we haven’t been taught is not to trust the From: address on any email. People don’t realize how easily it can be faked; the email is trusted because it bears the mark of being from a person inside our sphere of trust, but that mark itself is untrustworthy.
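For the skeptical, here’s how little machinery that forgery takes. This is a hypothetical sketch using Python’s standard email library; the addresses are invented, and nothing is actually sent:

```python
from email.message import EmailMessage

# The sender writes every header himself, including From:.
# All addresses here are invented for illustration.
msg = EmailMessage()
msg["From"] = "Mom <mom@example.com>"  # nothing checks whether this is true
msg["To"] = "you@example.com"
msg["Subject"] = "A greeting card for you!"
msg.set_content("Click the attachment to see your card!")

# The "mark of trust" is simply whatever the attacker typed:
print(msg["From"])  # Mom <mom@example.com>
```

Real mail systems have bolt-on countermeasures (SPF, DKIM), but the From: line itself is, and always was, just a claim.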

Same deal for a defense contractor who receives an email that claims to be from his Pentagon contact. Because the email carries a mark of a person inside the sphere of trust, the email is accepted.

Phishing scams rely on that, too. We mostly trust our banks, and we are familiar with what our bank Web site looks like. So we associate things like the bank’s logo and the bank’s Web site layout, which are familiar and comforting, with that feeling of trust. We so strongly associate things like the bank’s logo with the bank itself that just the appearance of the bank’s logo can make whatever it’s attached to seem trustworthy.

In contemporary society, this is intentional; businesses do a lot of work and spend a lot of money to associate things like logos with the business, and to attach the logo to our emotional response. But that means the logo and the familiar Web site layout also make us trust the fraudulent phishing site. Those cues carry more weight with us than, say, the padlock that shows a secure connection, or the URL of the site, because we have never been taught about those things, while we have been taught to associate the logo with our feelings of trust in the bank. So we fall for the scam Web sites, and we voluntarily turn over information that we would otherwise be unlikely to give to anyone.
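The disconnect between the familiar-looking cue and the actual destination is mechanical, not subtle. In an HTML email, the text a link displays and the place it really goes are two separate things; this little sketch (with invented domains) pulls out the only part that matters, the real hostname:

```python
from urllib.parse import urlparse

# An HTML link pairs visible text with an unrelated destination, e.g.:
#   <a href="http://evil.example.net/login">www.mybank.example.com</a>
visible_text = "www.mybank.example.com"        # what the victim reads
actual_href = "http://evil.example.net/login"  # where the click actually goes

actual_host = urlparse(actual_href).hostname
print(actual_host)                  # evil.example.net
print(actual_host == visible_text)  # False: the familiar name proved nothing
```

The logo and the layout live in the visible text; the scam lives in the href.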


So again what happens is that we see the Internet as a technological construction, and we seek technological solutions to security problems, when perhaps it might be more effective to see the Internet as a social construct, and teach people “never trust an email from anyone” or “never trust a Web site that does not show a padlock on it” the same way we teach people “don’t talk to strangers” and “don’t give your bank account number to people you don’t know.”

I’m not saying there’s no need for technological security, mind you. There are still folks who exploit technical flaws in computers, or who attack computers using technical attacks like DNS cache poisoning or DNS rebinding attacks. Securing computer networks is still a necessary thing to do, and on that score the Internet as it now exists gets pretty dismal marks.

But what gives the Internet its power is the way people use it, not the hardware that makes it up. It is a social construct; it’s essentially nothing more than a communication medium. And any time you have communication, you have the potential for cons and fraud. I really do think that we have not yet, as a society, learned to extend the same degree of distrust to the Internet as we have to things in “real life,” and as a result the natural tendency for us to believe rather than disbelieve is easily exploited on the Internet.

Frolicon, and some thoughts on BDSM

About three weekends ago, figment_j and dayo came down to visit for Frolicon, a BDSM/alternative sexuality convention here in Atlanta. Now, you might think that sounds like a lot of fun…and you’d be right. We met up with datan0de and femetal, and more than a few good times were had by all.

Now, in some ways I think that my own approach to BDSM–or at least the things about BDSM that draw me to it–is a little unusual, at least in comparison to what I see in others. I’ll get to that in a minute. First, some notes about the con itself.

Which was a blast.

lolitasir gave a demo workshop on fisting, which is one hell of an awesome way to start a weekend. Somehow datan0de–at least I think it was him, it may have been one of his clones–ended up being drafted into the demo, playing the part of “lube boy.” And, all in all, there are worse positions to be in. Especially considering it is, y’know, a great way to get up close and personal with a woman writhing in ecstasy, which is always fun.

I also learned to put in a pair of contact lenses. I have a set of contacts that looks like cyborg eyes, and I swear, I have no idea how you folks who wear these damn things all the time do it so easily. Half an hour of working and swearing, it took, just to get them in, and another half an hour to get them back out again.

Lots of panels (and dayo taught me a really cool no-knot two-column tie I’ll be putting up on Symtoys at some point), lots of sushi. Going out for sushi straight from the con, in fetishwear and the whole bit, was fun.

And, of course, lots of play parties.


I had the opportunity to play with dayo and figment_j at the same time, and that by itself was a tremendous amount of fun. Play with each of them is effortless and tends to flow very well, and the three of us together have that same dynamic. figment_j and I had the pleasure of co-topping dayo, with floggers and crops and knives, oh my… After the fisting, it was time for us to turn our attention to figment_j, which is where I really noticed that my style of play, even at play parties, isn’t the same as many of the other people I see play.


I first played with figment_j in a public play party last year. One of the things that I found with her, and one of the things that delights me a great deal about her, is her fearlessness when it comes to exposing herself emotionally. The two of us seem to have a very natural kind of unspoken language when we play, that extends far beyond the physical things we do.

It’s been my observation that many of the people I’ve watched play in public are willing to expose their bodies for whatever scenarios they and their partners create, but are less willing to expose their emotional selves. And certainly in a situation where a person is playing casually, especially with a new partner, that makes sense.

But one of the things that most delights me about figment_j is how easily and readily she makes herself emotionally vulnerable, and how effortlessly we carve out a very private space even when we’re surrounded by people. It was fun to see how that private space expanded to include dayo, too.

I’ve experienced the same thing with dayo, and it does seem to me that this kind of intimacy is not the norm in public play spaces. It takes, I think, a very particular kind of courage to play that way.

Later, when figment_j and I were talking about it, she was expressing frustration that she can’t do the kind of edgy physical play that she’s seen other people do. There was, for example, a person being whipped with singletails at the same time as we were playing–something that’s definitely a nontrivial kind of scene.

I think, though, that the best measure of an activity is in how the people involved respond to it, and in the psychological environment it creates, rather than in the nature of the physical activities, or the number of bruises it leaves. (Don’t get me wrong; I love leaving marks on my partners, oh yes. But that’s not the measure of the quality of the encounter, not by a long shot.)


I get quite a lot of email from my BDSM pages every month, and one common theme I’ve seen in a lot of the email is people saying “I’ve heard of [insert some kind of activity here], and I just don’t see myself getting into that–I’m worried that I’m not a ‘good’ submissive.”

I think that kind of idea can be especially easy to fall into at a play party, where you might be exposed to a wide range of different activities–singletail play, knife play, piercing play, needle play–I’ve even watched people doing fire play at a play party (sans fire extinguisher, which kind of ticked me off, but that’s a whole different issue altogether). Since it’s easier to see the physical side of the things going on than it is to see the emotional side, I think the tendency exists to say “So that’s what BDSM is all about; I don’t want to do those things; that must mean I’m not really doing it right.”

But for me, the stuff that happens behind my partner’s eyes is the interesting stuff. The various techniques that get us there are more or less irrelevant; they’re just the path to the destination. It’s the destination itself, not the road you take to get there, that matters.

And I do realize that approach is somewhat unusual. For many people I’ve talked to, it’s the activities themselves that matter. And, yes, I do get that, too. Being flogged, for example, just plain feels good–in fact, I’ve seen people reach orgasm just from a flogging alone. For many people, in the right context and with the right partner, things that are painful become intensely pleasurable. And that’s totally cool. I like getting my partner off; I like doing things that my partner likes.

But I also like creating that shared emotional vulnerability while we’re at it. That, for me, extends the activity beyond physical pleasure, into a much more emotionally charged space. It creates a physical and emotional dance that, properly done, really lets you see right into your partner’s soul.

And I dig that.

Some thoughts on communication

Eliot Spitzer

This man has a problem. Actually, he has several problems — he’s just resigned from the office of the governor of New York, he’s facing an FBI probe, and his wife is well and truly pissed off at him. But really, those aren’t his problems; they’re merely the consequence of his real problem.

As you’re no doubt aware unless you live under a rock or in Kansas City, this man is in a lot of trouble. He’s in a lot of trouble for a very simple reason: he had sex with this woman.

Now, I already know what you’re thinking. “How can the person that someone has sex with possibly have any bearing on his ability to govern the state? What, did she break into his office and steal government funds? Was she engaged in industrial espionage for a shadowy group of French business executives? What difference can it possibly make?”

And I agree with you. I won’t pretend to understand our cultural obsession with the penises of elected government officials; it’s a little weird, and a little unhealthy, and a little stupid.

That’s not the problem, though.


The woman into which Governor Spitzer inserted his member is, or rather was, a very high-priced call girl, which is the euphemism we use for prostitutes who make more than a certain amount of money. The term “prostitute” carries to our sexually repressed, Puritanical ears certain…unsavory connotations, but fortunately, as with all things American, a sufficient application of money is often effective at removing the stain. Hence, a person who charges $100 for sex is a prostitute, whereas a person who charges $4,500 an hour for sex, as Ms. Dupre is alleged to have done, is a “call girl.”

Now, I don’t know about you, Gentle Reader, but when I hear of folks making $4,500 an hour for having sex, all I can think is that I’m in the wrong goddamn business. And hey, if Ms. Dupre can make that kind of money without even getting out of bed, more power to her, says I. I frankly have no interest in the adventures of a politician’s penis, nor in the amount of money those adventures cost. Some people spend their mad money on skiing, some folks buy $1,200 titanium golf clubs…hell, if I were to trade money for recreation, and those were my choices, you could bet I wouldn’t be buying the golf clubs. Stupid goddamn sport anyway…but I digress.

Now, it appears that Mr. Spitzer may have spent official State of New York funds on doing the horizontal mambo with Ms. Dupre, and engaged in some complicated financial handwaving to conceal that. Which is a problem; in fact, I believe there are even words for that sort of behavior. “Fraud,” for one. And “corruption,” that’s a good word. “Embezzlement,” too.

That’s still not the problem, though.


As news of this whole penis-related affair broke, the predictable wailing in the media began. “How can this happen?” some people asked. (Well, it’s really quite simple. You take some money, you give it to a person–I’m told it’s customary to leave it on the dresser–and in return, that person engages in sexual intercourse with you.) “Who would think that a powerful political figure would do such a thing?” other people–presumably, people who are not students of history–asked.

Magazines ran articles about how Men Are Like That, and Our Biology Makes Men Cheat And Women Fidelitous…because there’s nothing we like more than pop junk science that affirms cultural norms. Religious leaders wailed about The Death of Public Morality (from the smell of the corpse, I think it’s probably been dead for about as long as we’ve walked upright on three legs…but again, I digress).

Some folks wondered Why A Powerful And Successful Man Would Need a Prostitute, which betrays a profound lack of insight into the nature of power. A man in Mr. Spitzer’s position doesn’t pay for sex because he can’t get his dick wet any other way; he pays for sex because his money is an extension of his power. By exchanging money for sex, the way he wants it, on his terms, when he wants it, with the implied understanding that the person to whom he is giving this money is going to go away when it’s over, he is exerting power over the world around him; he can call up sex, and dictate its terms, at any time he pleases.

Now, far be it from me to cast any negative words on the notion of mixing power and sex; far from it. I’m a big fan of the idea of sex as an expression of power, and indeed spent about two hours last night expressing sexual power with dayo, a process that involved two vibrators, sixteen feet of rubber tubing, and a great deal of screaming. (Okay, so I lied about the rubber tubing, and once again, I digress.)

I personally don’t project power by means of money, largely because…err, I haven’t got enough money to make a very compelling statement. “Drop your pants and I’ll give you a dollar” doesn’t really do it, you know? Also, though, because I really don’t like that particular expression of power; the business of sex tends to commodify the folks involved, and my partners are not interchangeable. I’m not keen on the implicit “go away without a fuss after we’re done” part of the equation.

That’s not the problem either.


The problem is basic. In the transcripts that came out on the news after the state of Mr. Spitzer’s penis was uncovered, it was claimed that he had a fondness for asking those people with whom he exchanged sex for money to do unusual things, or even “dangerous” things. Now, I have no idea what that means, and the folks who do know aren’t telling. I’ve probably got a wildly miscalibrated scale for evaluating unusual and dangerous things in bed; when I think “unusual and dangerous,” things like fire, knives, and trying to tell one of my sweeties how to live her life spring to mind. For other folk, maybe it’s more a question of letting her be on top without a condom, I dunno.

But anyway, that’s getting close to the problem. Forget issues of projecting power through money; forget issues of the thrill of getting some on the sly. If it’s “unusual and dangerous” our boy Eliot wanted, one might reasonably surmise he wasn’t getting it at home.

Which probably means he wasn’t asking for it at home. In fact, it would surprise me not one whit to learn that if his wife ever discovered the whatever-it-is that Mr. Spitzer is into, she’d be startled, shocked, stunned, surprised, and other words beginning with the letter “s”. My hunch? Eliot’s been kinked for quite some time, and his wife of twenty years now (anyone want to take any bets on the two of them hitting twenty-one?) doesn’t know a goddamn thing about him.

So when faced with an urge for the unusual and dangerous, he hired a stand-in.

It’s hard to know where to start with this. Actually, no, I take that back. It’s easy to know where to start with this. Let’s start with how goddamn fucking ridiculous it is to spend two decades, or more than one-quarter of one’s normal life expectancy, with a person that you don’t even talk to about yourself. Seriously. What do these two talk about, the weather? Jesus fucking Christ on a pogo stick, this isn’t rocket science. You want to get down and get jiggy with the trapeze and the Day-Glo Silly String, say so! Partnerships are built on communication and trust, you know?

I have conversations–my God, do I have conversations–with folks all over the place about this. I get emails from my Web site, I see folks posting in net forums and on mailing lists: “I know communication is important, but…”

There’s no “but.” The correct way to punctuate the phrase “I know communication is important” is with a period at the end. That’s it. No fucking “but.” The “but” that inevitably follows always ends up boiling down to “but it feels awkward to expose myself to my partner and I’m scared of feeling awkward” or “but what if my partner says no” or “but what if rabid shapeshifting werewolf-aliens from the planet Zolog-9 come and carry us away for unspeakable experiments aboard the mothership” or some other real-seeming but ultimately kind of silly thing that’s a damn stupid reason to undermine and corrode the very foundation of a romantic relationship.

There’s also the little niggling subtext: “Of course I wouldn’t want to tell my partner about it, because what if she thinks poorly of me? But it’s cool to tell a prostit–err, call girl, ’cause, y’know, it doesn’t matter what they think.” And that’s a little creepy, but kinda beside the point.

Now, there’s a universal rule of life that I always tell folks: you can’t reasonably expect to get what you want if you don’t ask for what you want. Clearly, I’m wrong; it’s more like you can’t reasonably expect to get what you want if you don’t ask for it, or if you don’t have a pile of money you can use to buy it from someone whose opinion on the subject doesn’t matter to you. But that’s beside the point, too. The real issue at work here is that Mr. Spitzer went elsewhere–with the taxpayers’ money, Eliot, you naughty boy–quite likely because he couldn’t find the guts to ask for what he wanted from the one person who had pledged her love and commitment to him.

And that’s pretty damn stupid, if you ask me. Which, I realize, nobody has, but still.

At least we can trust American pop culture to get it right. In all the media circus surrounding this whole sad tale of a powerful political figure’s penis, only VH-1’s coverage has got it right:

Some thoughts on socialism and capitalism

This post has been rattling around in my head for a while, and was finally prompted by a post left in sterno‘s journal.

Now, before we get started, let me make one thing abundantly clear. I am a capitalist. I am probably the biggest capitalist you will ever meet. For more than a decade, I have made money directly from the work that I do, without relying on an outside business for my full support. Even now, as a salaried employee, I am a minority partner in the company which employs me, and I have at least two other business ventures running at any given time, one of which typically pays my rent.

I am not a socialist, nor do I believe socialism is anything but a broken and inherently unworkable economic system which does little besides deprive the citizens who live under it of benefiting from their own labor.

However, I am also a fan of government oversight of business, and of environmental and social restrictions on the actions of business.

“But Franklin,” you say, “how can that be? Isn’t that a form of socialism? Isn’t the whole point of capitalism the notion that market efficiencies work best when unencumbered by government intrusion?”

And the answer is “no,” because without such oversight, businesses tend to adopt a weird sort of pancake socialism–an inverted socialist system where profit is concentrated, but the costs of doing business and the risks associated with business practices are socialized.


There are tangible risks associated with environmentally or socially negligent behavior. Take, for example, a hypothetical chemical business that produces acetic acid, and as a byproduct produces methylmercury. Methylmercury is difficult and expensive to contain and to get rid of safely, so let us assume that the business disposes of it by dumping it into a lake. (This is not entirely hypothetical; a company doing just that in the Japanese city of Minamata in 1956 caused the largest case of mass mercury poisoning on record.)

The business that pumps methylmercury into a lake is increasing the risk of serious health consequences for the people living around that lake. Those risks come with a significant dollar value attached; in this hypothetical case, the dollar value may be the cost associated with medical treatment, the cost incurred by lost productivity, and the cost inflicted on the local fishing industry as the industry collapses.

These costs are not borne by the business that did the dumping. The business is not really a capitalistic enterprise; it keeps the profits from its various activities, sure, but it does not pay the costs associated with the risks incurred by its business methods. Those risks are socialized–spread across the population.

In a conventional socialist arrangement, the one everyone thinks of when they think “socialism,” a worker works but does not keep the profits from his work. The profits–the results of his labor–are distributed across the population.

In the inverted socialism that comes along with lax regulation of environmental and social practices, a business keeps the profits from its work, but the costs associated with doing business are distributed across the population. This artificially increases the business’ profit; the socialization of risk means that some of what would otherwise be the business’ expenses are paid by the community–even those who do not work for that business–and by other businesses impacted by the first business’ practices. Profit is not distributed, but cost and risk are.

This socialization of risk amounts to a subsidy paid by the people surrounding the business which inflates the business’ worth and increases its profits without increasing production or efficiency. Because the risks are subsidized and the costs associated with those risks are socialized, businesses which operate in a manner that socializes risk end up at a competitive advantage over businesses which shoulder the full costs of doing business.
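To put some invented numbers on that subsidy: suppose safe disposal costs $40 a ton, dumping is free to the firm, and the dumping inflicts $100 a ton of medical and fishery costs on the neighbors. (All figures are hypothetical, chosen only to make the arithmetic plain.)

```python
# Per-ton figures, all hypothetical.
revenue = 500
production_cost = 430
safe_disposal_cost = 40  # paid only by the firm that disposes responsibly
community_harm = 100     # paid by the neighbors of the firm that dumps

clean_firm_profit = revenue - production_cost - safe_disposal_cost
dumping_firm_profit = revenue - production_cost  # disposal is "free" to it

print(clean_firm_profit)    # 30
print(dumping_firm_profit)  # 70: a 40-per-ton subsidy from the community

# Counting everyone's gains and losses, dumping makes society poorer:
print(dumping_firm_profit - community_harm)  # -30, versus +30 for the clean firm
```

The dumping firm looks more profitable on its own books and can undercut the clean one on price, even though every ton it produces leaves the community as a whole worse off.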

It need not even be something as blatant as dumping toxic byproducts into the environment, and thereby socializing the risk and forcing others to assume the costs associated with that risk. This kind of “pancake socialism,” or inverted socialization of risk, can happen even in the service sector.

For example, when an independent mortgage writer writes a mortgage, he is paid a percentage of the value of that mortgage, and at that point he’s done. The company that underwrites the mortgage, which may or may not own the mortgage throughout its entire life, shoulders the risk associated with it, but the guy who initially sold it has a different set of motivations. He is paid for every mortgage he writes, regardless of whether the underwriter profits from it or it goes into default. His incentive, therefore, rests only with writing the maximum number of mortgages possible, for the highest dollar value possible.

That gives him a very powerful incentive to issue risky mortgages, to artificially inflate the ability of the person buying the mortgage to pay, and to minimize the apparent costs associated with the mortgage. In fact, absent any kind of oversight, he may even have incentive to intentionally mislead his clients about the cost, and to write mortgages which he knows damn well his clients can not afford. He does not bear the costs associated with the risk incurred by the mortgage underwriter.
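The misaligned incentive is easy to put in numbers. Suppose, hypothetically, the mortgage writer earns a 1% fee on every mortgage he writes, and the underwriter eats half the principal of every loan that defaults:

```python
# Hypothetical terms, invented for illustration.
FEE_RATE = 0.01        # the writer's cut of each mortgage's principal
LOSS_ON_DEFAULT = 0.5  # fraction of principal the underwriter loses

def outcomes(mortgages):
    """mortgages: list of (principal, defaulted) pairs."""
    writer_income = sum(p * FEE_RATE for p, _ in mortgages)
    underwriter_loss = sum(p * LOSS_ON_DEFAULT for p, d in mortgages if d)
    return writer_income, underwriter_loss

# Ten risky $300,000 mortgages, four of which go into default:
income, loss = outcomes([(300_000, i < 4) for i in range(10)])
print(income)  # 30000.0 -- the writer's fee is untouched by the defaults
print(loss)    # 600000.0 -- the defaults land entirely on the underwriter
```

Notice that the writer’s income depends only on how many mortgages he writes and how big they are; the default column never enters his side of the ledger at all.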

The mortgage underwriter is in a similar position. It profits from writing mortgages; obviously, if the number of mortgages which go into default reaches a certain threshold, the underwriter will fail, but the more mortgages it underwrites in the short term, the more profit it generates, particularly when it socializes its own risk by then turning around and selling those mortgages to others.

The total amount of money available to finance mortgages is finite. If a large number of mortgages go into default, this can diminish the pool of money available, which ends up dragging down much of the rest of the economy. A society which permits mortgage lenders to operate with little oversight is a socialist society; it encourages the socialization of risk by separating the risk from the profits. If the housing industry fails…well, the mortgage agents and the owners of mortgage issuing companies still made their millions; they’re set. The costs of the failure are not borne by those individuals; the costs are socialized, and end up being paid by everyone, regardless of whether or not they benefitted from the mortgages.


“Socialism” is something of a dirty word in American culture. The best way to defeat any policy is to label it “socialist.” Yet we are a highly socialist society; it’s just that we socialize risk, and we socialize cost, but we don’t socialize profit. Businesses that work without oversight are socialized businesses; they expect everyone else to pay for their operational costs, while still concentrating profits internally.

This imposes significant barriers to entry into many industries; the socialization of risk benefits large businesses over small businesses. It also amounts to a hidden cost subsidy for businesses in areas where oversight is poor when they compete with businesses in areas where the full cost of doing business–including the cost of waste management and risk management–must be paid.

And you know what? As a capitalist, I think that’s fucked up.