I for one welcome our new AI overlords

I’ve been thinking a lot about machine learning lately. Take a look at these images:

Portraits of people who don't exist

These people do not exist. They’re generated by a neural net program at thispersondoesnotexist.com, a site that uses Nvidia’s StyleGAN to generate images of faces.

StyleGAN is a generative adversarial network, a neural network trained on hundreds of thousands of photos of faces. One part of the program generated images of faces; another part (the “adversarial” part, a discriminator) tried to tell those generated faces apart from real photos. Whenever the fakes fooled the discriminator, the connections that produced them were strengthened; when they didn’t, they were weakened. And so, over many iterations, its ability to create convincing faces grew.
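
For the programmers in the audience, here’s roughly what that feedback loop looks like in code. This is a toy sketch in PyTorch, not StyleGAN itself (the real thing is vastly bigger and more elaborate); the tiny generator and discriminator networks and the random “photo” batch are stand-ins I made up purely to illustrate the two-player training step.

```python
# Toy GAN training step: a generator tries to fool a discriminator,
# and both are nudged based on how that contest goes.
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 28 * 28  # arbitrary toy sizes

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

def training_step(real_images):
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1. Train the discriminator: reward it for telling real from fake.
    noise = torch.randn(batch, latent_dim)
    fake_images = generator(noise).detach()
    d_loss = loss_fn(discriminator(real_images), real_labels) + \
             loss_fn(discriminator(fake_images), fake_labels)
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2. Train the generator: reward it when its fakes fool the discriminator.
    noise = torch.randn(batch, latent_dim)
    g_loss = loss_fn(discriminator(generator(noise)), real_labels)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()

# Stand-in for a batch of real photos; a face dataset would go here.
print(training_step(torch.rand(32, image_dim) * 2 - 1))
```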

If you look closely at these faces, there’s something a little…off about them. They don’t look quiiiiite right, especially where clothing is concerned (look at the shoulder of the man in the upper left).

Still, that doesn’t prevent people from using fake images like these for political purposes. The “Hunter Biden story” was “broken” by a “security researcher” who does not exist, using a photo from This Person Does Not Exist, for example.

There are ways you can spot StyleGAN-generated faces. For example, the people behind This Person Does Not Exist found that the eyes tended to look weird, detached from the faces, so they fixed the problem in a brute-force but clever way: they trained the StyleGAN to put the eyes in the same place on every face, regardless of which way the face is turned. Faces generated at This Person Does Not Exist always have the major features in the same place: eyes the same distance apart, nose in the same position, and so on.

StyleGAN fixed facial layout

StyleGAN can also generate other types of images, as you can see on This Waifu Does Not Exist:

waifu

Okay, so what happens if you train a GAN on images that aren’t faces?

That turns out to be a lot harder. The real trick there is tagging the images, so the GAN knows what it’s looking at. That way you can, for example, teach it to give you a building when you ask it for a building, a face when you ask it for a face, and a cat when you ask it for a cat.
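
To make the tagging idea concrete, here’s a toy sketch of a label-conditioned generator: the class tag is embedded and fed in alongside the random noise, so the network can learn to produce the kind of image it was asked for. This is my own illustrative sketch under made-up class names and layer sizes, not anyone’s published system.

```python
# Toy label conditioning: the class tag ("building", "face", "cat") is embedded
# and concatenated with the noise vector, so the generator learns to produce
# the category it was asked for. Purely illustrative; untrained as written.
import torch
import torch.nn as nn

classes = ["building", "face", "cat"]
latent_dim, image_dim = 64, 28 * 28

label_embedding = nn.Embedding(len(classes), 16)
generator = nn.Sequential(
    nn.Linear(latent_dim + 16, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),
)

def generate(class_name, batch=1):
    idx = torch.tensor([classes.index(class_name)] * batch)
    noise = torch.randn(batch, latent_dim)
    conditioned = torch.cat([noise, label_embedding(idx)], dim=1)
    return generator(conditioned)

print(generate("cat").shape)  # torch.Size([1, 784]) -- an (untrained) "cat"
```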

And that’s exactly what the folks at WOMBO have done. The WOMBO Dream app generates random images from any words or phrases you give it.

And I do mean “any” words or phrases.

It can generate cityscapes:

Buildings:

Landscapes:

Scenes:

Body horror:

Abstract ideas:

On and on, endless varieties of images…I can play with it for hours (and I have!).

And believe me when I say it can generate images for anything you can think of. I’ve tried to throw things at it to stump it, and it’s always produced something that looks in some way related to whatever I’ve tossed its way.

War on Christmas? It’s got you covered:

I’ve even tried “Father Christmas encased in Giger sex tentacle:”

Not a bad effort, all things considered.

But here’s the thing:

If you look at these images, they’re all emotionally evocative; they all seem to get the essence of what you’re aiming at, but they lack detail. The parts don’t always fit together right. “Dream” is a good name: the images the GAN produces are hazy, dreamlike, insubstantial, without focus or particular features. The GAN clearly does not understand anything it creates.

And still, if an artist twenty years ago had developed this particular style the old-fashioned way, I have no doubt that he or she or they would have become very popular indeed. AI is catching up to human capability in domains we have long thought required some spark of human essence, and it is doing so scary fast.

I’ve been chewing on what makes WOMBO Dream images so evocative. Is it simply promiscuous pattern recognition? The AI creating novel patterns we’ve never seen before by chewing up and spitting out fragments of things it doesn’t understand, causing us to dig for meaning where there isn’t any?

Given how fast generative machine learning programs are progressing, I am confident I will live to see AI-generated art that is as good as anything a human can do. And yet, I still don’t think the machines that create it will have any understanding of what they’re creating.

I’m not sure how I feel about that.

Some (More) Thoughts on Brain Modeling and the Coming Geek Rapture

The notion of “uploading”–analyzing a person’s brain and then modeling it, neuron by neuron, in a computer, thereby forever preserving that person’s knowledge and consciousness–is a fixture of transhumanist thought. In fact, self-described “futurists” like Ray Kurzweil will gladly expound at great length about how uploading and machine consciousness are right around the corner, and Any Day Now we will be able to live forever by copying ourselves into virtual worlds.

I’ve written extensively before about why I think that’s overly optimistic, and why Ray Kurzweil pisses me off. Our understanding of the brain is still remarkably poor–for example, we’re only just now learning how brain cells called “glial cells” are involved in the process of cognition–and even when we do understand the brain on a much deeper level, the tools for being able to map the connections between the cells in the brain are still a long way off.

In that particular post, I wrote that I still think brain modeling will happen; it’s just a long way off.

Now, however, I’m not sure it will ever happen at all.


I love cats.

Many people love cats, but I really love cats. It’s hard for me to see a cat when I’m out for a walk without wanting to make friends with it.

It’s possible that some of my love of cats isn’t an intrinsic part of my personality, in the sense that my personality may have been modified by a parasite commonly found in cats.

This is the parasite, in a color-enhanced scanning electron micrograph. Pretty, isn’t it? It’s called Toxoplasma gondii. It’s a single-celled organism that lives its life in two stages, growing to maturity inside the bodies of rats, and reproducing in the bodies of cats.

When a rat is infected, usually by coming into contact with cat droppings, the parasite grows but doesn’t reproduce. Its reproduction can only happen in a cat, which becomes infected when it eats an infected rat.

To help ensure its own survival, the parasite does something amazing. It controls the rat’s mind, exerting subtle changes to make the rat unafraid of cats. Healthy rats are terrified of cats; if they smell any sign of a cat, even a cat’s urine, they will leave an area and not come back. Infected rats lose that fear, which serves the parasite’s needs by making it more likely the rat will be eaten by a cat.

Humans can be infected by Toxoplasma gondii, but we’re a dead end for the parasite; it can’t reproduce in us.

It can, however, still work its mind-controlling magic. Infected humans show a range of behavioral changes, including becoming more generous and less bound by social mores and customs. They also appear to develop an affinity for cats.

There is a strong likelihood that I am a Toxoplasma gondii carrier. My parents have always owned cats, including outdoor cats quite likely to have been exposed to infected rats. So it is quite likely that my love for cats, and other, more subtle aspects of my personality (bunny ears, anyone?), have been shaped by the parasite.

So, here’s the first question: If some magical technology existed that could read the connections between all of my brain cells and copy them into a computer, would the resulting model act like me? If the model didn’t include the effects of Toxoplasma gondii infection, how different would that model be from who I am? Could you model me without modeling my parasites?


It gets worse.

The brain models we’ve built to date are all constructed from generic building blocks. We model neurons as though they are variations on a common theme, all responding in pretty much the same way. These models assume that the neurons in Alex’s head behave just like the neurons in Bill’s head.

To some extent, that’s true. But we’re learning that there can be subtle genetic differences in the way that neurons respond to different neurotransmitters, and these subtle differences can have very large effects on personality and behavior.

Consider this protein. It’s a model of a protein called AVPR-1a, which is used in brain cells as a receptor for the neurotransmitter called vasopressin.

Vasopressin serves a wide variety of different functions. In the body, it regulates water retention and blood pressure. In the brain, it regulates pair-bonding, stress, aggression, and social interaction.

A growing body of research shows that human beings naturally carry slightly different forms of the gene that produces this particular receptor, and that these tiny genetic differences result in tiny structural differences in the receptor which produce quite significant differences in behavior. For example, one subtle variation in the gene that produces this receptor changes the way that men bond to partners after sex; carriers of this particular genetic variation are less likely to experience intense pair-bonding, less likely to marry, and more likely to divorce if they do marry.

A different variation in this same gene produces a different AVPR-1a receptor that is strongly linked to altruistic behavior; people with that particular variant are far more likely to be generous and altruistic, and the amount of altruism varies directly with the number of copies of a particular nucleotide sequence within the gene.

So let’s say that we model a brain, and the model we use is built around a statistical computation for brain activation based on the most common form of the AVPR-1a gene. If we model the brain of a person with a different form of this gene, will the model really represent her? Will it behave the way she does?

The evidence suggests that, no, it won’t. Because subtle genetic variations can have significant behavioral consequences, it is not sufficient to upload a person using a generic model. We have to extend the model all the way down to the molecular level, modeling tiny variations in a person’s receptor molecules, if we wish to truly upload a person into a computer.

And that gives rise to a whole new layer of thorny moral issues.


There is a growing body of evidence suggesting that autism spectrum disorders are the result of genetic differences in neuron receptors, too. The same PDF I linked to above cites several studies that show a strong connection between various autism-spectrum disorders and differences in receptors for another neurotransmitter, oxytocin.

Vasopressin and oxytocin work together in complex ways to regulate social behavior. Subtle changes in production, uptake, and response to either or both can produce large, high-level changes in behavior, and specifically in interpersonal behavior–arguably a significant part of what we call a person’s “personality.”

So let’s assume a magic brain-scanning device able to read a person’s brain state and a magic computer able to model a person’s brain. Let’s say that we put a person with Asperger’s or full-blown autism under our magic scanner.

What do we do? Do we build the model with “normal” vasopressin and oxytocin receptors, thereby producing a model that doesn’t exhibit autism-spectrum behavior? If we do that, have we actually modeled that person, or have we created an entirely new entity that is some facsimile of what that person might be like without autism? Is that the same person? Do we have a moral imperative to model a person being uploaded as closely as possible, or is it more moral to “cure” the autism in the model?


In the previous essay, I outlined why I think we’re still a very long ways away from modeling a person in a computer–we lack the in-depth understanding of how the glial cells in the brain influence behavior and cognition, we lack the tools to be able to analyze and quantify the trillions of interconnections between neurons, and we lack the computational horsepower to be able to run such a simulation even if we could build it.

Those are technical objections. The issue of modeling a person all the way down to the level of genetic variation in neurotransmitter and receptor function, however, is something else.

Assuming we overcome the limitations of the first round of problems, we’re still left with the fact that there’s a lot more going on in the brain than generic, interchangeable neurons behaving in predictable ways. To actually copy a person, we need to be able to account for genetic differences in the structure of receptors in the brain…

…and even if we do that, we still haven’t accounted for the fact that organisms like Toxoplasma gondii can and do change the behavior of the brain to suit their own ends. (I would argue that a model of me that was faithful clear down to the molecular level probably wouldn’t be a very good copy if it didn’t include the effects the parasite has had on my personality–effects that we still have no way to quantify.)

Sorry, Mr. Kurzweil, we’re not there yet, and we’re not likely to be any time soon. Modeling a specific person’s brain in a computer is orders of magnitude harder than you think it is. At this point, I can’t even say with certainty that I think it will ever happen.

Transhumanism, Technology, and the da Vinci Effect

[Note: There is a followup to this essay here]

Ray Kurzweil pisses me off.

His name came up last night at Science Pub, which is a regular event, hosted by a friend of mine, that brings in guest speakers on a wide range of different science and technology related topics to talk in front of an audience at a large pub. There’s beer and pizza and really smart scientists talking about things they’re really passionate about, and if you live in Portland, Oregon (or Eugene or Hillsboro; my friend is branching out), I can’t recommend them enough.

Before I can talk about why Ray Kurzweil pisses me off–or, more precisely, before I can talk about some of the reasons Ray Kurzweil pisses me off, as an exhaustive list would most surely strain my patience to write and your patience to read–it is first necessary to talk about what I call the “da Vinci effect.”


Leonardo da Vinci is, in my opinion, one of the greatest human beings who has ever lived. He embodies the best in our desire to learn; he was interested in painting and sculpture and anatomy and engineering and just about every other thing worth knowing about, and he took time off of creating some of the most incredible works of art the human species has yet created to invent the helicopter, the armored personnel carrier, the barrel spring, the Gatling gun, and the automated artillery fuze…pausing along the way to record innovations in geography, hydraulics, music, and a whole lot of other stuff.

However, most of his inventions, while sound in principle, were crippled by the fact that he could not conceive of any power source other than muscle power. The steam engine was still more than two and a half centuries away; the internal combustion engine, another half-century or so after that.

da Vinci had the ability to anticipate the broad outlines of some really amazing things, but he could not build them, because he lacked one essential element whose design and operation were way beyond him or the society he lived in, both in theory and in practice.

I tend to call this the “da Vinci effect”–the ability to see how something might be possible, but to be missing one key component that’s so far ahead of the technology of the day that it’s not possible even to hypothesize, except perhaps in broad, general terms, how it might work, and not possible even to anticipate with any kind of accuracy how long it might take before the thing becomes reachable.


Charles Babbage’s computing engines (the Difference Engine and, especially, the programmable Analytical Engine) are another example of an idea whose realization was held back by the da Vinci effect.

Babbage reasoned–quite accurately–that it was possible to build a machine capable of mathematical computation. He also reasoned that it would be possible to construct such a machine in such a way that it could be fed a program–a sequence of logical steps, each representing some operation to carry out–and that on the conclusion of such a program, the machine would have solved a problem. This last bit differentiated his conception of a computational engine from other devices (such as the Antikythera mechanism) which were built to solve one particular problem and could not be programmed.

The technology of the time, specifically with respect to precision metal casting, meant his design for a mechanical computer was never realized in his lifetime. Today we use devices every day that operate on principles he imagined, but they aren’t mechanical; in place of gears and levers, they use gates that control the flow of electrons–something he could never have envisioned given the understanding of his time.


One of the speakers at last night’s Science Pub was Dr. Larry Sherman, a neurobiologist and musician who runs a research lab here in Oregon that’s currently doing a lot of cutting-edge work in neurobiology. He’s one of my heroes[1]; I’ve seen him present several times now, and he’s a fantastic speaker.

Now, when I was in school studying neurobiology, things were very simple. You had two kinds of cells in your brain: neurons, which did all the heavy lifting involved in the process of cognition and behavior, and glial cells, which provided support for the neurons, nourished them, repaired damage, and cleaned up the debris from injury or dead cells.

There are a couple of broad classifications for glial cells: astrocytes and microglia. Astrocytes, shown in green in this picture, provide a physical scaffold to hold neurons (in blue) in place. They wrap the axons of neurons in protective sheaths, and they absorb nutrients and oxygen from blood vessels, which they then pass on to the neurons. Microglia are cells that are kind of like little amoebas; they swim around in your brain locating dead or dying cells, pathogens, and other forms of debris, and eating them.

So that’s the background.


Ray Kurzweil is a self-styled “futurist,” transhumanist, and author. He’s also a Pollyanna with little real rubber-on-road understanding of the challenges that nanotechnology and biotechnology face. He talks a great deal about AI, human/machine interfaces, and uploading–the process of modeling a brain in a computer such that the computer is conscious and aware, with all the knowledge and personality of the person being modeled.

He gets a lot of it wrong, but it’s the last bit he gets really wrong. Not the general outlines, mind you, but certainly the timetable. He’s the guy who looks at da Vinci’s notebook and says “Wow, a flying machine? That’s awesome! Look how detailed these drawings are. I bet we could build one of these by next spring!”

Anyway, his name came up during the Q&A at Science Pub, and I kind of groaned. Not as much as I did when Dr. Sherman suggested that a person whose neurons had been replaced with mechanical analogues wouldn’t be a person any more, but I groaned nonetheless.

Afterward, I had a chance to talk to Dr. Sherman briefly. The conversation was short; only just long enough for him to completely blow my mind, make me believe that a lot of ideas about uploading are limited by the da Vinci effect, and to suggest that much brain modeling research currently going on is (in his words) “totally wrong”.


It turns out that most of what I was taught about neurobiology was utterly wrong. Our understanding of the brain has exploded in the last few decades. We’ve learned that people can and do grow new brain cells all the time, throughout their lives. And we’ve learned that the glial cells do a whole lot more than we thought they did.

Astrocytes, long believed to be nothing but scaffolding and cafeteria workers, turn out to be strongly implicated in learning and cognition. They not only support the neurons in your brain, but also guide the formation of new neural connections, the process by which memory and learning work. They promote the growth of new neural pathways, and they also determine (at least to some degree) how and where those new pathways form.

In fact, human beings have more different types of astrocytes than other vertebrates do. Apparently, according to my brief conversation with Dr. Sherman, researchers have taken human astrocytes and implanted them in developing mice, and discovered an apparent increase in cognitive functions of those mice even though the neurons themselves were no different.

And, more recently, it turns out that microglia–the garbage collectors and scavengers of the brain–can influence high-order behavior as well.

The last bit is really important, and it involves hox genes.


A quick overview of hox genes: these are genes which control the expression of other genes, and which are involved in determining how an organism’s body develops. You (and monkeys and mice and fruit flies and earthworms) have hox genes–pretty much the same hox genes, in fact–that represent an overall body plan. They do things like say “Ah, this bit will become a torso, so I will switch on the genes that correspond to forming arms and legs here, and switch off the genes responsible for making eyeballs or toes.” Or “This bit is the head, so I will switch on the eyeball-forming genes and the mouth-forming genes, and switch off the leg-forming genes.”

Mutations to hox genes generally cause gross physical abnormalities. In fruit flies, incorrect hox gene expression can cause the fly to sprout legs instead of antennae, or to grow wings from strange parts of its body. In humans, hox gene malfunctions can cause a number of really bizarre and usually fatal birth defects–growing tiny limbs out of eye sockets, that sort of thing.

And it appears that a hox gene mutation can result in obsessive-compulsive disorder.

And more bizarrely than that, this hox gene mutation affects the way microglia form.


Think about how bizarre that is for a minute. The genes responsible for regulating overall body plan can cause changes in microglia–little amoeba scavengers that roam around in the brain. And that change to those scavengers can result in gross high-level behavioral differences.

Not only are we not in Kansas any more, we’re not even on the same continent. This is absolutely not what anyone would expect, given our knowledge of the brain even twenty years ago.

Which brings us back ’round to da Vinci.


Right now, most attempts to model the brain look only at the neurons, and disregard the glial cells. Now, there’s value to this. The brain is really (really really really) complex, and just developing tools able to model billions of cells and hundreds or thousands of billions of interconnections is really, really hard. We’re laying the foundation, even with simple models, that lets us construct the computational and informatics tools for handling a problem of mind-boggling scope.

But there’s still a critical bit missing. Or critical bits, really. We’re missing the computational bits that would allow us to model a system of this size and scope, or even to be able to map out such a system for the purpose of modeling it. A lot of folks blithely assume Moore’s Law will take care of that for us, but I’m not so sure. Even assuming a computer of infinite power and capability, if you want to upload a person, you still have the task of being able to read the states and connection pathways of many billions of very small cells, and I’m not convinced we even know quite what those tools look like yet.
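
To put rough numbers on the scale of the problem: the usual ballpark figures are on the order of 86 billion neurons, each with thousands of synaptic connections. The back-of-envelope sketch below uses those ballpark counts plus bytes-per-synapse numbers I made up purely for illustration; it’s only meant to show how big the storage problem is before we even get to simulating anything.

```python
# Back-of-envelope scale check for a "read out the whole connectome" project.
# Neuron and synapse counts are standard ballpark figures; the bytes-per-synapse
# numbers are my own assumptions, purely for illustration.
NEURONS = 86e9             # ~86 billion neurons in a human brain (ballpark)
SYNAPSES_PER_NEURON = 7e3  # a commonly cited average, roughly 10^3 to 10^4
synapses = NEURONS * SYNAPSES_PER_NEURON

for label, bytes_per_synapse in [
    ("bare connectivity (8-byte target ID)", 8),
    ("connectivity plus a few state variables", 64),
    ("molecular-level detail (wild guess)", 4096),
]:
    total = synapses * bytes_per_synapse
    print(f"{label:42s} ~{total / 1e15:9.1f} petabytes")
```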

But on top of that, when you consider that we’re missing a big part of the picture of how cognition happens–we’re looking at only one part of the system, and the mechanism by which glial cells promote, regulate, and influence high-level cognitive tasks is astonishingly poorly understood–it becomes clear (at least to me, anyway) that uploading is something that isn’t going to happen soon.

We can, like da Vinci, sketch out the principles by which it might work. There is nothing in the laws of physics that suggests it can’t be done, and in fact I believe that it absolutely can and will, eventually, be done.

But the more I look at the problem, the more it seems to me that there’s a key bit missing. And I don’t even think we’re in a position yet to figure out what that key bit looks like, much less how it can be built. It may well be that when we do model brains, the model won’t look anything like what we think of as a conventional computer at all, much as, when we finally built general-purpose programmable devices, they looked nothing like Babbage’s engines.


[1] Or would be, if it weren’t for the fact that he rejects personhood theory, which is something I’m still a bit surprised by. If I ever have the opportunity to talk with him over dinner, I want to discuss personhood theory with him, oh yes.

Some thoughts on complexity and human consciousness

A couple weeks ago, I decided to take out the trash. On the way to the trash can, I thought, “I should clean out the kitty litter.” Started to clean the litterbox, and thought, “No, actually, I should completely change the litter.” Started changing the litter, then realized that the cat had dragged some of it out on the floor. “Ah, I should get out the vacuum,” thought I.

Next thing you know, I’m totally cleaning the apartment, one end to the other.

On my way out to the dumpster, I started thinking about hourglasses. And that’s really what this post is about.


If you have ever watched the sand falling in an hourglass, you know how it goes. The sand in the bottom of the hourglass builds up and up and up, then collapses into a lower, wider pile; then as more sand streams down, it builds up and up and up again until it collapses again.

I don’t think any reasonable person would say that a pile of sand has consciousness or free will. It is a deterministic system; its behavior is not random at all, but is strictly determined by the immutable actions of physical law.

Yet in spite of that, it is not predictable. We can not model the behavior of the sand streaming through the hourglass and predict exactly when each collapse will happen.

This illustrates a very interesting point; even the behavior of a simple system governed by only a few simple rules can be, at least to some extent, unpredictable. We can tell what the sand won’t do–it won’t suddenly start falling up, or invade France–but we can’t predict past a certain limit of resolution what it will do, in spite of the fact that everything it does is deterministic.
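
If you want to see this for yourself, the standard abstraction of a collapsing sand pile is the Bak-Tang-Wiesenfeld sandpile model, and it fits in a few lines of code. This is a toy sketch, not a physical simulation of an hourglass: every rule is completely deterministic, every grain lands on the same spot, and yet the big collapses arrive at moments you couldn’t have called in advance without running the whole thing.

```python
# The Bak-Tang-Wiesenfeld sandpile: a deterministic toy model of a collapsing
# sand pile (not a physical hourglass, but the standard abstraction of one).
SIZE = 20
grid = [[0] * SIZE for _ in range(SIZE)]  # grains stacked on each cell

def drop_grain(x, y):
    """Add one grain, topple deterministically, return the avalanche size."""
    grid[y][x] += 1
    avalanche = 0
    unstable = [(x, y)]
    while unstable:
        cx, cy = unstable.pop()
        if grid[cy][cx] < 4:
            continue
        grid[cy][cx] -= 4                  # the column collapses...
        avalanche += 1
        for nx, ny in ((cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)):
            if 0 <= nx < SIZE and 0 <= ny < SIZE:
                grid[ny][nx] += 1          # ...pushing grains onto its neighbours
                unstable.append((nx, ny))
        if grid[cy][cx] >= 4:              # still over the threshold? topple again
            unstable.append((cx, cy))
    return avalanche

for grain in range(5000):
    size = drop_grain(SIZE // 2, SIZE // 2)  # every grain lands on the same spot
    if size > 200:                           # report only the large collapses
        print(f"grain {grain}: collapse involving {size} topplings")
```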

The cascading sequence of events that started with “I should take out the trash” and ended with cleaning the apartment felt like a sudden, unexpected collapse of my own internal motivational pile of sand. And that led, as I carried bags of trash out to the dumpster, to thoughts of unpredictable deterministic systems, and human behavior.


The sand pouring through the hourglass behaves like the chaotic systems Edward Lorenz studied: completely deterministic, yet exhibiting very complex behavior that is exquisitely sensitive to initial conditions. If you take just one of the grains of sand out of the pile forming in the bottom of the hourglass, flip it upside down, and put it back where it was, the sand will now have a different pattern of collapses. There’s absolutely no randomness to it, yet we can’t predict it, because predicting it requires modeling every single action of every single individual grain, and if you change just one grain of sand just the tiniest bit, the entire system changes.
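
The same point can be made with Lorenz’s own equations, which are simple enough to integrate in a dozen lines. The sketch below uses a crude Euler integration and a starting perturbation of one part in a billion that I picked arbitrarily; two trajectories that begin essentially on top of each other end up nowhere near each other, even though every step is plain arithmetic with no randomness anywhere.

```python
# Sensitivity to initial conditions, shown with the Lorenz equations.
# Crude Euler integration; the 1e-9 perturbation is an arbitrary choice.

def lorenz_step(state, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)
b = (1.0, 1.0, 1.0 + 1e-9)   # differs by one part in a billion

for t in range(40_000):       # 40 simulated time units
    a, b = lorenz_step(a), lorenz_step(b)
    if t % 10_000 == 0:
        gap = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
        print(f"t = {t * 0.001:5.1f}  separation = {gap:.2e}")
```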

Now, the human brain is an extraordinarily complex system, much more complex both structurally and organizationally than a pile of sand, and subject to more complex laws. It’s also reflexive; a brain can store information, and its future behavior can be influenced not only by its state and the state of the environment it’s in, but also by the stored memories of past behavior.

So it’s no surprise that human behavior is complex and often unpredictable. But is it deterministic? Do we actually have free will, or is our behavior entirely determined by the operation of immutable natural law, with neither randomness nor deviation from the single path that law dictates?

We really like to believe that we have free will, and that our behavior is subject to personal choice. But is it?


In the past, some Protestant denominations believed in predestination, the notion that our lives and our choices were all determined in advance by an omniscient and omnipotent god, who made our decisions for us and then cast us into hell when those decisions were not the right ones. (The Calvinist joy in the notion that some folks were pre-destined to go to hell was somewhat tempered by their belief that some folks were destined to go to heaven, but on the whole they took great delight in the idea of a fiery pit awaiting the bulk of humanity.)

The kind of determinism I’m talking about here is very different. I’m not suggesting that our paths are laid out before us in advance, and certainly not that they are dictated by an outside supernatural agency; rather, what I’m saying is that we may be deterministic state machines. Fearsomely complicated, reflexive deterministic state machines that interact with the outside world and with each other in mind-bogglingly complex ways, and are influenced by the most subtle and tiny of conditions, but deterministic state machines nonetheless. We don’t actually make choices of free will; free will appears to emerge from our behavior because that behavior is so complex and in many ways so unpredictable, but the appearance of freedom is just that: an appearance.

An uncomfortable idea, and one that many people will no doubt find quite difficult to swallow.

We feel like we have free will. We feel like we make choices. And more than that, we feel as if the central core of ourselves, our stream of consciousness, is not dependent on our physical bodies, but comes from somewhere outside ourselves–a feeling which is all the more seductive because it offers us a way to believe in our own immortality and calm the fear of death. And anything which does that is an attractive idea indeed.

But is it true?


Some folks try to develop a way to believe that our behavior is not deterministic without resorting to the external or the supernatural. Mathematician Roger Penrose, for example, argues that consciousness is inherently dependent on quantum mechanics, and quantum mechanics is inherently non-deterministic. (I personally believe that his arguments amount to little more than half-baked handwaving, and that he has utterly failed to make a convincing, or even a plausible, argument in favor of any mechanism whatsoever linking self-awareness to quantum mechanics. To me, his arguments seem to come down to “I really, really, really, really want to believe that human beings are not deterministic, but I don’t believe in souls. See! Look over there! Quantum mechanics! Quantum mechanics! Chewbacca is a Wookie!” But that’s neither here nor there.)

Am I saying that the whole of human behavior is absolutely deterministic? No; there’s not (yet) enough evidence to support such an absolute claim. I am, however, saying that one argument often used to support the existence of free will–the fact that human beings sometimes behave in surprising and unexpected ways that are not predictable–is not a valid argument. A system, even a simple system, can behave in surprising and unpredictable ways and still be entirely deterministic.


Ultimately, it does not really matter whether human behavior is deterministic or the result of free will. In many cases, humans seem to be happier, and certainly human society seems to function better, if we take the notion of free will for granted. In fact, an argument can be made that social systems depend for their effectiveness on the premise that human beings have free will; without that premise, ideas of legal accountability don’t make sense. So regardless of whether our behavior is deterministic or not, we need to believe that it is not deterministic in order for the legal systems we have made to be effective at influencing our behavior in ways that make our societies operate more smoothly.

But regardless of whether it’s important on a personal or a social level, I think the question is very interesting. And I do tend to believe that all the available evidence does point toward our behavior being deterministic.

And yes, this is the kind of shit that goes on in my head when I take out the trash. In fact, that’s a little taste of what it’s like to live inside my head all the time. I had a similar long chain of musings and introspections when I walked out to my car and saw it covered with pollen, which I will perhaps save for another post.

Steve Jobs is God

So last night, I went to bed very late. I don’t know if it was spending the entire day playing World of Warcraft, or eating little besides leftover Subway and frozen microwave dinners, or perhaps the fact that I was working on my Web site every time I was waiting for my mage to recover mana, but for some reason I was visited by the spirit of Steve Jobs in my dreams.

The dreams were so vivid that when I woke up, I could almost feel the presence of Steve there in my bedroom. I remember talking to the Great Mr. Jobs about the inside skinny at Apple, and learning some rather…remarkable things. A small part of our conversation:

Me: So when Apple switched from PowerPC processors to Intel processors, you made it possible for users to run their old PowerPC programs.

Steve: Yes. We created an emulation program called Rosetta, which emulates a PowerPC processor on an Intel processor.

Me: Other people have done the same thing before; there’s an open-source program called PearPC that runs Mac OS X on Intel computers. But it’s very slow. I’ve seen it run; it takes about half an hour to boot. How did you get Rosetta to run so fast?

Steve: Well, for technical reasons, emulating a RISC processor like a PowerPC on a CISC processor like the ones Intel makes is very difficult to do. At first, our emulation program was very slow, too.

But then we thought, what if the laws of physics are changed? Is it possible that under different fundamental laws, emulating a RISC processor on CISC architecture might be easy? So when our engineers started going down that path, we discovered we could get much better performance.

Me: Come again?

Steve: It’s quite simple, really. Rather than emulating a processor, what if Rosetta emulated an entire universe–one where the laws of physics made running PowerPC code on an Intel chip easy? We searched through a large number of parallel universes, and found one where the basic physical properties of the universe gave us the results we wanted.

Me: Wait a minute. Are you telling me that Rosetta doesn’t emulate a processor, it emulates an entire universe?

Steve: Exactly! We got the idea from watching The Matrix. When you launch a PowerPC application, Rosetta brings a new universe into being. This particular universe has non-Euclidean geometry; it turns out that Euclidean geometry is particularly bad for emulating RISC on CISC.

Within the laws of this universe, it’s easy to run PowerPC applications on the Intel processor found in all our current computers, like our best-selling iMac or our high-end Mac Pro.

The only drawback to this approach is memory. Emulating an entire universe within Mac OS X requires significant memory, which is why we recommend that our users who still find themselves running legacy PowerPC applications install at least two gigabytes of RAM. You can add more memory to your computer as a build-to-order option from the Apple store.

Me: And this actually works?

Steve: Oh, yes. Emulating an entire universe involves more overhead, of course, but the speed advantage you get by running RISC code on a CISC processor in non-Euclidean space more than makes up for it.

Me: I’ve noticed that when I keep my computer running for a long time, PowerPC apps can suddenly start to slow down.

Steve: Yes. We’ve observed that issue in our labs as well. It has to do with the formation of life in the parallel universe.

Me: What??!

Steve: If you let Rosetta run for long enough, eventually life will arise in the universe it creates. Because emulating the complex functions of life is a processor-intensive task, the performance of PowerPC applications can diminish over time.

It’s impossible to predict precisely when this slowdown will occur, because life doesn’t always arise at the same time or in the same way. We’ve found that on an eight-core Mac Pro system, it usually takes about three or four days for life to appear. On an iMac or a MacBook, it can take longer.

When this happens, we recommend that our users quit all their Rosetta applications. This causes Rosetta to destroy the parallel universe. When you launch a PowerPC application again, Rosetta will create a brand-new universe without life in it, and performance will be restored.

Me: Is any of this life…intelligent?

Steve: Sometimes. If you let your PowerPC applications run long enough, you may see intelligent life inside of Rosetta. When this happens, you’ll notice a significant slowdown of your PowerPC apps. We recommend that you quit all your apps at this point.

Me: Waitaminit–isn’t that murder?

Steve: Technically, no.

Me: But…you’re destroying an entire universe full of sapient life!

Steve: If you look at it that way, sure. We look at it as freeing system resources.

Me: But…it’s life!

Steve: Yes. We thought about releasing a game based on Rosetta, to compete with The Sims. The game would allow the user to interact with the parallel universe created by Rosetta and take a hand in shaping the life that formed there.

Me: And?

Steve: It turns out our market research shows that people only want to play with games that emulate human life. And not just human life, but middle-class twentieth-century American human life. Dealing with non-human sapience in a non-Euclidean universe didn’t have the same draw, so in the end we left it out of iLife ’08. However, we’re working on a smart backup feature for Leopard that we’re very excited about.

Me: Do you mean Time Machine?

Steve: Oh, no. That’s a data recovery app that folds the fabric of space-time to recover accidentally deleted files by grabbing them from a past version of this universe. The new smart backup feature uses the intelligence of sapient life in a parallel universe. But that’s all I can say about it right—

And then I woke up. No more WoW and frozen TV dinners for me, I think.