Today’s mission, should you choose to accept it…

Shelly challenged me with this one a while ago, and I was just reminded of it by a conversation in IM.

Let’s say that you suspect that you are living inside the Matrix: that is, the reality into which you were born, and in which you and everyone else lives, is a simulation.

Would it be possible, using only the tools and observations you have available within that simulated reality, and without any referent to anything outside the simulation, to demonstrate conclusively that you were living in a simulation? And by the same token, if it turned out that you were not living inside the Matrix, would it be possible to demonstrate that you were not? If so, how?

Got me stumped. Ideas?

Thoughts on “Today’s mission, should you choose to accept it…”

  1. this was descartes’s thought experiment, right? suppose you’re just a brain in a vat, fooled by an evil genius into thinking you’re perceiving things…

    i’ve never been able to get any further than he did. 😉

    • actually, though my memory is fuzzy, i seem to remember he used a more-or-less literal deus ex machina to get out of it, and i’m not sure i’m willing to buy into that. but alas, that leaves me stuck being certain only that i exist, which is cold comfort.

      • That was his whole point; again from
        http://www.philosophers.co.uk/cafe/phil_aug2001.htm

        “Our knowledge of the world had to be constructed of the most solid of certainties. Descartes thought that such material could be found in God. If God truly existed, the things that we clearly and distinctly perceive must surely exist, as God is benevolent and would not allow us to be deceived. A demonstration of the necessary existence of God is thus crucial to Descartes’ plan. It is consequently unfortunate that the arguments that Descartes advanced to prove the existence of God were all flawed.”

        So we end up with his Demon and no way to prove otherwise!

    • one of the flaws in “I think, therefore I am” is that it proves nothing: you think you think because you think you remember it. Memory is fuzzy and cannot be relied on.

      Plus, “God wouldn’t let an evil demon do this” isn’t a convincing argument either.

  2. This is essentially just a modern reformulation of the questions asked by Descartes. He used the concept of an evil demon. “Descartes supposed there was a powerful evil demon whose vocation it was to deceive us. Such a mighty creature would be well equipped to feed you sensations of all sorts. The world would seem to you as it does now, but there would be nothing correspondent to any of your perceptions.”

    Today we ask similar questions but using virtual reality instead of a demon.

    The end result of this line of thought is merely “I think, therefore I am”, and that can easily lead to solipsism. Nothing you believe you sense may have an objective reality. If you can’t trust your senses then you have nothing.

    What is reality? In my opinion it’s nothing more than shared consensus on what we believe we perceive.

    See http://www.philosophers.co.uk/cafe/phil_aug2001.htm for more on Descartes.

    • In my Intro to Philosophy class we had to write a paper on his argument, but swapping the “Evil Demon” for a Psychology Undergrad who shot you with a tranquilizer dart and kept your brain hooked up to a myriad of machines underneath one of the Science buildings.

      It always made me chuckle because I’m a Psychology Undergrad.

      • Given that the average Psych Undergrad isn’t a computer geek, they’d try to run it on Windows, and so the first blue screen would give it away.

        Actually, that idea’s pretty common in SF: James P. Hogan’s “Realtime Interrupt” and Jack L. Chalker’s “Wonderland Gambit” trilogy, not to mention the Well World books 🙂

  3. That question is touched upon briefly in Iain M. Banks’s latest novel, The Algebraist.

    One of the plot points in the novel was that the galactic civilization there claimed to have come up with the first ‘scientifically proved’ religion: that our universe in fact /was/ inside a simulation. Various orders and implications stemmed from that, with missionary arms formed to ‘wake up’ everybody, or at least inform them that they were living in a simulation. It was believed that once every sentient being (or at least a critical mass) was informed of this fact, the game would be up, and the simulation would… end? Change into a new one? Something.
    The upshot of it was that the civilization was free to pursue the usual imperialist/expansionist strategy.

    Now, since this is just a sci-fi book, it was all claimed with the usual hand-waving, and he didn’t go into /how/ they proved it.

    I would imagine that it’s a matter of physics research rather than philosophy or sophistry. That is, once they got down low enough, they would see various suspicious properties of fundamental particles implying that something is off, that we’re in a simulation.

    Of course, if it were an absolutely perfect or seamless simulation, there would be no way to tell. And why would we care, in that case?
    That’s the thing that The Matrix, and its ilk, do not address. If the simulation is perfect, why privilege the ‘real’ reality?
    You only want to escape if the simulation is substandard, or somehow harmful.

    • but that’s the rub. the matrix *wasn’t* perfect. recognising the errors is what led to Neo waking up, and arguably, the imperfections are what made Agent Smith so darned grouchy all the time. (as i remember it, Smith resented his need to exist to herd the “human infection” around. if the matrix were perfect, there’d be no reason for agents to exist.)

      it’s a very platonic idea that God is perfection, and Creation is inherently flawed due to the limitations of the material used (which, incidentally, is essentially the opposite assumption of the Intelligent Design folks, isn’t it?). it follows that things generated by beings who are a part of creation would have compounded imperfection.

      my theory is that when beings from Universe 1.0 create SimUniverse Online, the simulation fundamentally cannot be as seamless as the original, and beings from U1 who are inserted into SUO will eventually notice, since they are by nature more perfect. Maybe it’ll take advanced particle research, or maybe all of a sudden your cat will bluescreen one day. My gut, for no particular reason, says that the proof will present obviously, spontaneously, and self-evidently… but don’t ask me to argue why 😉 probably too much reading of the Christian Revelation mythos.

      it’s the ultimate cataclysmic event, isn’t it? the universe seems seamless and perfect until it isn’t. If you say that there’s no reason to question a perfect sim, and i say that discovery of the sim is a foregone conclusion… *laughs* i guess we haven’t actually gotten anywhere except to spread the message of “Relax!”

  4. I can’t help myself…

    Lo! And the Word of the Old Aeon was “Don’t Panic!”, which was torn with the birthcry of the New Aeon, and the word “Relax!” echoed through the universe.

    I also read too much revelatory occult mythos 😉

  5. It’s been a long time since I read Plato and Descartes, but going back farther than that, I remember a discussion with my father. We were discussing the development of modern astronomy. The sense of the discussion was that determining ultimate reality is very difficult. If we’re living in a matrix, it’s our reality, and we can’t determine otherwise because we have no clues, no data.

    You can’t prove something for which there is no data, no yardstick, and no way of developing either. Descartes’ cogito ergo sum may have been brilliant, but Locke’s analysis of the basis of knowledge was more useful.

  6. someone inside the system can never prove it one way or another

    and I’m ok with that 🙂

    I choose to believe there isn’t an elephant in the room when I close my eyes, and I choose to believe we aren’t in a matrix or the dream of an evil demon. If it turns out we are, eh, I did the best I could w/ the information at hand. Or, the demon dreams I did, either way 🙂

  7. Oh, deliver me from amateur philosophers…

    Ok bitches, here’s how it is, let’s start simple.

    I think the way you answer this begins with what you make of the question itself, and what grounds someone has to pose it and reject the answers you might give. One thing I would say is that the question is a set-up from the start; someone is asking you to stipulate a system in which you can’t tell fake from real and then saying that you can’t tell fake from real. Well, duh. But on the other hand, the possibility that we are radically deceived is not completely outlandish. I’ve taught more classes than I can recall where I stood at the front of a room and tried to convince my students that they might be merely dreaming that they are sitting in a classroom, and as my reward for all this, I now have dreams in which I’m teaching philosophy classes. Karma’s sneaky that way. So we have to concede that radical deception is at least a remote possibility for us. The question is, what should we do about it?

    Let’s call those who assert the possibility that we are brains in vats, connected to supercomputers that stimulate the various nerve endings leading to various parts of our brains so that we feel like we’re experiencing a normal life, ‘skeptics’. To motivate this, you have to begin by making explicit an assumption that the skeptics trade on: knowledge requires certainty. That is, you might have beliefs of which you are kind of convinced, or even some that you have really good reason to believe, but no belief counts as knowledge unless you have absolute, perfect certainty without even the most remote doubt or possibility of being wrong. The brain-in-a-vat case is there to tease out that intuition in you; you normally think of yourself as knowing ordinary things like “there’s a keyboard in front of me right now,” but you have to admit you can’t totally rule out the brain-in-a-vat possibility, so you’re not certain and therefore you don’t know.

    You can think of responses to this falling into two large families. There are some who will agree that knowledge requires certainty, and therefore our job is to find some things that are certain that we can claim to know. Call this position “infallibilism.” On the other hand, there are those who would say that the skeptic is imposing an unreasonably high standard and we can claim to know a great many things; the challenge from here is just to articulate a reasonable standard or ways of fixing a standard that will tell us what counts as knowledge. Call this position “fallibilism.” The fallibilist is not saying that any old belief counts as knowledge, and the standard they suggest may still be quite high, but it will leave room for knowledge in cases in which there is at least some remote possibility that we are mistaken. So, normal cases in which I am not a brain in a vat and all my other beliefs and experiences tell me so are probably cases in which I know most common sense things. There are long, hard fights about whether one should be a fallibilist or an infallibilist. The infallibilist has a certain kind of intuition on their side. As soon as skeptics bring up the possibility of error, many people immediately feel a sense that they don’t know what they thought they did. This is the high standard of certainty coming to mind and playing a role for us, the skeptic will say. The fallibilist has a certain common sense intuition on their side as well, though. You negotiate evidence and possible sources of error all the time, and you intuitively come to a point of satisfaction in just about every case where you’re not talking to a skeptic. This suggests that skepticism itself is introducing something foreign to our thinking about knowledge. Infallibilism is a theory with a long history and a lot of deep problems. Fallibilism is much more popular today, but it has a host of problems, as well. I won’t try to tell you which one to buy into here, though I’ll admit that I’m a fallibilist. For now, let’s just talk about the different sub-families of the two.

    • Infallibilism
      The Granddaddy of all infallibilists is definitely René Descartes. He was writing on the cusp of a resurgence of skepticism in Europe, driven by a growing dissatisfaction with Catholic doctrinal authority, and he thought the only way to settle the question once and for all was to take on the toughest skeptics and respond with a model for knowledge that would make us absolutely certain about at least some of our beliefs. A mathematician by training (as in: Cartesian coordinates), he took Euclid as his hero and wanted the theory to look like Euclidean geometry. So he figured you should start with things that were absolutely certain (like axioms) and then deduce things that followed from those foundations with absolute logical necessity. That way, you start with certainty, and every move away from the foundation guarantees that that certainty carries over to other beliefs. You just need something to get started with, and this is where you get Descartes giving modern philosophy’s best-known quote, “I think, therefore I am.” Most people who mouth that off take it as a sort of pronouncement about the fact that you think, but in Descartes’s view it plays that foundational, axiom-like role. Whatever else might be true or however I might be deceived, I know that I have thoughts and therefore I know that I exist. I don’t yet know what kind of thing I am, of course. I could be a person with a body as common sense suggests, or I could be a brain in a vat, or I could be some sort of disembodied soul in a void, but whatever *I* am, *I* exist. So there’s something I can be certain of, an infallibilist would say. (There are some fallibilists who think knowledge has this kind of foundation/resting-on-the-foundation structure, but most infallibilists do accept something like it.)

      The tricky bit is, the kinds of things I can say with that sort of certainty are pretty limited, so if you want to get back to knowing ordinary everyday things, you’re going to have to build on them. Trouble is, that’s where the skeptic has you pinned pretty well. Even if I am certain that I exist, all the experiences that I’m having or have had are just the sort of things that could happen to a brain in a vat at the hands of a mad scientist, so I can’t trust any of that. The sorts of things I do in logic or mathematics don’t depend on experience in those ways, but they’re things on which I can make mistakes and not realize it. So even with that base of certainty, you can’t go too far with it. I might very well be a brain in a vat, and I just can’t be certain that I’m not.

      Some people are content to swirl around in the circle of their own thoughts and think that there’s enough there to work with. Some philosophers have adopted a position called “phenomenalism,” which states that our common-sense beliefs are actually a kind of code or shorthand for much larger sets of statements about the qualitative aspects of our experiences. So statements and beliefs about tables are shorthand for larger sets of statements about table-shaped and table-colored and table-textured and, I suppose, table-flavored sense data. The advantage there is that even if I don’t know that there is a table in front of me, I can say that the current state of my visual field includes some table-shaped regions of such and such a color. Even if I’m a brain in a vat, I can be certain that those features of the experiences I have are actually there. So if I can just figure out some way of casting all my beliefs as really, deep down, being about that stuff that I’m so certain of, I’ll have most of my beliefs back. If this sounds weird, rest assured it has fallen pretty squarely out of favor in the last fifty years or so, thanks in no small part to one of my philosophical heroes, Wilfrid Sellars. Not too many folks like this around, so moving on to fallibilism…

      • Fallibilism
        Fallibilism is a lot more heterogeneous than infallibilism. Infallibilism is kinda like Catholicism, with one big monolithic structure, while fallibilism is like Protestantism with a million little sects. What they share is a sense that we should reject the skeptics’ challenge to our knowledge out of hand because the question doesn’t get any traction. You can do this a bunch of different ways:

        1. Contextualists argue that the word “know” (and all the other ways you might put this) can be applied with a variety of different standards, both very high and relatively moderate. What the skeptic does is compel us to accept a higher standard of evidence and justification for our knowledge, but in cases where skepticism isn’t brought up or isn’t appropriate, we naturally apply more lenient standards that we can normally meet. In other words, we talk about having knowledge a lot, and the skeptic comes along and says, “Yeah, but seriously, do we?” and we kind of concede the point. A lot of people don’t like this, because it sounds too wishy-washy, and I’ve argued that it presumes we could just pretend those skeptical objections never happened. It sort of rewards you for forgetting or not paying attention to those objections, and that seems suspect.

        2. Some folks prefer the view that our beliefs are knowledge any time they are both true and those beliefs arise through a reliable belief-forming process. So, for instance, primary color recognition is a pretty freakin’ reliable process. How many times have you looked at something red in normal lighting and formed the belief that it was red? How many times have you looked at something red in normal lighting and gotten it wrong? Probably not that many of the latter sort. The tricky part is describing just which belief-forming processes are the ones that we should focus on. As you might imagine, a lot of people who subscribe to this approach see the way of the future as a kind of philosophy-psychology hybrid, with better empirical research giving us better ideas of what to call knowledge. Critics object that this tries to cast all knowledge as a kind of cognitive reflex, and that an important part of what we mean by knowledge is its rational component – it stands up not just as an immediate response, but also upon more careful reflection.

        3. Some folks adopt a view called pragmatism. It’s not as simple as the ordinary sense of the word, which would suggest that you just take a moderate approach. Instead, it’s a view that says any analysis we give of any philosophical topic has to translate into terms that inform actions we can take. The bigger reasons for this stretch beyond the present topic, but part of how it applies is that pragmatists think truth and knowledge are matters of continuing success in dealing with the world. As long as something continues to function well, it has passed all the tests it needs to pass, and any demand for certainty is just a demand for something that doesn’t add any practical advantage to what we can already do. Some people see this as a copout, but pragmatists would ask what use there is in having a concept and a standard (certainty) that we clearly can’t achieve and that doesn’t do anything for us.

        So there are a bunch of answers you can give. To be quite honest, I don’t think there’s anything you can say that absolutely, completely rules out the possibility that we are just brains in vats, or that I am a brain in a vat or whatever. But I also tend to think that there isn’t really a need to answer that kind of objection. As I said, my sympathies are with the fallibilists and, more specifically, the pragmatists. Nobody can tell you a way out of a situation in which you can’t trust any single piece of evidence available to you, but I would think knowledge is about what we can do with the evidence available to us and what we could accomplish with what we have. But that’s just my opinion. And really, what the fuck do I know?

        • You’ve just summed up my entire philosophical argument, as well as provided much more information than I could have, in a much more articulate structure than I would have. Fascinating stuff; thank you.

          Really all I can add from a philosophical standpoint is that it seems to me that the infallibilist argument is paralyzed into impotence by its uncertainty, and therefore in the absence of some further breakthrough is of no particular value.

          As a pragmatist myself, I find that Occam’s Razor is sufficient as a general guiding principle, provided you don’t remain entirely closed to the possibility of new information requiring a revision of earlier assumptions.

          There’s also an engineering approach to the question, but I’ll post that separately.

  8. Didn’t we discuss this in person last weekend, and didn’t you mention something about it being mathematically impossible to completely emulate a complex system with 100% accuracy? I could very well be mis-recalling our conversation.

    At any rate, since it is impossible to prove a negative, we can’t prove that we aren’t brains in vats. However, it should in theory be at least possible to prove that we are, if indeed that’s the case. If we aren’t brains in vats, we can never definitively establish this fact. Deal.

    Let’s throw philosophy to the side for a minute and approach this as an engineering problem. Here you are inside a complete simulation, attempting to determine its true nature. The only way to do this would be to force some manner of interaction with the real world. I see three ways of doing this:

    1) You can sit and wait for something outside the box to make a change to the environment that absolutely goes against the internal rules of the simulation, proving the existence of something external. This strongly resembles doing nothing, and therefore isn’t terribly interesting.

    2) You can attempt to discover an inherent inconsistency in the simulation: some property of the virtual environment that breaks the continuity of its own rules. This is a little more interesting. If it were as simple a matter as “every time I bang these two rocks together everyone around me sees ominous white glyphs against a blue background” it would be trivial, but that would indicate a very poor virtual environment indeed. Any system engineer worth his salt should be able to create a more robust system than that.

    No, the flaws would likely be more subtle. For example, if you found that the laws which govern matter on a quantum level and the laws which govern matter on a macro scale were mutually incompatible, this would lend strong evidence to the unreality of our existence. 😉

    3) You can try to actively force the simulation to be influenced by factors outside of it. This might not be as difficult as it sounds, particularly since the simulator itself is outside of the simulation. A simulator must by necessity be more complex than the simulation. Attempting to simulate the entire known universe (or at least the portions of it that we are perceiving at any given time) must be quite a chore for the simulator hardware. If we can sufficiently tax that hardware even further then we can effectively interact with the limitations of the hardware by producing measurable effects within the simulation.

    In the simulations/emulations with which we are already familiar, pushing the hardware limits can produce noticeable effects within the simulation: dropped frames, artifacting, registers being counted faster than they’re updated or vice versa, etc. We can try this in our current environment and see if we get any odd effects. For example, if we were in a simulation, we might expect to find that when we accelerate an object to near the limits of the simulation, the “dropped frames” effect might manifest itself in the form of the simulator being unable to update the object’s internal reference at the normal rate, such that time would appear to pass more slowly for the object than for a stationary observer. The simulator might not be able to update its outside visual reference fast enough either, causing the object to appear to contract from the point of view of a stationary observer. Furthermore, in each clock cycle, the registers that record the location of the object’s mass might be updated for the new location before they’re cleared for the old location. Due to bits of the mass being counted multiple times, the closer the object got to the simulation’s hard velocity limit, the more mass it would appear to have to an outside observer.

    This would probably be a consistent, measurable phenomenon. The equation for this mass multi-counting (let’s call it “mass dilation”) could be something along the lines of this (a small numeric sketch follows below):

    observed_mass = actual_mass / sqrt(1 - (velocity^2 / c^2)), where “c” is whatever the hard velocity limit of your simulation is.

    Hypothetically, of course. 😉
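
    To make that factor concrete, here is a minimal Python sketch. It is purely illustrative: the constant C stands in for the hypothetical simulation’s hard velocity limit, and the function names and sample values are my own, not anything from the discussion above.

    ```python
    import math

    C = 299_792_458.0  # stand-in for the simulation's hard velocity limit (m/s)

    def lorentz_factor(velocity: float, c: float = C) -> float:
        """1 / sqrt(1 - v^2/c^2); grows without bound as v approaches the limit."""
        if not 0 <= velocity < c:
            raise ValueError("velocity must be in [0, c)")
        return 1.0 / math.sqrt(1.0 - (velocity / c) ** 2)

    def observed_mass(actual_mass: float, velocity: float) -> float:
        """The 'mass dilation' above: actual_mass / sqrt(1 - v^2/c^2)."""
        return actual_mass * lorentz_factor(velocity)

    if __name__ == "__main__":
        # Observed mass per kilogram of rest mass at a few fractions of the limit.
        for fraction in (0.1, 0.5, 0.9, 0.99, 0.999):
            v = fraction * C
            print(f"v = {fraction:5.3f} c  ->  {observed_mass(1.0, v):8.3f} kg/kg")
    ```

    Run as written, the printed factor climbs steeply as the velocity nears the limit, which is the same multi-counting runaway described in the paragraph above.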

  9. We would only be able to tell if the simulation was in some way flawed. And even then, we would have to be aware of that flaw. For example, what if the flaw was that ducking into a particular phone booth teleported you elsewhere… You would have to be aware that it was an error in the simulation, and not real. Otherwise, it would seem like you had just stumbled upon some top secret device that is “real”.

  10. Not having read all of the comments above, my thoughts are:

    A simulation isn’t perfect, by definition. If it were perfect, it would *be* the real thing.

    So if you could find and demonstrate some contradictions in the simulation, you could show that *something* weird is up, from within the simulation.

    Although on second thought, if you found some apparent contradictions, it might be hard to show that you didn’t just make a mistake. And even if people double-checked you until everyone was 100% sure that, yes, here’s a weird anomaly, then so long as you have no way of getting outside the simulation, it’s hard to be sure that the world doesn’t just happen to be that way.

  11. There’s this delightful movie called The Thirteenth Floor that unfortunately not a lot of people have seen because it came out shortly after The Matrix; it demonstrates the if-it’s-a-simulation-it-has-a-weak-point stance. I recommend it to anyone and everyone involved in this topic.

    There’s also this delightful mathematical principle, Gödel’s Second Incompleteness Theorem, which says (roughly) that no consistent formal system powerful enough to do arithmetic can prove its own consistency, so it’s hopeless to determine the reality factor of one’s existence without external prodding. Which, incidentally, is exactly what happens (unintentionally) in The Thirteenth Floor. It’s a clever little flick. – ZM
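
    For reference, a standard informal statement of that theorem runs roughly as follows; the wording here is my paraphrase, not something from the thread.

    ```latex
    % Second incompleteness theorem, stated informally
    If $T$ is a consistent, recursively axiomatizable theory that interprets
    enough arithmetic (Peano Arithmetic, say), then
    \[
      T \nvdash \mathrm{Con}(T),
    \]
    i.e.\ $T$ cannot prove the arithmetized statement of its own consistency.
    ```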

  12. Try

    My thoughts on this seem to gravitate to the phenomenon of the “sweet spot in time”. This is the brief flash of total rightness one gets when attempting to accomplish something difficult. The classic example is hitting a baseball: that moment just before and as the bat hits the ball, when you know that “it’s outta here”.

    There should be no way to know that, no way to reconcile and account for all of the uncertainties inherent in physics. Add to that the fact that it doesn’t happen every time you accomplish the difficult feat; it is an occasional occurrence at best. Something important is going on here.

    So this is a prime candidate for the “glitch in the simulation” which proves the existence of the simulation… BUT, the creators of this simulation are far too close to infallible to allow something like this to screw up the simulation’s convincingness. At the very least, if unable to clear up this occasional break in reality, they would institute a workaround: make the “sweet spot” happen every single time you knock the ball over the wall, thereby making it unremarkable. The fact that they haven’t done so means they don’t exist. Without the existence of the creators, the simulation cannot exist, therefore… it’s all real, baby!

  13. I think the question you have to ask yourself is not whether we are in the matrix, but why we are in the matrix…

    It might be of use as a power source,
    but more likely because the ideas we have are somehow used or needed.

    For example, the race that is using us: perhaps they have no art, no ability to create it, only a love of viewing it. Could they not be using us to make for them what they cannot make…

    As far as rules go, and wanting to break out of the matrix by changing or bending the rules of the known ordered universe…

    Last I looked, the name for that was Magic.

    firelord
