Why we believe what we believe, and why that makes us gullible

Just how deep do you believe?
Will you bite the hand that feeds?
Will you chew until it bleeds?
Can you get up off your knees?
Are you brave enough to see?
Do you want to change it?

What is the purpose of the human brain? What function does it serve? Be careful; this is a trick question!

If you say “The brain is an organ of thought” or “The brain is an instrument of knowledge” or “The brain is the way we understand the world,” that’s the wrong answer. The correct answer is that the brain is an organ of survival. We have these big brains because they enabled our ancestors to survive; in that sense, they are no different from claws or fur or fangs.

And like all organs of survival, the brain was shaped by natural selection, sculpted by evolutionary pressures favoring the traits that helped our ancestors survive. The big brains we have now were molded to one purpose: helping small bands of hunter-gatherers survive.


Back in the day, when we rarely lived longer than 20 or 25 years and starvation battled with predation by large carnivores for the number one spot in “things that killed human beings,” our brains gave us a competitive advantage. They did this in part by acting as engines of belief, allowing us to form models of the world and to create beliefs that gave us an advantage.

For example, an early human who observed that his prey escaped when he approached from upwind, but was more easily killed when he approached from downwind, formed a belief: “Staying downwind of the prey makes it less likely that the prey will escape.”

Of course, other animals know these things instinctively. But the advantage of our big monkey brains is that we do not have to rely on instinct; we can form beliefs on the fly, which means we can function in environments our instincts are not prepared to deal with. The brain as an organ of survival allows us to make observations and draw beliefs from those observations, and these beliefs give us a competitive advantage.


These beliefs can be immediate and concrete, such as “If I stick my hand in the fire, it will hurt.” They can make predictions about the future, such as “The sun will rise tomorrow” or “If the days grow longer and the weather grows colder, then winter is coming, and food is about to become less plentiful.” A belief can be negative, such as “If I leap from the top of this tree, I will not be able to fly.”

Having a brain optimized for forming beliefs is important if forming beliefs is your survival schtick. If you think of the brain as a belief engine, which can either believe something or disbelieve it, and if you think of a particular belief as being true or false, it is easy to construct a game-theory matrix describing all the possibilities, with two success modes and two failure modes:
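
                     The belief is true            The belief is false
    Believe it       success (truth believed)      failure (falsehood believed)
    Disbelieve it    failure (truth rejected)      success (falsehood rejected)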

Ideally, our brains lead us to believe things that are true, such as “A large leopard is a dangerous adversary,” and to disbelieve things that are not true, such as “I can eat rocks.” But there are two failure conditions as well: rejecting beliefs that are true, and accepting beliefs that are not.


The failure conditions have survival implications. Believing untrue things and not believing true things can both lead to disaster.

Of the two, though, believing untrue things will, in a small group of hunter-gatherers, usually cause fewer problems than not believing true things. Believing that dancing in circles three times and carrying a magic stone around with you will increase the chances of a successful hunt doesn’t really hurt anything; not believing that staying downwind from your prey is important has a significant survival penalty attached to it.

There’s a strong survival imperative, in other words, to prefer failure by believing something untrue over failure by not believing something that is true. Believing is less expensive than not believing. If a primitive hunter-gatherer eats an unfamiliar food, then becomes sick, it might not be the food that made him sick–but if he believes the food made him sick, and he’s wrong, the consequences are not too great, whereas if he does not believe the food made him sick, and he’s wrong, the consequences can be deadly. The guy who ate some food, got sick, and believed the food made him sick is the guy who survived; today, his descendants give their kids a measles vaccination, and when their kids are later coincidentally diagnosed with autism, believe that the vaccination caused the autism.
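
A toy expected-cost calculation makes this asymmetry concrete; the numbers here are illustrative assumptions, not anything measured. Suppose wrongly avoiding a safe food costs a little lost foraging, while wrongly eating a poisonous food again is sometimes fatal:

    # A minimal sketch of the cost asymmetry; every number is assumed for illustration.
    # Scenario: a forager got sick after eating an unfamiliar food, and must decide
    # whether to adopt the belief "the food made me sick."

    p_true = 0.3           # assumed probability the belief is actually true
    cost_false_alarm = 1   # believe and be wrong: skip one perfectly safe food
    cost_miss = 100        # disbelieve and be wrong: eat the poison again

    # Expected cost of each default policy:
    cost_of_believing = (1 - p_true) * cost_false_alarm   # pay only on false alarms
    cost_of_disbelieving = p_true * cost_miss             # pay only on misses

    print(cost_of_believing)      # 0.7
    print(cost_of_disbelieving)   # 30.0

Even though this belief is probably false, believing by default is the cheaper policy whenever p_true times cost_miss exceeds (1 - p_true) times cost_false_alarm. Lopsided costs make credulity the winning strategy, which is exactly the bias described above.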

From a survival standpoint, the consequences of not believing something true are worse than the consequences of believing something that is not true. Natural selection, therefore, tends to select in favor of people whose default state is to believe something rather than in favor of people whose default state is to disbelieve something.

And to confound matters further, humans are social animals. In our earliest days, when our social groups tended to number fifty or a hundred people and leopards were a serious and ongoing threat, to live alone was a death sentence. We depended on the support of others to survive.

But that support had a price. Groups, like individuals, form beliefs. To reject the beliefs of your group was to risk ostracism and death. People who questioned and challenged the beliefs of their tribe often did not survive to pass on their genes to future generations; the people most likely to pass along their genes were the ones who learned to believe what the group believed, even when those beliefs were contradicted by clear and available evidence.

And those who were adept at manipulating the belief engines of others–shamans, and tribal rulers who convinced others of their divine right to rule–tended to be disproportionately successful at mating and to control a disproportionate share of resources, which meant they passed on their genes most successfully.


The greatest invention of the human mind is not fire, or agriculture, or iron, or the steam engine, or even the splitting of the atom. From the perspective of understanding the physical world, the greatest invention of the human mind is the scientific method–the systematic, skeptical approach to claims about the way the world works.

When a scientist has an idea, he does not believe it, and he does not seek to prove it. Instead, he approaches it skeptically, and he seeks to disprove it. The more the idea resists increasingly sophisticated and vigorous attempts to disprove it, the more faith he begins to put in it. This is why any idea that is not falsifiable is not science.

A corollary of this idea is the notion that physical reality behaves the same way everywhere, for everyone. If a brick falls when it is dropped in Kansas, it also falls when it is dropped in Salt Lake City–and, importantly, it falls no matter who drops it, whether the person who drops it believes that it will fall or not. The physical world does not change itself to conform to human wishes and expectations. A claim about some process that must be believed in order to be seen, such as ESP, is not science.


But skepticism is not innate. It is learned. The human brain has been shaped by natural selection not to be skeptical. It has been shaped by evolutionary pressure into a belief engine that believes things more easily than it disbelieves things. For our ancestors, the penalty for skepticism was very high; those early hominids for whom skepticism came naturally did not live long enough to pass on their genes to us. Our brains evolved to be gullible, not skeptical.


Today, we live in a cognitive and physical environment very different from that of our ancestors. But the machinery of natural selection is slow.

In the modern world, the same four states of our belief engines still apply. We are still predisposed to believe things rather than disbelieve them; and we can still believe things that are true, disbelieve things that are true, believe things that aren’t true, or disbelieve things that aren’t true:

Believing things that are true
  • Eating uncooked pork can make you sick
  • If you do not feed your pet dog, your dog will become unhappy, and eventually will die
  • Provoking a large predator may have serious consequences
  • Falling from a great height may have serious consequences
  • A speeding car cannot stop instantly
Believing things that are not true
  • A pill can make your penis grow bigger
  • There is a sea monster living in a landlocked lake in Scotland
  • Atlantis was a lost continent possessed of fabulous technology
  • Space aliens abduct people and perform experiments on them
  • Republicans favor small government; Democrats favor big government
  • There is an invisible man living in the sky who will spank you if you have sex in the wrong position
Not believing things that are true
  • The Holocaust never happened
  • Vaccination does not protect from disease
  • NASA never went to the moon
  • Evolutionary processes did not create the variety of life we can observe on this planet
  • Viruses and bacteria do not cause disease
  • The world is not more than six thousand years old
  • Americans are not obligated to pay income tax
Not believing things that are untrue
  • The world is not flat
  • You cannot fly no matter how fast you flap your arms
  • There is no jolly fat man at the North Pole who hands out gifts
  • Money does not grow on trees
  • Forwarding an email to all your friends will not get Bill Gates to give you money
  • Solar eclipses are not caused by gigantic marauding dragons swallowing the sun

What does this mean in practical terms? Simple. It means that your brain has been hard-wired over hundreds of thousands of years of natural selection to make you credulous. Look at the brain as an instrument of survival, look at natural selection creating pressures to prefer the failure mode of believing that which isn’t true over the failure mode of not believing that which is true, and you end up with people hard-wired from the ground up to be gullible.

Your brain is a tool of survival that works by acting as an engine for creating beliefs. When you form a belief, you get a little squirt of pleasure that lights up the reward circuit of your brain. You’re emotionally rewarded every time you believe something.

At the same time, skepticism and rational, analytical thought do not come naturally. They are not what your brain was optimized for, and so they are skills that must be learned, not innate faculties. In fact, they can feel unnatural and uncomfortable. Your brain rewards you for accepting beliefs, not for challenging them.


There is good news, however. When you introduce sapience into the mix, things change. Biology is not destiny. Your brain is optimized to make you gullible, but you do not need to be. You can train yourself to recognize that little squirt of pleasure you get when you believe something for what it is–a biological holdover from a time when adopting beliefs quickly and without skepticism carried a survival advantage. You can train yourself to be skeptical, even though skepticism does not come naturally.

And the rewards for doing so are great. In a modern world, where people want you to believe that they will transfer THE SUM OF $25,000,000 (TWENTY-FIVE MILLION US$) into your bank account from Nigeria if you give them your bank account information, where emails tell you that you need to update your credit card information or PayPal will shut you down, where people tell you that viruses and bacteria don’t cause disease and that if you just order magic “balancing powder” ($360 for a 6-month supply) from their Web site you’ll never get sick, credulity is a survival disadvantage, and skepticism an advantage.

But it doesn’t come naturally. You have to work at it.

Comments on “Why we believe what we believe, and why that makes us gullible”

  1. Could you do me a favor and come down to my work and bitch-slap a few folks with your well-spoken-ness?

    They’re tired of listening to me. For some reason.

    Of course, I never brought graphics . . . .

  2. I so want to print this out and give it to a co-worker (she who has asked me several times if it isn’t possible that UK Lottery email might just be true ::headdesk::)

  3. Post Hoc ergo Propter Hoc

    It happened after, therefore, it happened because….

    This is a non sequitur, an invalid argument. The problem is that it has a strong survival value. If I eat those little green round thingies from that tree, and two hours later my stomach hurts, maybe I shouldn’t eat those little green round thingies!

    Nice essay.

  4. This is quite possibly the cleverest essay I’ve yet seen you write, Franklin, and that’s saying quite a lot indeed. I’d never considered this perspective before, and it’s capable of explaining SO VERY much about the way society works. I feel this runs truly and frighteningly deep. – ZM

  5. I really enjoyed your essay. Thanks for sharing it. The part about scientific theories brought up something I had been thinking about recently …

    “When a scientist has an idea, he does not believe it, and he does not seek to prove it. Instead, he approaches it skeptically, and he seeks to disprove it.”

    Something I had never realized before is that we are taught “Scientific Facts” in school. Things we are supposed to believe are true because scientists have proven them. But there are no “Scientific Facts” (even though we have heard of things referred to in that manner) … They have merely proven NOT that they are CORRECT in their theories, but that they are most likely the LEAST WRONG. Made me think about science in a whole new way. I had always thought of it as more concrete, but it isn’t at all. The realization that science is far less rigid than I thought greatly appeals to me lately.

  6. Interesting essay, but…

    …not without its own blind spots. For example, under “Believing Things That Are Not True,” I didn’t see, “There is and should be no difference between an illegal alien and someone who immigrated legally to the U.S.” Or, “Communism could really work if only the right people tried it.”

    Conversely, under “Not Believing Things That Are True,” I didn’t see, “The U.S. Constitution does not guarantee an individual’s right to bear arms, but merely provides for citizen-run militia.”

    “Reality has a liberal bias” is a nice sound bite, but hardly the truth 100% of the time.

  7. *applauds*

    When you form a belief, you get a little squirt of pleasure that lights up the reward circuit of your brain. You’re emotionally rewarded every time you believe something.

    Interesting. Are there studies supporting this?

  8. I’ve shown this post to my partner, and he and I have been going ’round and ’round about a few things you’ve said… so I have some questions for you.

    1. How do you define “truth”? Is it “things that exist” versus “things that do not exist”?

    1a. If you define it that way, how do you define “exist”?

    1b. If you define it some other way, how are you defining “truth”?

    The reason I ask this is because “truth” needs to be defined so that the axioms you’re putting forward can be tested and falsified. Otherwise, your discussion becomes a philosophical thing, not a scientific one.

    2. You seem to have come to two conclusions here: first, that each “failure state” will lead to non-survival, and second, that one “failure state” (disbelieving true things) has a higher risk of non-survival than the other. Can you defend that proposition with falsification and corroboration?

    3. You seem to be conflating “survival of the individual by himself” with “survival of the individual as a member of a group.” Your explanation seems to say that groups which disbelieve true things are also more likely to die off than groups which believe untrue things – that the risk for the group is the same as the risk for the individual. I don’t see how your logic progresses here. Belief in untrue things is unarguably less dangerous for the individual, when he’s trying to remain a member of a group that also believes those untrue things, than disbelief in true things is, but belief in untrue things for a group may be just as dangerous as disbelief in true things is for that group.

    4. This example leads to my next problem with your argument. If you base the model on “True/Untrue,” any disbelief in a true statement can be reformatted to a belief in an untrue statement by a simple semantics shift. For example:

    – Bob the Hunter-Gatherer thinks: “The leopards are upwind of me.” They are downwind. Bob is believing an untrue thing.

    Or,

    – Bob the Hunter-Gatherer thinks: “The leopards are not downwind of me.” They are downwind. Bob is disbelieving a true thing.

    The result is the same – Bob is leopard lunch either way. But the “failure conditions” are opposite. Depending on how you phrase it, you can put the failure condition in either box.

    However, if we change the model to “Things that exist” (instead of “Things that are true”) and “Things that do not exist” (instead of “Things that are untrue”) it begins to work better:

    – Bob the Hunter-Gatherer thinks: “The leopards are upwind of me.” They are downwind, so the condition he is believing in does not exist. He is believing something that does not exist.

    Or,

    – Bob the Hunter-Gatherer thinks: “The leopards are not downwind of me.” But they are downwind, so the condition he is believing in still does not exist. He is still believing something that does not exist. The semantic shift does not change the outcome – and places the belief in the same category (belief in things that do not exist), regardless of how it’s phrased.

    I am very interested in your thoughts on these questions. I, too, am interested in why gullibility is so prevalent compared to skepticism in human beings, and if your model can be made to work, it would be a great jumping-off point for research on the topic.

    • I’ve shown this post to my partner, and he and I have been going ’round and ’round about a few things you’ve said… so I have some questions for you.

      1. How do you define “truth”? Is it “things that exist” versus “things that do not exist”?

      For the purpose of this essay, something is “true” if it accurately models the physical world and/or makes predictions about the physical world that are accurate. Examples of “true” statements would be “a leopard can and will attack and eat a human,” “standing upwind of prey animals may alert them to one’s presence and therefore spoil a hunt,” “rocks are inedible,” “fire is dangerous because it can cause significant injury or death to a human,” and so on. Examples of false statements include “a person can fly by flapping his arms,” “drinking soy milk will turn heterosexual men into homosexual men,” and “the Holocaust never happened.”

      I personally put statements like “believing in Jesus will grant immortality to the believer” in the latter category, but that’s a whole ‘nother post altogether… 🙂

      2. You seem to have come to two conclusions here: first, that each “failure state” will lead to non-survival, and second, that one “failure state” (disbelieving true things) has a higher risk of non-survival than the other. Can you defend that proposition with falsification and corroboration?

      I think that statement can be defended rigorously, though I won’t claim to have done any studies to defend it. Such studies would, I think, be relatively simple to put together, however.

      3. You seem to be conflating “survival of the individual by himself” with “survival of the individual as a member of a group.” Your explanation seems to say that groups which disbelieve true things are also more likely to die off than groups which believe untrue things – that the risk for the group is the same as the risk for the individual. I don’t see how your logic progresses here.

      Ah. Looking back on what I wrote, I can see how you reached that conclusion, but that’s not what I’m saying at all.

      What I’m saying is that survival of the individual by himself presents pressure to accept rather than reject beliefs, and that social phenomena can increase that pressure. I don’t believe that groups that disbelieve true things are also more likely to die off than groups which believe untrue things. What I believe is that survival of the individual in a hunter-gatherer society depends to a large degree on membership in a group, and part of the cost of membership in a group is accepting the group’s beliefs. An individual who rejects the group’s beliefs is subject to sanction, which historically can mean anything from expulsion from the group to death by stoning. An individual who wishes to remain in a group is best served if he accepts the beliefs of the group.

      4. This example leads to my next problem with your argument. If you base the model on “True/Untrue,” any disbelief in a true statement can be reformatted to a belief in an untrue statement by a simple semantics shift. For example: […]

      What I’m primarily concerned about is the formation of new beliefs about the processes of the physical world. Our brains attempt to make sense of the physical world by forming models of its behavior–beliefs–which can be used to categorize and make predictions about the physical world.

      When we are presented with a set of statements about the physical world, such as “drinking soy milk can turn straight men into gay men” or “staying downwind of a leopard reduces the odds of being leopard lunch,” I think that adaptive pressures have favored the formation of a brain that accepts these statements over a brain that tends to be critical of these statements. It’s easier for us to accept statements and ideas about the physical world than it is for us to analyze them critically. As a result, we as a species have adopted uncritically vast bodies of statements and models concerning the physical world, including belief systems ranging from astrology to psychic healing, and continue to hold them despite overwhelming evidence that they’re bunk.

      • For 1 through 3, you’ve answered to my and my partner’s satisfaction.

        I still have questions about 4, though:

        Your model still makes it possible to say “X is true” when it’s not or “Not-X is not true” when it is, and have the belief fall into two different failure categories while still saying the same thing. What I’m trying to get at is that your model claims a bias towards “belief in not-true things” over “disbelief in true things,” but if you can rephrase a belief in a not-true thing to be a disbelief in a true thing, how can you support that bias? (“Bias” not meant as “you’re biased” but as “your model claims a tendency towards A over a tendency towards B.”)

        For example: The “true” statement is: “The leopards are downwind of me.”

        The “belief in not-true” can be phrased as: “The leopards are upwind of me.”

        The “disbelief in true” can be phrased as “The leopards are not downwind of me.”

        The result is the same.

        This semantic problem is what I’m trying to work out, so that all your claims are supported. Right now the one I’m falling down on is the preference for belief in untrue over non-belief in true. To me they look equally likely, depending on how you phrase the statement.

        What I’m hoping to see is that statements of belief in untrue are, for you, NOT semantically interchangeable with statements of disbelief in true.
