Some thoughts on computer security and credulity

So recently Business Week magazine ran an article about keylogger software being used in espionage. Essentially, defense contractors are being tricked into infecting their computers with keylogger malware, sent in targeted emails that appear to come from the Pentagon and other governmental sources.

The thing I find interesting about this, and also about things like the Storm and Kraken worms, is that they don’t take advantage of security flaws or vulnerabilities. They don’t attack holes in a computer’s operating system or applications, and they don’t rely on technical exploits of programming errors. These attacks all rely on tricking the victim into deliberately, intentionally infecting himself.

For that reason, I don’t think there’s a technological solution. The solution to a human gullibility problem isn’t in better programming or more elaborate firewalls; it’s in user education. No matter how sophisticated and bulletproof a security system is, there’s no defense against a person who deliberately chooses to permit someone through it.

But when it comes to the Intertubes, folks don’t get that.


If we had a situation where a criminal walked into a bank and, without weapons or violence, tricked a security guard into opening the vault for him and handing him all the money inside, we would not say “Oh, we need to build bigger vaults with thicker doors and more complicated locks!” It’s obvious to anyone who thinks about it for a moment that a bigger door or thicker walls won’t prevent someone from tricking a gullible guard into unlocking the door.

Yet with computer malware, we tend to jump on technological solutions. Someone in China tricks an American defense contractor into deliberately installing a keylogger on his computer, and everyone says “We need tighter computer security and more computer defenses.” Which is as pointless and ineffectual as saying “we need thicker bank vault walls” when someone persuades the guard to intentionally, deliberately unlock the vault door and hand over the money.

What we need isn’t better computer security; better computer security will not and cannot address this kind of problem. What we need is less-gullible people.


A few weeks back, someone posted an ad on Craigslist saying that they were moving suddenly and they needed to get rid of everything in their house, including their horse. They said that the house would be unlocked and anyone who wanted to could come and take anything they liked. Hundreds of people showed up and ransacked the house, even taking light fixtures and plumbing fixtures.

Needless to say, the Craigslist ad was bogus. Some people had robbed the house earlier, then posted the ad to conceal the evidence of their robbery.

Of course, the police showed up, but what was most interesting was how indignant the folks who ransacked the house were. They were angry and upset that the police tried to stop them. Many of them waved printouts of the Craigslist ad around, as if it justified what they were doing. They genuinely, sincerely believed that the ad on Craigslist meant they were doing nothing wrong.

That’s the mentality a lot of folks–including folks who ought to know better, including defense contractors–have. They truly believe that if an email says it’s from someone they know and tells them to download and run the attached program, it must be OK to do. They sincerely think that if they see it in an email, it cannot possibly be false. And that gullibility makes them easy to dupe.


These are not idiots. If a person walked up to them on a street and said “I live at 423 Main Street but I have to move in a hurry, so go into that house and take anything you like,” they’d be like “Yeah, right.” If someone walked into their office and said “I’m from the Pentagon, take this CD and run the program that’s on it,” they’d never in a million years do it.

But because it’s on the Intertubes, somehow it gets past their bullshit filters, and they suspend their ordinary skepticism. And I think that’s really, really interesting.


One of my all-time favorite books is Why People Believe Weird Things: Pseudoscience, Superstition, and Other Confusions of Our Time, by Michael Shermer, who’s one of my personal heroes. I met him briefly at a science fiction convention last October, and he’s just as amazing in person as he is in print.

One of the things he talks about, and one of the things I’ve written about as well, is the idea of the brain as a “belief engine,” a tool for forming beliefs about the physical world. As a tool for survival, the brain works amazingly well, but survival pressures have tended to shape and mold it in such a way that its default state is to accept ideas uncritically rather than reject them. For our early hunter-gatherer ancestors, the consequences of accepting a false belief (“keeping this magic stone in my pocket will help me ward off evil spirits”) were generally less dire than the consequences of rejecting true beliefs (“a leopard is dangerous to me,” “keeping upwind of my prey will cause my prey to escape more often”), and so we have developed these amazing brains that find it much easier to accept than to reject ideas.

On top of that, our brains are so highly optimized for efficient and rapid pattern recognition that they can tend to see patterns even where none exist (“when I updated to OS X 10.4.11, my hard drive failed; the update was responsible for the failure”).


I wrote an essay about the belief engine a while back. I think it applies to things like Internet hoaxes and Trojan-horse malware in part because we are wired by selective adaptation to accept ideas uncritically; we are taught from a young age when that kind of uncritical acceptance is dangerous, but those lessons don’t extend to the Internet.

Everyone (well, almost everyone) learns from an early age not to trust strangers. So if a stranger stopped us on the street and said “I live in the house at the end of the block but I have to leave, so walk on in and take whatever you like,” there’s no way we’d believe him. But we aren’t taught to distrust the Internet.


To make matters worse, I think the Internet confuses people by messing with the signs we have been taught to read as marks of trustworthy people and institutions. We are taught to separate folks within our sphere of trust from folks outside of it, but we are not taught that those marks of trust don’t carry over to the Internet.

So, for example, most of us trust our mothers. If we receive an email that has Mom’s From: address on it and claims to be a greeting card, we’ll likely download the attachment and run it without a second thought, because we trust Mom. What we haven’t been taught is not to trust the From: address on an email. People don’t realize how easily it can be faked; the email is trusted because it bears the mark of being from a person inside our sphere of trust, but that mark itself is untrustworthy.
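To see how little that mark means, here’s a minimal sketch in Python (with made-up addresses) of how a sender fills in the From: header. It’s just a text field the sender types; plain SMTP does nothing to verify it, which is why it’s so easy to spoof unless the receiving system checks things like SPF or DKIM.

    from email.message import EmailMessage

    # The From: header is just text the sender writes in; plain SMTP does
    # nothing to verify it. (Addresses below are made up for illustration.)
    msg = EmailMessage()
    msg["From"] = "mom@example.com"      # claims to be from Mom -- no proof required
    msg["To"] = "you@example.com"
    msg["Subject"] = "I sent you a greeting card!"
    msg.set_content("Open the attached card to see it.")

    print(msg)  # the headers look exactly like a real message from Mom

Mail systems that check SPF, DKIM, and DMARC can catch some of this, but plenty of spoofed mail still arrives looking perfectly legitimate to the person reading it.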

Same deal for a defense contractor who receives an email that claims to be from his Pentagon contact. Because the email carries a mark of a person inside the sphere of trust, the email is accepted.

Phishing scams rely on that, too. We mostly trust our banks, and we are familiar with what our bank’s Web site looks like. So we associate things like the bank’s logo and the bank’s Web site layout, which are familiar and comforting, with that feeling of trust. We so strongly associate the bank’s logo with the bank itself that just the appearance of the logo can make whatever it’s attached to seem trustworthy.

In contemporary society, this is intentional; businesses do a lot of work and spend a lot of money to associate things like logos with the business, and to attach the logo to our emotional response. But what that means is that the logo and the familiar Web site layout make us trust the fraudulent phishing site. Those things carry more weight with us than, say, the padlock that shows a secure connection, or the URL of the site, because nobody has taught us to look at those, while we have been taught to associate the logo with our feelings of trust in the bank. So we fall for the scam Web sites, and we voluntarily turn over information that we would otherwise be unlikely to give to anyone.
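As a rough illustration of why the URL matters more than the logo, here’s a small sketch (the hostnames are invented, not any real bank) that flags a link whose visible text names one site while the link actually points somewhere else, which is one of the classic phishing tells:

    from urllib.parse import urlparse

    def link_is_suspicious(display_text, actual_href):
        """Flag a link whose visible text names one host but whose target
        points at a different one -- a classic phishing tell."""
        shown = display_text if "://" in display_text else "https://" + display_text
        shown_host = urlparse(shown).hostname or ""
        real_host = urlparse(actual_href).hostname or ""
        return real_host != shown_host and not real_host.endswith("." + shown_host)

    # The visible text says "www.mybank.com", but the link really goes elsewhere.
    print(link_is_suspicious("www.mybank.com",
                             "https://www.mybank.com.secure-login.example/"))   # True
    print(link_is_suspicious("www.mybank.com",
                             "https://www.mybank.com/login"))                   # False

The real fix isn’t code, of course; it’s the habit of actually looking at the URL. The point is just that the trustworthy signal is sitting right there for anyone who has been taught to check it.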


So again what happens is that we see the Internet as a technological construction, and we seek technological solutions to security problems, when perhaps it might be more effective to see the Internet as a social construct, and teach people “never trust an email from anyone” or “never trust a Web site that does not show a padlock on it” the same way we teach people “don’t talk to strangers” and “don’t give your bank account number to people you don’t know.”

I’m not saying there’s no need for technological security, mind you. There are still folks who exploit technical flaws in computers, or who mount technical attacks like DNS cache poisoning or DNS rebinding. Securing computer networks is still a necessary thing to do, and on that score the Internet as it now exists gets pretty dismal marks.
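Some of those technical defenses are simple in principle, for what it’s worth. As one hedged sketch (hostnames invented for illustration), a standard mitigation for DNS rebinding is for an internal web service to refuse requests whose Host header doesn’t name the service itself, since a rebinding attack still arrives carrying the attacker’s hostname:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Hostnames here are invented for illustration.
    ALLOWED_HOSTS = {"intranet.example.com", "localhost", "127.0.0.1"}

    class HostCheckingHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            host = (self.headers.get("Host") or "").split(":")[0].lower()
            if host not in ALLOWED_HOSTS:
                # A DNS-rebinding request carries the attacker's hostname here.
                self.send_error(400, "Bad Host header")
                return
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"hello from the internal service\n")

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8080), HostCheckingHandler).serve_forever()

That kind of check protects the machine, though; it does nothing at all for the person sitting in front of it.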

But what gives the Internet its power is the way people use it, not the hardware that makes it up. It is a social construct; it’s essentially nothing more than a communication medium. And any time you have communication, you have the potential for cons and fraud. I really do think that we have not yet, as a society, learned to extend the same degree of distrust to the Internet as we have to things in “real life,” and as a result the natural tendency for us to believe rather than disbelieve is easily exploited on the Internet.