Quora: What Went Wrong?

I am more active on Quora than any other social media site. I’ve been there since 2012, in which time I’ve written over 66,000 answers that have received over 1.3 billion views.

It’s no secret that the site has gone steeply downhill recently, with wave after wave of scammers and, now, ch*ld p*rn profiles growing like a cancer on the site. I recently wrote a very long answer about why that is, and how Quora’s policies and procedures basically rolled out the red carpet for people selling ch*ld p*rn (there are now a number of organized CP rings active on Quora). Quora deleted that answer, so I’m re-posting it, with expansions and addendums, here.

If you read this on Quora before it was deleted, feel free to skip to the end, where I’ve added new material.


Why is Quora allowing itself to become a spam and porn site? There are lots of real porn sites without corrupting what used to be an intelligent debate forum. Also, too much scammer spam. Why aren’t the moderators doing their job?

The moderators aren’t doing their jobs because (and I say this as someone who has interacted with many moderators and high-level admins, and had many lengthy conversations with them) they cannot.

I don’t mean they can’t as in they don’t know how to…well, no, that’s not true. Some of them don’t know how to.

Sorry, this answer got really, really, really long. It’s my analysis of the many failure modes of Quora leadership and moderation, based on hundreds of interactions with Quora employees, moderators, and administrators, including cofounder and CEO Adam D’Angelo, concerning tens of thousands of Quora scammers and spammers. It’s also based on multiple security issues and bug reports I have made to Quora, and what happened after, and on being stalked, doxxed, and harassed on Quora (and having my father and my wife doxxed and harassed on Quora), and what happened after.

But you asked, so here we go.

*** CAUTION *** CAUTION *** CAUTION ***

This answer is my opinion, based on my experiences with Quora. I do not work for Quora (well, I might as well do, with all the bug reports and reports of scammers I send them, but I’m not paid for it), I have not seen Quora’s back-end code, and I don’t have any insights into Quora’s management beyond my personal interactions with Quora admins. So take this with a grain of salt.

Problem 1: Absent Leadership

Let me start at the top. I’ve met Adam D’Angelo in person twice at Quora-sponsored events. In person, he comes across as an introverted, painfully shy dude with limited or no theory of mind and no real understanding of how social media works. Stick a pin in that, we’ll come back to it in a bit.

These days, he’s an absentee landlord. He sits on the board of directors of OpenAI and pays very little attention to Quora.

And yet, at the same time, I’ve talked to Quora mid-level employees who have expressed frustration that they would love to implement technical solutions to address some of the worst problems they see with scammers and spammers, but they can’t do so without sign-off from upper management, which is pretty much absent. That’s one problem. Quora is, from a leadership perspective, a rudderless ship, adrift without a captain.

Problem 2: No built-in anti-abuse defenses

I run a very small Mac troubleshooting forum, and I also run half a dozen blogs. All of those sites have simple anti-abuse measures like flood control, dupe control, and username control. That means I can, for example, ban creation of certain usernames. That means, with the click of a button, I can stop this from happening:

And I can stop this from happening:

Quora can’t.

These are all user profiles that are active on Quora right now. Quora literally lacks the capability to block usernames with certain words or phrases. It was never part of the codebase from the start.

Quora also cannot do dupe control (flagging or blocking when a user posts the same word-for-word identical content over and over and over) or flood control (flagging or blocking when one user posts 80 times per second, which obviously indicates a spambot, not a real human being).
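To be clear about how basic these three controls are, here is a minimal sketch of all of them in a few dozen lines of Python. The names, patterns, and thresholds are hypothetical examples of mine, not Quora’s actual rules or code:

```python
import hashlib
import re
import time
from collections import defaultdict, deque

# Hypothetical blocklist for illustration only.
BANNED_NAME_PATTERNS = [re.compile(p, re.IGNORECASE)
                        for p in (r"elon.?musk", r"keanu.?reeves")]

class AbuseFilter:
    def __init__(self, flood_window=10.0, flood_max=5, dupe_memory=1000):
        self.post_times = defaultdict(deque)          # user -> recent post timestamps
        self.seen_hashes = deque(maxlen=dupe_memory)  # hashes of recent content
        self.flood_window = flood_window              # seconds
        self.flood_max = flood_max                    # max posts per window

    def username_allowed(self, name):
        """Username control: reject names matching the blocklist."""
        return not any(p.search(name) for p in BANNED_NAME_PATTERNS)

    def post_allowed(self, user, text, now=None):
        now = time.time() if now is None else now
        # Flood control: too many posts inside the window -> reject.
        times = self.post_times[user]
        while times and now - times[0] > self.flood_window:
            times.popleft()
        if len(times) >= self.flood_max:
            return False
        # Dupe control: word-for-word identical content -> reject.
        digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
        if digest in self.seen_hashes:
            return False
        times.append(now)
        self.seen_hashes.append(digest)
        return True
```

This is toy-scale, of course; a real site would back the counters with something like Redis. But the point stands: the logic itself fits on one screen.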

In 1997, I ran a forum for a few years that had automated, built-in username filtering, dupe control, and flood control.

In 1997.

This is what I mean when I say that Adam D’Angelo has no understanding of how social media works. He was the CTO of Facebook, and he does not have the slightest clue how people use social media, how people interact with social media, or how people abuse social media.

Problem 3: Buggy code riddled with security holes

In December 2018, hackers penetrated Quora using significant security holes and stole the entire Quora user database. They got everything, including passwords, because Quora stored the user passwords in plain text, not encrypted, on disk.

This is Security 101. You never, ever, ever store passwords in plain text. The way every site and operating system has stored passwords since the 1970s is hashed: when someone types a password, you run it through a one-way hash function, then compare the result to the stored hash to see if they match. The password itself never touches the disk.
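The store-only-a-transform scheme described above can be sketched in a few lines using Python’s standard library (this is a minimal illustration; a production system would use a vetted scheme like bcrypt or Argon2):

```python
import hashlib
import hmac
import os

def store_password(password: str) -> tuple[bytes, bytes]:
    """Hash the password with a random salt; only salt + hash are stored."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest  # this pair is all that ever reaches the database

def check_password(password: str, salt: bytes, stored: bytes) -> bool:
    """Re-derive the hash from the typed password and compare."""
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(digest, stored)
```

An attacker who steals the database gets salts and hashes, not passwords, and has to brute-force each one individually.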

I had a TRS-80 as a kid in the 70s. It let you lock files on floppy disk with a password. It stored the password encrypted on disk so someone with a disk editor couldn’t find it.

Quora did not. Quora, a site with hundreds of millions of users, stored everyone’s password in plain text.

If that makes you deeply worried about Quora’s approach to security, you should be, because…

Problem 4: Quora’s codebase is an insecure mess

Quora has no Chief Security Officer. Quora’s codebase is riddled with security flaws, in part because they insist on writing their own code to do everything rather than using public libraries, and Quora’s developers from the earliest days onward did not know about and did not think about security. (See Problem 3. Nobody stores 100,000,000 users with plain-text passwords. Nobody.)

I have personally reported several security vulnerabilities that were actively being exploited to Quora. I’ve never heard back except for a bland “thank you for your bug report, we will pass it along to our developers.” In at least one of those cases, I saw the vulnerability being exploited months after I reported it.

The vulnerabilities I reported all had to do with flaws in the way Quora handles Unicode.

Brief (I hope) technical digression about what that means: “Unicode” is a way to represent text characters. Computers were largely invented in the US and Britain, so they started out being able to understand only the uppercase and lowercase Latin alphabet, numbers, punctuation, and some special control characters. That was it.

That means that for the first decades of the computer revolution, you could not type

Naïve

or

美丽

or

товарищ

For decades, you typed unaccented Latin characters or you typed nothing. No accented characters like the ï in naïve, no Cyrillic, sure as hell no Chinese.

Unicode was a system developed in the late 80s/early 90s to extend the old way that computers represented text, to allow for everything from accents to foreign-language alphabets to ideographic text to, later, “emoji” like 😮 and ✅.

The problem is that it had to be backward compatible with the old way to represent text or else every single computer program on earth ever written in English text would not work with the new system.

So the answer was a new way to represent text and symbols that still worked with the old system, but extended it to support millions of characters while still displaying old-fashioned characters correctly.

As you can imagine, Unicode is massively complex. Massively. Like unbelievably bogglingly complex.

Lots of people have written free open-source libraries for handling, storing, retrieving, and displaying Unicode. Quora refused to use them.

Instead, Quora wrote its own Unicode handling software. The thing about Unicode is that some characters are represented by a single byte (the uppercase letter A is the number 65, or 41 in computer hexadecimal (base-16) notation), some take multiple bytes (the lowercase a with a grave accent, à, is the code point U+00E0, which takes two bytes to encode), and some characters are represented as a list of instructions (basically “draw this letter and make these marks over it”). Each mark is represented by its own number.

That means that some byte combinations are illegal: they don’t represent any character at all. These are called “invalid character sequences.” A correct decoder is supposed to detect invalid sequences and display them as �.

Quora doesn’t do this. Because of bugs in how Quora handles Unicode, some invalid character sequences aren’t detected as being invalid. This is how trolls can create usernames that do not show up on Quora and can’t be clicked. If you see a troll answer where the name of the person who wrote the answer is just a blank (there’s nothing there), the troll is exploiting a flaw in Quora’s home-grown Unicode handling.

Worse, you can smuggle commands to Quora’s software by packaging the commands inside of invalid Unicode. This is similar to SQL injection but instead of wrapping the command in quote marks or SQL comment strings you wrap the commands in broken Unicode.
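Here is a sketch of the difference between a careful decoder and a careless one, assuming raw UTF-8 bytes coming in from a client. This is purely illustrative; Quora’s actual code is not public, and these function names are mine:

```python
def render_text(raw: bytes) -> str:
    """Lenient display path: invalid byte sequences become the U+FFFD
    replacement character (the familiar box-with-question-mark) rather
    than slipping through unnoticed."""
    return raw.decode("utf-8", errors="replace")

def validate_or_reject(raw: bytes) -> str:
    """Strict input path: refuse invalid sequences outright, so broken
    Unicode can't be used to smuggle anything past later processing."""
    try:
        return raw.decode("utf-8", errors="strict")
    except UnicodeDecodeError as exc:
        raise ValueError(f"rejecting invalid UTF-8 at byte {exc.start}") from exc
```

For example, the byte pair `b"\xc0\xaf"` is an invalid (overlong) sequence: the lenient path renders it as two � characters, while the strict path refuses it entirely. A site that does neither, and passes raw unvalidated bytes through its own ad-hoc decoder, is where the blank usernames and injection tricks come from.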

I’ve reported two different Unicode injection vulnerabilities to Quora. One of them was still actively being abused months later.

Problem 5: Quora does not take security or abuse seriously, and so Quora has become one of the favorite places for scammers and hackers on the Internet

Right now, Quora is struggling with a massive, staggering influx of people selling child abuse images.

I typically report anywhere from 100 to 300 or more romance scam and child abuse accounts to Quora every single day. I log and track every account I report. Yesterday I reported 164 accounts. 33 of those were offering child abuse images for sale, 23 were offering preteen child abuse images for sale, and 3 were offering toddler child abuse images for sale. I spend about an hour a day doing it and it makes me sick to my stomach but I cannot, I cannot stop doing it. I’ve tried. I just…I cannot see it and not do anything.

There is a site called Black Hat World. It is a site where scammers, spammers, computer virus distributors, ransomware distributors, child abuse sellers, and other scum and vermin get together to talk about ways to make the world a shittier place.

I sometimes read Black Hat World. They talk about Quora a lot on Black Hat World. They exchange tips and techniques for running scams and selling child abuse images on Quora. There are at least four organized child abuse rings operating on Quora right now [edit: five, I’ve found another], in addition to all the various random independent child abusers running on Quora.

Black Hat World loves Quora because of its combination of poor security, weak or nonexistent automated controls, and lax, permissive moderation. There are tutorials on Black Hat World for scammers and spammers wanting to do their thing on Quora. Actual step by step tutorials.

This all started because of this woman:

Well, not directly because of her, it wasn’t her fault.

This is Paige Spiranac.

Ms. Spiranac is a pro golfer and a model. Almost exactly two years ago, a romance scammer arrived on Quora and used stolen photos of Ms. Spiranac to run his romance scams.

I saw the account and reported it to Quora.

Nothing happened.

I reported it again.

Nothing happened.

I reported it a total of eleven times.

Nothing happened.

I emailed Ms. Spiranac’s agent and said, “hey, just so you know, your client’s identity has been stolen and her photo is being used as part of a romance scam operation on a social media site called Quora, here’s the profile that is using her photo.”

The next day I got a very polite email from Octagon Agency, the company representing her at the time, thanking me for my email. The day after that, the scam account was taken down, I assume because Ms. Spiranac’s team sent Quora a DMCA takedown notice.

But it was too little too late.

The scammer running the account ran to Black Hat World and was like “hey, everyone, there’s this site called Quora that permits romance scammers!” and the floodgates opened.

Now here’s the thing:

Any site that allows romance scammers will get flooded with romance scammers, obviously. But as the concentration of romance scammers rises, pretty soon there are tons of scammers competing for the same pool of lonely, gullible victims.

So the scammers start specializing. A new wave of scammers arrives who try to scam people with very specific tastes. They’ll pretend to be trans women to appeal to trans chasers. They’ll pretend to be BDSM dominants to try to scam thirsty, gullible subbies. They’ll pretend to be foot fetishists to appeal to people with foot fetishes.

If that second wave goes unchecked, then the third wave arrives, people who pretend to be underage children in order to appeal to…well, you know.

If that third wave goes unchecked, the child abuse rings are like “oh my God, this site permits romance scammers that pretend to be children, we have free rein” and the fourth wave is people selling child abuse images.

This is exactly what played out on Quora.

It took about eighteen months between that one scammer going to Black Hat World and saying “hey everyone, run your scams on Quora” and the child abusers arriving in force.

There’s a lesson here: If you run a social media site, and if you do not crack down immediately and hard at the first sign of romance scammers, you will, you will attract child abusers. It’s inevitable.

At this point, Quora cannot keep up. Of the four child abuse rings I’ve seen here, each makes on average about 20 new profiles a day. You can tell who they are because they all use the same contact information for purchasing their child abuse images. You can tell they’re using bots because they all use word for word identical profiles, the same usernames, and the same images over and over again.

Remember Problem 2: no built-in anti-abuse measures. Quora has no automated way to detect identical profiles, nor to block or flag based on certain usernames or certain strings in the profile descriptions. That means Quora moderators have to do manual searches.

And they’re bad at it. Say a child abuse ring uses the name “Tina.” (This is an example; to my knowledge, they don’t.) They’ll use a bot to create identical profiles over and over. They might, for example, be

Tina-1207
Tina-1208
Tina-1209
Tina-1210
Tina-1211
Tina-1213

and so on.

Quora moderation will ban Tina-1209 and Tina-1211 but leave the others, because you have to do a hand search to find the others and it’s tedious.

That leads to two more problems:

Problem 6: Quora’s back end tools are badly broken

I’ll give you an example:

On my own Quora space, I will often write about the child abuse profiles I report to Quora. These posts often get deleted by Quora moderation.

If Quora would delete child abuse profiles as aggressively as it deletes Spaces posts about child abuse on Quora, we wouldn’t be here, but moving on:

When Quora moderation deletes a post in a Space, when I appeal, there’s a little dance I have to do.

Quora will usually send an answer that says “We cannot undelete this content because a Spaces admin deleted it.”

Then I send back “no, you deleted it, look at this” with a screenshot that clearly says Quora deleted the post.

Then I get an answer that says “we’re so sorry, our back-end administration tool shows that you deleted the post, it’s a bug in our moderation tools, we will undelete it” and they fix it.

I’ve done this over. And over. And over. And over.

They know there’s a bug in their moderation software, one that wrongly displays to Quora moderators that a Spaces post that was deleted by Quora was actually deleted by a Space admin.

You have to keep reminding them about this bug over and over because different employees handle the appeals and each employee doesn’t know about the bug so you have to tell them “look closer, there’s a bug in your software” and they’re like “Oh! Look at that, you’re right!”

They have never fixed the bug.

They have never trained their staff that the bug exists.

Every time, you’re starting from scratch because this poor training means Quora has no institutional memory of the flaws and bugs in their own site administration software.

This same sloppy, shoddy approach to their back-end tooling exists at every level of the Quora stack from top to bottom.

For example, a few days ago I went through another little dance with Quora moderation. I had an answer deleted for spam. Then I appealed, and it was undeleted. Minutes later, it was deleted again.

10:36: I got an email saying they’d looked at the answer and decided it wasn’t spam.
10:38: They undeleted it.
11:03: They deleted it again.

I appealed again and it was undeleted again. This morning, it was deleted again.

Quora’s tools have no provision for a human moderator saying “Quora moderation bot, we’ve looked at this answer, it’s fine.”

That costs Quora money, because every time this happens, a Quora moderator has to stop what he’s doing, check the answer again, and undelete it again.

There are a ton of other, more subtle flaws, too.

After Quora deletes a child abuse profile, they sometimes delete the profile description, which usually contains an address for buying child abuse images, and sometimes they do not: the profile stays deleted, but the profile description advertising child abuse images for sale, complete with the address to buy them, remains.

I asked a Quora admin about this. I got a reply telling me it was a problem in their moderation tool and they’re “aware of it and working on it.”

What’s worse is that they never delete the profile Credentials, so the child abuse rings have learned to put the ads for child abuse images inside the credentials, where they remain visible even if the profile is banned.

I wrote a rather angry email to Quora admins about this and here’s what I got back:

Here’s the thing:

This is wrong. This is not correct. You do not have to visit the deleted profile by a direct link to see this. The screenshot above is not a direct link to the profile. A deleted profile’s credentials remain visible in countless places throughout Quora, including in other users’ Followers and Following lists.

Quora’s own admins and moderators DO NOT KNOW HOW QUORA OPERATES.

I don’t believe this Quora employee was trying to lie to me. I believe this Quora employee honestly, seriously doesn’t understand how Quora’s software works.

Problem 7: Quora’s moderators are incurious and not proactive, probably because they’re overworked and underpaid

Say you report a profile like Keanu-Reeves-359 for impersonation.

Quora admins will delete it. What they will not do is say “oh, if there’s a fake Keanu Reeves #359, I wonder if there is a fake Keanu Reeves #358. And a fake Keanu Reeves #357. And a fake Keanu Reeves #356.”

Nope. They will delete Keanu Reeves #359 and move on.

This is especially bad with the child abuse profiles.

If you report two profiles, one a child abuse profile that is using the name Tina-1208 and another, created a few milliseconds later and identical to it called Tina-1209, they won’t go “huh, a bot is making child abuse profiles one right after the other like a machine gun. I better look at Tina-1207 and Tina-1210, too.”

Nope.

They also don’t stop and ask themselves what profile names mean if they aren’t in English.

I reported this troll profile 7 times. The first time I reported it, it was banned a few hours later. I reported it six more times after it was banned because, well, see for yourself:

Quora policy forbids hate speech in usernames. When a profile whose username contains hate speech is banned, Quora is supposed to delete the username as well.

Which they usually do. If the username is English.

Six more times I reported this profile, explaining what the username means in English. Six more times they did nothing.

Why did I keep reporting it after it was banned?

Finally, finally, after seven reports, finally, after I emailed my Quora contact directly with a screenshot of the user profile AND a screenshot of Google Translate, finally Quora removed the username:

Quora is totally fine with a username “We Must Exterminate the Jews”…as long as it is not in English.

These problems, broken tools and incurious admins, arise from the next problem:

Problem 8: Quora has no money for, or apparently interest in, paying moderators, hiring developers, or fixing the toolchain

Quora started out with no revenue model. When Quora was first founded, it was pitched to investors as a site that would collect and distill human knowledge and make it searchable.

In 2019, it had a valuation of $2 billion.

Then ChatGPT came along, and overnight Quora lost three-quarters of its valuation, from $2 billion to $500 million, because investors were like “why would someone ask Quora when they can ask ChatGPT?”

That’s why Adam D’Angelo pivoted to AI and why he now sits on the board of OpenAI. It’s why Quora is a rudderless ship.

In 2021 or thereabouts, Quora started to run out of money. With the advent of LLMs, the venture capitalists didn’t see the value in Quora anymore. Its valuation collapsed by 75%. The VCs closed the money spigots and Quora was left to sink or swim on its own.

Quora responded by…

…firing the moderation team.

Adam is pitching an AI moderation bot for sale to other social media sites.

This AI moderation bot cannot look at usernames and ban based on users calling themselves Keanu Reeves or Elon Musk.

This AI moderation bot cannot say “this Telegram username is associated with a seller of child abuse images so I will flag or delete posts where this Telegram username appears.”

This AI moderation bot cannot automatically spot and ban profiles called “Fuck All N—-rs.”

Quora keeps trying to train an AI moderation bot, using LLMs or whatever, to spot things like fake Keanu Reeves profiles or child abuse profiles, because once you’ve scaled to hundreds of millions of people and billions of posts, it becomes very difficult to retrofit basic features like flood control or username filtering.

They could do it, but it would be expensive, so they’re left trying to fine-tune their recipe for chicken cordon bleu while the entire kitchen burns down around them.

I’ve had so many conversations about the romance scam problem and the child abuse problem with everyone from frontline Quora employees to high-level Quora admins and I 100% believe that nobody, nobody at Quora, nobody understands the scale of the problem, nor how hard it is to get rid of these people once they’ve established a presence.

I actually have more to say; there are at least three more points I could make, including a significant worldview issue on the part of Mr. D’Angelo, but I’ve already spent hours on this answer and it’s way, way longer than a Quora answer should be.

If you’ve read this far, congratulations! Welcome to my world. For someone who genuinely loves Quora, it’s disheartening and kind of sickening.

I do love Quora. Quora’s been good to me. I’ve met so many people who have become personal friends in the real world outside Quora. I’ve met a lover and co-author here.

But it’s getting harder and harder to stay. I reported a string of profiles selling child abuse images of toddlers—toddlers!—yesterday and it made me want to throw up. When I was done I had to leave the house and go to a coffee shop to get the stain out of my head. It’s wearing me down and I still can’t stop, because if I’m not reporting these, who is?

tl;dr: Quora was founded by someone who doesn’t understand computer security or social media. Quora has never, ever been proactive about preventing abuse. As a result, Quora never implemented the most basic front-line security or anti-abuse measures, measures that were available in free open-source software in 1997, and now lacks the resources to address the problem.

Quora’s own employees also don’t understand Quora itself, their own software, or the scale of the problem in front of them.

I’ve saved this post. In the event Quora deletes it, which I put at about a 50/50 chance, I will make it available on my blog.


So that’s the Quora answer.

After I posted this, it was deleted by Quora admins, then undeleted, then deleted, then undeleted, then deleted again. As I type this right now, it’s still deleted, but I’ve filed another appeal so it will be interesting to see if it gets undeleted again.

Whilst it was available, several folks asked if I would expand on the part where I said I have more points to make, so here they are:

Problem 9: Quora’s algorithm is broken

Like most social media sites, every Quora user sees a different feed. There’s too much content to show anyone the firehose directly, so the Quora algorithm listens to your interactions to learn what content you want to see. For example, if you downvote content, Quora tries to show you less of that kind of content. If you upvote content, Quora interprets that to mean you would like to see more like that. The more you interact, the more Quora tunes your feed.

Trouble is, Quora sometimes gets its wires crossed.

Quora interprets downvoting and muting as negative signals, and commenting and upvoting as positive signals. But bizarrely, it interprets using the Report feature to report users or content as a positive signal.

If you report lots of romance scammers, you start to see more and more romance scammers. If you report spammers, you see more spammers.

Even worse, Quora sends customized “digests” in your email. I get a digest full of stuff that Quora thinks I might like to see in email every day. Usually it’s full of answers on topics like science or linguistics or computers or math.

Lately it’s been full of romance scammers.

I want you to take a step back and let the magnitude of that sink in. Quora sends out romance scam content in emailed digests. Today’s digest included nine pieces of content. Three of them were romance scam posts.

Problem 10: Quora is remarkably tolerant of sexual abuse

Amazon AWS is one of the largest Web hosts and storage engines on the planet. A staggering amount of content, including Quora itself, runs on AWS.

Whatever you may think of Amazon (and there’s plenty to dislike about Amazon), Amazon is fanatical about dealing with ch*ld p*rn. Amazon despises child abuse.

Amazon donates a tremendous amount of money, millions a year, to support the National Center for Missing and Exploited Children (NCMEC).

Amazon maintains an internal team, separate from their normal abuse team, to deal solely with reports of child sexual abuse on their networks.

Amazon, as a matter of policy, logs and tracks every single child abuse report it receives. This information, again as a matter of policy, is forwarded to Amazon contacts within the FBI, and to NCMEC.

Amazon maintains a database of child abusers, and hashes of child abuse images, which it makes available to law enforcement.

Amazon does not fuck around when it comes to child abuse. They have an ultra-strict policy, and they will strike down with great vengeance and furious anger anyone who uses their network for child sexual abuse. Hosting CP on Amazon is like calling down a targeted missile strike on your own location.

Quora, which is hosted on Amazon AWS…does not.

If you create a profile, or five profiles, or a hundred and fifty profiles, on Quora offering child sex abuse materials for sale, Quora will (well, I say will, Quora might) ban your account. It will not do anything beyond that.

The sellers of child abuse materials on Quora know that they need fear no repercussions beyond having their accounts banned…and maybe not even that. They operate brazenly and boldly on Quora, even posting profiles that literally say “CP for sale here, all ages available!”, because they know nothing will happen to them.

Why the pizza emoji? The slice of pizza emoji has become something of a universal signifier of those selling child abuse images. CP: Cheese Pizza. CP: Ch*ld P*rn. Get it?

How did Quora get here? What systemic failures led Quora to be the Internet’s hotspot for romance scammers and ch*ld p*rnographers?

Problem 11: Ayn Rand

Adam D’Angelo, Quora’s cofounder and absentee CEO, is the kind of Big-L Libertarian who mainlines Ayn Rand directly into his veins.

He’s one of those techbro Libertarians who believes, I mean really truly believes, that the solution to bad speech is more speech, as if more speech is a magic wand that somehow magically erases bad actors, scammers, spammers and ch*ld p*rnographers.

His fundamental worldview is one in which acting against any speech, even “we have pictures of toddlers being raped and would you like to buy them?”, is anathema.

I believe this is why Quora has no built-in mechanisms to prevent any Tom, Dick, and Harry from creating an account called “Elon Musk” and putting up posts offering free Bitcoin if you just deposit money into an account to, you know, pay for “fees.” It’s why you can create an account called Keanu Reeves or Sandra Bullock and the system will just let you do it, because hey, we wouldn’t want to risk the real Keanu Reeves making an account and running into some kind of barrier, right? It’s why there are thousands of fake Keanu Reeveses and thousands of fake Elon Musks and so on, and why Quora’s moderation, what’s left of it, is purely reactive and not proactive.

The problem is, we’ve seen over and over and over again that this approach does not work. It’s empirically not true. But it’s a religious idea among a certain kind of techbro; they want it to be true, so they treat it as Revealed Gospel, never to be questioned.

Any site that doesn’t take action against romance scammers becomes a ch*ld p*rn site

Image: Melpomene on Depositphotos, Karich on Depositphotos

I am, as many of you know, an active user on the question and answer site Quora, where I’ve been posting since June 2012.

I just sent a very long email to a contact I have at Quora admin, with a cc to Quora’s legal team and the founder/CEO’s personal email address.

I suppose I should have known it was coming. In January of 2023, almost exactly two years ago, I saw my first romance scam account on Quora. It used a photo of golfer and model Paige Spiranac to try to separate lonely men from their money. I reported the profile to Quora moderation 11 times, without any result, so finally, on January 22, 2023, I emailed Ms. Spiranac’s agent. I received a polite reply on January 23, and the bogus profile was banned on January 25, so I assume Ms. Spiranac’s team sent a DMCA takedown.

Too little, too late. The message came through loud and clear: “Quora has weak moderation that is tolerant of romance scammers.”

The floodgates opened. Today, Quora is the Internet’s Ground Zero for romance scammers; there are tens of thousands of fake profiles. I report every one I encounter. A few months back, Quora admins asked me to stop reporting them one at a time, so now I note the profile URLs and report them all in one go at the end of the day, typically 200-300 a day.

Universal law of social media:

Every site that doesn’t take action against romance scammers inevitably becomes a ch*ld p*rn site.

It happens in stages.

First, a romance scammer discovers a site. He (almost all romance scammers are “he”) sets up a profile. It doesn’t get banned. He tells his buddies, who also set up scam profiles. Word spreads.

Pretty soon, there’s a huge number of romance scammers, all fighting for the same pool of lonely, gullible marks.

They start “sniping”: one scammer will start commenting on other scammers’ profiles, trying to cut in on marks who respond to scam posts. They start angling for niche marks rather than shotgunning a general approach: some will pretend to be trans women; some will pretend to be heavy women to try to attract “chubby chaser” marks; some will pretend to be BDSM dommes, looking for kinky marks.

Then come the ones using stolen photos of underage children.

If those profiles remain without getting banned immediately, that sends a signal to the ch*ld p*rn community: This site is tolerant of exploitation of minors.

That’s when they move in: people offering CP/CSAM images for sale. They use all kinds of euphemisms: “cheese pizza” (CP), “hot yummy pizza images.”

At first, these are individual low-level sellers. If these accounts remain without being banned, then the organized CP rings move in.

That’s the background.

This morning, I sent a lengthy email to my contact in Quora administration, with a cc to Quora’s legal team and to Quora’s CEO.

In the past few weeks, the number of profiles openly advertising CP for sale has skyrocketed. Yesterday, I found three organized CP rings operating scores of profiles on Quora.

I call these CP rings the “Evelyn ring,” the “Mornay Ivan” ring, and the “Purple Knott” ring, after the profile names and Telegram addresses they use. Out of respect for the victims whose images are being exploited, I’ve pixelated and blacked out the images of the victims; the CP profiles themselves don’t.

The “Evelyn” ring:

The “Mornay Ivan” ring:

The “Purple Knott” ring, which seems to specialize in child bestiality:

Every day I report these. Every day Quora bans most (not all) of the accounts I report. And every day there are more, even though these rings create identical profiles with identical content.

Being stalked on Quora didn’t put me off the site. Getting death threats on Quora didn’t put me off the site. Being doxxed on Quora didn’t put me off the site. Having my content plagiarized didn’t put me off the site. This? This might put me off the site.

Mailchannels: Best friend of scammers, phishers, and spammers

In November of last year, I noticed something interesting.

For the past three years, the #1 source of spam reaching my email inbox has been Salesforce, which bought out a bulk email provider called ExactTarget quite some time ago and took off all the constraints. ExactTarget customers were, post-acquisition, permitted to spam, and the abuse team stopped enforcing anti-spam policies. Result: spammers flocked to Salesforce (hey, Salesforce needed to make back the $2,500,000,000 it spent on ExactTarget somehow!) and my inbox was flooded with crap.

Starting last November, however, the flood of crap from Salesforce dropped to second place. The new #1? An outfit called Mailchannels.

As near as I can tell, Mailchannels is now the preferred email delivery service for the lowest of the low: scammers, people sending fake phish emails to steal passwords, romance and Nigerian prince fraud, you name it.

Over the past few weeks, 46 of the 48 phish emails I have received (95.8%) came through Mailchannels. 100% of the Nigerian prince scam emails I’ve received? Mailchannels. 100% of the romance scams I’ve received? Mailchannels. 92% of the spam overall? Mailchannels.

I took a screenshot of the Mailchannels emails I’ve received a while back, and the results are rather grim:

Wow, that’s a lot of scam, fraud, and phish emails! With percentages like that, Mailchannels must be so proud.

There’s a particularly delicious irony here. See the highlighted entry at the bottom, the one in blue? I have been reporting all the spam emails to Mailchannels. That entry is a bounce: when I reported a computer virus I received through Mailchannels, my report bounced.

In other words:

Mailchannels knew the email was malware. They sent it to me anyway, but refused to accept it themselves.

Which really tells you everything you need to know about this organization.

What is Mailchannels?

Mailchannels is an “email delivery company.” In English: You pay them money, you send an email to hundreds or thousands or tens of thousands of email addresses, and they do everything in their power to make sure your emails don’t get flagged as spam.

A list of their services includes:

  • Sending emails from “clean” IP addresses not in any spam blocklists.
  • Switching the servers an email comes from should emails start getting flagged as spam.
  • Using scalable cloud servers to send vast quantities of emails.

In other words, if you’re sending Nigerian scam or romance scam or password phish emails, which have a very low rate of return, a service like Mailchannels is exactly what you want.

How do they respond to spam reports?

Ah HA ha ha ha ha ha ha ha ha ha.

I’ve sent hundreds (literally) of spam reports to Mailchannels. Every single one received the same reply:

From: Swathi Karun <skarun@mailchannels.com>
Re: Spam

Hi, Thank you for contacting MailChannels support. I have taken necessary action against the reported abuse activity. Thank you for your time and attention to this matter.

And the spam still rolls in. Every day, often from the same spammer with the same content. They don’t even block phishers who send identical phish emails through their servers over and over again.

It cannot possibly be more clear: Mailchannels is a bulletproof spam service provider that, through deliberate action or negligence, permits its service to be used by the lowest criminals on the Internet.

What can you do?

Mailchannels doesn’t care. They know they’re in the spam business; they make money from delivering phish and scam emails. They don’t accept spam reports from spam-fighting services like Spamcop.

And repeated emails to Mailchannels’ abuse team don’t do anything. There’s one email phisher in particular who sends out fake emails to Dreamhost customers, trying to steal their webhosting passwords; I’ve received more than two dozen of these phish emails from this same phisher through Mailchannels, reported every one, and they keep rolling in.

Fortunately, emails from Mailchannels are easy to spot. If you view the headers, you’ll always find a line like this near the top:

I strongly recommend setting up an email filter using your email program. If the headers contain the word “Mailchannels,” auto-delete the email. Your inbox will thank you.
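If your mail setup lets you script filtering, the header test such a filter performs is trivial to sketch in Python with the stdlib email module. The sample message below is invented for illustration; check your own spam’s raw headers for the exact strings your filter should match:

```python
import email
from email import policy

def is_mailchannels(raw_message: str) -> bool:
    """Return True if any header name or value mentions Mailchannels."""
    msg = email.message_from_string(raw_message, policy=policy.default)
    for name, value in msg.items():
        if "mailchannels" in name.lower() or "mailchannels" in str(value).lower():
            return True
    return False

# Invented example of a message relayed through a Mailchannels host
sample = (
    "Received: from relay.mailchannels.net (relay.mailchannels.net)\n"
    "From: someone@example.com\n"
    "Subject: You have won\n"
    "\n"
    "Body text.\n"
)
print(is_mailchannels(sample))  # True
```

Wire the same check into a Sieve rule or your mail client’s filter UI and route matching messages straight to the trash.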

Today in “Horrifying Cyberpunk Dystopia”

I sleep in a loft bed, to make more room for my computers and one of my 3D printers, which I keep under the bed.

I needed a new floor lamp, and because I’m lazy, I wanted something I could turn on and off remotely without climbing out of bed. So I found a floor lamp on Amazon that advertised remote control capability.

Imagine my surprise when I opened the box and found no remote, just a QR code to download a smartphone app.

Buckle up, because this story is about to take a turn that would make William Gibson cringe.

My first hint something was wrong came when the app forced me to create an account on the manufacturer’s server before I could pair with the lamp.

But hey, I wanted to see how deep the rabbit hole went, so I made an account. The answer is “pretty deep.”

Once you pair over Bluetooth, the next thing you do is give the lamp your WiFi password. You also must enable location services, so the lamp knows your location. (The software won’t work if you don’t.)

Once the lamp knows your location, you have a choice to make. It asks if you’d rather use the microphone in your phone, or the one built into the lamp.

Yes, you read that right. The lamp connects to your WiFi and your phone, knows where you are, and has a built in microphone.

Once you’ve made that particular Hobson’s choice, the app asks you to upload a selfie, so it can—get this—run facial recognition and AI expression analysis.

Why? So it can suggest a lighting scheme based on your mood.

The Terms of Service allow the manufacturer to store your face and do both facial recognition and AI analysis.

I uploaded a photo of a cat rather than my selfie.

You’re then connected to a community of other lamp users, so you can exchange lighting patterns and such…because, of course, it is a truth universally acknowledged that a person in possession of a floor lamp must be in want of a way to exchange lighting suggestions with complete strangers.

Here’s the light it suggested based on AI analysis of a cat.

The lamp was originally slated to arrive from Amazon on Monday, but when Monday came I got an email telling me that delivery was delayed and it would arrive on Tuesday.

Were I of a paranoid bent, I might believe that the delay allowed a government three-letter agency to intercept the shipment so they could do a supply chain attack, rerouting the lamp’s connection to the host servers (which is a really weird thing to say, if you think about it) through them as well.

George Orwell believed in a future where the government constantly watched the citizens, recording every detail of their lives. George Orwell didn’t know about outsourcing.

Webmasters beware: Fake DMCA Scam

NOTE: This blog post was updated on January 25, 2025. Update at end.

If you own a website that uses stock images or even images you’ve taken yourself, beware a scam floating around that tries to trick you into putting links to another site on your pages.

I recently received a phony “DMCA Copyright Infringement Notice” sent by a scammer attempting to get backlinks to a site called KnowYourSins, a sex site run by two people named Samuel Davis (@Samueld_KYS on Twitter) and Olivia Moore (@Olivia_kys on Twitter).

The letter claims to come from a law firm called “Commonwealth Legal Services” in Phoenix, Arizona. Here’s a screenshot:

So, the first thing to know about this email is that it’s very unusual for a DMCA complaint, which is almost always a takedown request, not a request for a backlink.

The second thing to notice is that there’s a standard format for DMCA takedowns, and they must, by law, include:

  • Information reasonably sufficient to permit the service provider to contact the complaining party, such as an address, telephone number, and e-mail address.
  • A statement that the complaining party has a good faith belief that use of the material in the manner complained of is not authorized.
  • A statement that the information in the notification is accurate, and under penalty of perjury, that the complaining party is authorized to act on behalf of the copyright holder.
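A naive keyword check for that statutory language would flag this scam email immediately. The phrase list and sample texts below are illustrative; this is a heuristic sketch, not a legal test:

```python
# Phrases drawn from the statutory DMCA notice elements listed above.
REQUIRED_PHRASES = [
    "good faith belief",   # required belief statement
    "penalty of perjury",  # required accuracy/authority statement
    "authorized to act",   # agent authorization
]

def missing_dmca_elements(notice_text: str) -> list:
    """Return the required phrases a purported DMCA notice fails to include."""
    text = notice_text.lower()
    return [p for p in REQUIRED_PHRASES if p not in text]

# Invented sample resembling the scam email's demand
scam = "Add a link to our website within five business days or we will pursue legal action."
print(missing_dmca_elements(scam))  # all three phrases missing
```

A notice missing all three elements is not a valid DMCA notice, whatever its letterhead claims.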

The image itself comes from Unsplash, specifically this one, and it was taken by Eric Lucatero, who has no connection with KnowYourSins dot com.

Huh.

Commonwealth Legal Services

I looked at the website of the supposed “law firm” that sent it, justicesolutionshub.info. Now, the fact that it uses a .info top-level domain immediately set off warning bells in my head as well.

“Zoe Baker” signs this email “Trademark Attorney,” yet the page on justicesolutionshub.info lists “her” as a “business legal consultant.”

Huh.

On top of that, notice anything funny about all these headshots? Look closely.

Yup, they’re all generated by AI—specifically, they all come from This Person Does Not Exist.

How can you tell?

AI deepfake faces generated by This Person Does Not Exist always have eyes in exactly the same place, exactly the same size, and exactly the same distance apart. It’s a limitation of the GAN (generative adversarial network) software that creates the fake faces.

You can see it if you stack the faces on top of each other and make them translucent in Photoshop.
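Here’s a toy version of that stacking trick, using plain Python lists as tiny grayscale “images” instead of Photoshop layers. The pixel data is invented for illustration: the “eyes” (value 0) sit at the same coordinates in every face, while everything else varies:

```python
def average_images(images):
    """Pixel-wise average of same-sized grayscale images (0=black, 255=white)."""
    h, w = len(images[0]), len(images[0][0])
    out = [[0.0] * w for _ in range(h)]
    for img in images:
        for y in range(h):
            for x in range(w):
                out[y][x] += img[y][x] / len(images)
    return out

# Three fake "faces": eyes fixed at row 1, columns 1 and 3; varying elsewhere.
faces = [
    [[200, 180, 220, 190, 210],
     [230,   0, 240,   0, 225],
     [190, 210, 200, 215, 205]],
    [[150, 170, 160, 180, 155],
     [165,   0, 175,   0, 170],
     [160, 150, 165, 155, 158]],
    [[ 90, 110, 100, 120,  95],
     [105,   0, 115,   0, 108],
     [100,  95, 112,  98, 118]],
]
avg = average_images(faces)
# Because the eyes never move, the average stays pitch black there
# while the rest of the image blurs toward gray.
print(avg[1][1], avg[1][3])  # 0.0 0.0
```

With real photos you’d do the same thing with image layers at reduced opacity: the eyes stay sharp while everything else smears.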

I looked up “Commonwealth Legal Services” on Google. It turns out there are a bunch of different websites at different URLs all using the same exact web design with the same copy and the same pictures: justicesolutionshub.info, cwsolutions.biz, elitejusticeadvisors.biz (currently offline), and more.

The front page of justicesolutionshub.info shows a photo of a building. The office building is a stock photo rendering that you can put any logo in front of.

This is an Adobe Stock photo rendering created by digital artist “Esin.” A surprising number of phony fly-by-night bogus “companies” use this stock image as their corporate headquarters on their About or Contact pages.

Things really take a turn for the surreal if you put the address of “Commonwealth Legal Services,” 3909 N. 16th Street, Fourth Floor, Phoenix, AZ 85016 into Google Street View. This one weird trick produced results you aren’t going to believe:

Note the conspicuous absence of a fourth floor. As of this writing, the building is listed for sale.

Okay, so we have a fake DMCA takedown request from a phony law office attempting to blackmail me into putting a backlink to Know Your Sins from my site.

Know Your Sins

So, what is Know Your Sins?

It’s a more or less generic BDSM information site with precious little in the way of in-depth information, using largely AI-generated content and stock photos.

I can see a couple of possibilities:

  1. Know Your Sins is scamming in a desperate bid to attract backlinks and improve their search engine ranking.
  2. Know Your Sins is a victim; they hired a dodgy “we can boost your search engine ranking” scammer, not knowing that he was engaging in fraud.

I emailed the contact address at Know Your Sins, hello (at) knowyoursins (dot) com, to try to get some insight. So far, as of the time of writing this, I have not received a reply. I will update this blog post if they get back in touch with me.

I’ve also been in touch with several webmasters who have received identical DMCA complaints, at least one of whom was accused of pirating a photo he took, all with demands to link back to Know Your Sins.

The Know Your Sins domain registration is hidden by Privacy protect. I’ve filed a formal complaint with them, since they claim they’ll rescind the privacy protection on sites that engage in spamming or fraud. (I urge anyone who’s received one of these scam emails to do the same using the “report abuse” form here.) If they reply, I’ll post the results.

Isn’t there a penalty for false DMCA takedown requests?

No. Perhaps surprisingly, there isn’t.

There are penalties for impersonating a lawyer, and for fraud. The emails are definitely fraud, and I do not for even half a second believe the person sending them is a lawyer, so there may be avenues of legal action there. Others who have received these emails report that they don’t always demand a link to Know Your Sins (some demand links to other sites), so what’s most likely happening is that a scammer is selling his services to desperate website owners who want more Google backlinks and don’t care much whether the methods are on the up and up.

The lesson here

Genuine DMCA takedown requests must follow a specific legal format (including a statement, under penalty of perjury, that the person sending the request has a good-faith belief the claimed infringement is genuine), and they don’t ask for linkbacks.

If you get a “DMCA warning” or “DMCA takedown” that asks you to link to another site, you’re being scammed.

If you’ve received one of these fake takedown requests, I’d love to hear from you! I’m in the process of trying to strip the Privacy Protection from the knowyoursins domain registration, and the more examples I have, the better. Please feel free to email me at franklin (at) franklinveaux (dot) com.


UPDATE JANUARY 25, 2025

A lot of people have sent me copies of similar fake DMCA emails demanding linkbacks to knowyoursins dot com. The site is registered at GoDaddy. This morning, I had a long and interesting conversation with a member of the GoDaddy abuse team, who has told me that GoDaddy is opening an investigation into knowyoursins dot com for fraudulent DMCA takedowns and fraudulent backlink farming.

Have you received a “DMCA takedown” demanding a link to knowyoursins dot com? GoDaddy’s abuse team would like to hear from you.

Please visit the GoDaddy abuse reporting form at

https://supportcenter.godaddy.com/abusereport

Create a new report, choose the “Phishing” option, and in the details section, put a copy of the fraudulent email you received, with a brief explanation that you are reporting the site for fraudulent DMCA takedowns and fraudulent backlink farming.

And, of course, I’d love to see copies of the fraudulent emails you’ve received.

2024: The Year of Infinite Infosec Fail

First up in today’s game of “who fed it and who ate it:” Artificial Intelligence.

AI is everywhere. AI chatbots! AI image generators! And now, AI code assistants, that help developers write computer programs!

Only here’s the thing: AI doesn’t know anything. A lot of folks think these AI systems are, like, some sort of huge database of facts or something. They aren’t. They’re closer to supercharged versions of the autocomplete on your phone.

Which means if you ask an AI chatbot or code generator a question, it does the same thing autocomplete does: fills in syntactically correct words that are likely to come after the words you typed. There is no intelligence. There is no storehouse of facts it looks up.

That’s why AI is prone to “hallucinations”—completely imaginary false statements that an AI system invents because the words it uses are somehow associated with the words you typed.

AI Fembot says: The Golden Gate Bridge was transported for the second time across Egypt in October of 2016. (Image: Xu Haiwei)
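The “supercharged autocomplete” point can be made concrete with a toy bigram model. The corpus is invented, and real LLMs are vastly more sophisticated, but the principle is the same: predict a likely next word, store no facts:

```python
import random
from collections import defaultdict

# Tiny invented corpus. The model will only learn which word follows which.
corpus = ("the bridge is in san francisco . "
          "the bridge is in egypt . "
          "the bridge is golden .").split()

follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def complete(word, n=5, seed=0):
    """Autocomplete-style generation: repeatedly pick a likely next word."""
    rng = random.Random(seed)
    out = [word]
    for _ in range(n):
        nxt = follows.get(out[-1])
        if not nxt:
            break
        out.append(rng.choice(nxt))
    return " ".join(out)

print(complete("the"))
```

The output is fluent-looking text, but whether the bridge ends up in San Francisco or in Egypt depends on the dice, not on any knowledge of geography. That’s a hallucination in miniature.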

So, code generation.

AI code generation is uniformly terrible. If you’re asking for anything more than a simple shell script, what you get likely won’t even compile. But oh, it gets worse. So, so much worse.

AI code generators do not understand code. They merely produce output that resembles the text they were trained on. And sometimes, they hallucinate entire libraries or software packages that do not exist.

Which is perfectly understandable once you get how AI LLMs work.

What’s particularly interesting, though, is that malware writers can write malware, give it the same name as the packages AI code generators make up out of thin air, and devs will download and install them just because an AI chatbot told them to.

Bet you didn’t have that on your “Reasons 2024 Will Suck” bingo card.
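One simple defense against that attack is to refuse to install anything an AI assistant suggests unless the name appears on a vetted allowlist your team maintains. A minimal sketch, with invented package names:

```python
# Hypothetical internal allowlist of packages a team has actually vetted.
VETTED_PACKAGES = {"requests", "numpy", "flask"}

def safe_to_install(package: str) -> bool:
    """Only approve packages a human has vetted, never an AI suggestion alone."""
    return package.lower() in VETTED_PACKAGES

# One real package, one name of the sort an AI might hallucinate.
suggestions = ["requests", "huggingface-cli-tools-pro"]
for pkg in suggestions:
    print(pkg, "->", "ok" if safe_to_install(pkg) else "REJECT: not vetted")
```

It’s crude, but it converts “the chatbot said to pip install it” from an automatic action into a human decision, which is exactly the gap this malware exploits.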

And speaking of things that suck:

I woke this morning to a message from Eunice that a popular, trusted developer had inserted malicious code in an obscure Linux library he maintains, code that would allow him to log in and access any Linux system that his library is installed on.

In February, then again in March, the developer released updates to a library called “XZ Utils.” The update contained weird, obfuscated code—instructions that were deliberately written in a manner to conceal what they did—but because he was a trusted dev, people were just like 🤷‍♂️. “We don’t know what this code he added does, but he seems an okay guy. Let’s roll this into Linux.”

He seems a decent fellow. We don’t know what this code does, but what’s the harm? (Image: Zanyar Ibrahim)

Fortunately it was spotted quickly, before it ended up widely used, so only a handful of bleeding-edge Linux distros were affected, but still:

What the actual, literal fuck, people??!

“This library contains obfuscated code whose purpose has been deliberately concealed. What’s the worst that can happen?”

Jesus. And it’s only March.

Developers should never be allowed near anything important ever.

The Lads from Cyprus: Now on Quora!

Back in March 2016, eight years and one day ago, I published an analysis of a spam ring advertising phony pay-for-play scam “dating sites.” This particular group was responsible for about 90% of the “Hot Lady Wants to F*ck You” spam in circulation. The spam contained links to hacked sites on which the spammers had placed malicious redirectors, which bounced you through a chain of intermediate sites to a page that promised sex, asked a bunch of questions about what you were looking for, then took you to the actual scam site.

I called these guys “the Lads from Cyprus” because invariably the scam dating sites were registered to a shell company organized in Cyprus.

Times have changed, and the Lads from Cyprus have changed with them. While they still do send spam emails, I rarely see them any more—perhaps six or eight times a year, where I used to see them multiple times per day.

Instead, they’ve moved on…to Quora.

The Quora Connection

I spend most of my time on Quora these days. A few years back, I started noticing a large number of profiles with consistent behavior: a profile pic of a hot woman in a kind of blandly generic Instagram pose, answering questions at an enormous rate (sometimes once a minute or more), with answers of a sentence or so that might or might not be related to the question, but that always included a photo of a scantily-dressed woman.

The profiles look like this:

The links (“Latest Nude Videos and Pics,” “Hookup [sic] with me now”) all lead to domains that are registered on Namesilo, usually with ultra-cheap TLDs like “.life,” that—rather amazingly—are still using the exact same templates I saw in 2016.

Go with what works, eh?

Anyway, these sites ask you a bunch of questions, tell you you’re about to see nude photos, then redirect you to a scam dating site—in this case, one called onlylocalmeets.com—where you will immediately see a direct message request the moment you connect, though of course you’ll need to pay if you want to receive it.

It’s actually kind of amazing to me that they’re still running the same scams essentially unchanged, using the same templates they used eight years ago. They’ve clearly got this down to an art—the redirection sites even do some spiffy geolocation and collect as much information from your browser fingerprint as they can before sending you off to the scam site.

There are at least hundreds, possibly thousands, of these fake profiles on Quora, all of which use stolen photos of Instagram models, and all of which link back, through various intermediaries, to the same scam dating site.

I started recording the scam profiles in a Notes file. I deliberately didn’t go out searching for them; instead, I just browsed Quora as I normally do, and made a note whenever I encountered one of these scam profiles (and if I was in the mood, did a reverse image search to see whose photos were stolen for that profile).

There are…a lot of them.

Based on what I’ve seen, I’d say probably 800 on the low end and 1,500 on the high end.

One of them even used stolen Instagram photos of pro golfer and model Paige Spiranac. When I reverse image searched the photos, I looked up the email address of her agent (who was easy to find) and sent an email saying “hey, just so you know, your client’s photos are being used in a catfishing scam, here’s the link.” The profile was banned a few days later, so maybe she or her agent filed a DMCA takedown request.

I find it interesting that this organized spam gang is still at it, still running the same scam they’ve been running for at least ten years, but always looking for new ways to find fresh crops of victims.

I also find it interesting that it works. These scam profiles quickly end up with thousands, sometimes tens of thousands, of followers.

And finally, if you’ve ever wondered what it’s like to be a woman online, just look at the comments to the spam posts, which range from the drearily predictable:

To the completely unhinged:

(And what is it with these people not knowing the difference between “your” and “you’re”? You can be a completely deranged psycho who abuses women online or you can spell, but not, it seems, both.)

To the…well, I don’t know what the fuck this is. I’ve deliberately cropped off this fellow’s username.

Jesus, I do not understand why any woman would ever voluntarily go online.

On the one hand, it’s kinda hard to feel sorry for some of these blokes, who will no doubt be fleeced of all their money. That particular combination of toxic entitlement toward access to women’s bodies and aggressive stupidity makes it really hard to sympathize with the folks being ripped off here.

On the other, any scam is wrong, regardless of the victims it targets.

fly.io, SMS spam, and malware

[Edit 11-Jan-2023] I’ve received a reply from Fly.io; see end of this entry

Ah, a new year has come. Out with the old, in with the new…strategies for phish and malware sites, that is.

And what would phish and malware sites be without complicit webhosts and web service providers?

So today I’m going to dive into an enormous quantity of SMS text message spam I’ve been flooded with over the past couple of months, who’s behind it, and what it’s doing.

It started in mid-November of last year (2023), with a text message saying “The USPS package arrived at the warehouse but could not be delivered” and a link to a site that was just a random collection of letters and numbers. No biggie, I get these all the time. Standard run-of-the-mill phish attempt. If you visit the link, you’re taken to a site that looks like the Post Office, but it’s a fake, of course. They ask you to type in a bunch of personal information, which the people responsible will use to steal your identity, get loans in your name, whatever.

Then I got another. And another. And another. And another. And then dozens more, coming in one, two, three, four, sometimes five or more a day.

And they haven’t stopped.

Text message after text message after text message. “You’ve been infected with viruses.” “Your cloud service has been terminated.” “We couldn’t deliver your package.”

All of them with URLs that looked like random strings of letters and numbers.

So my spidey sense was activated, and I looked up all those URLs.

Surprise, surprise, every single one is hosted on the same web service provider, an outfit called fly.io.

And there are a lot of them.

*** CAUTION *** CAUTION *** CAUTION ***
THESE LINKS ARE LIVE AS OF THE TIME OF WRITING THIS. Many of these links will bring you to malware or phish sites. DO NOT visit these links if you don’t know what you’re doing.

I started collecting the URLs from the text messages:

  • http://eonmpxm.com/OR73bg5L
    FakeAV malware site
  • http://wkcetku.com/G1LO5X38
    Fake “government subsidy” site
  • http://nztkspy.com/MK2RVeJg
    FakeAV malware site
  • http://lkxsxef.com/KJeQ09Vp
    FakeAV malware
  • http://klxnitq.com/oxp18G47
    Equifax phish
  • http://epgguli.com/0M37VmkO
    McAfee phish
  • http://yonxutn.com/1MZbOrZv
    FedEx phish
  • http://zveeyou.com/7Xy1E8G8
    FakeAV malware
  • http://mirumbf.com/KJeQ09Vp
    FakeAV malware
  • http://qjkwmww.com/yng4eExR
    Fake USPS phish
  • http://wnddwet.com/KJe40qm5
    FakeAV malware
  • http://pdxftwt.com/ER39R0rR
    XFinity phish
  • http://plefaas.com/rNzdEAEW
    FakeAV malware
  • http://oitbaon.com/A3B6vBOe
    FakeAV malware
  • http://napiyib.com/nQ0mJKoZ
    FakeAV malware
  • http://kozqtlp.com/vGeO0XmX
    Xfinity phish
  • http://ugokulc.com/KJM89Mem
    USPS phish
  • http://iqbyojt.com/KJeQ09Vp
    FakeAV malware
  • http://sobagiw.com/nQVA0bVp
    Xfinity phish
  • http://oosjrjt.com/GRG8ML9n
    FakeAV malware
  • http://xqzfnuh.com/ZjgL4GbE
    Xfinity phish
  • http://tecvxzo.com/5aannZO7
    Google phish
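Every URL in that list fits one rigid template: http://, seven random lowercase letters, .com, then an eight-character alphanumeric path. Here’s a sketch of a filter keyed to that template; the pattern is inferred purely from the list above, so treat it as a heuristic that the spammers can change at any time:

```python
import re

# Template observed in the campaign URLs above:
# seven lowercase letters, .com, then an 8-character mixed-case path.
PATTERN = re.compile(r"^http://[a-z]{7}\.com/[A-Za-z0-9]{8}$")

def looks_like_campaign_url(url: str) -> bool:
    """True if the URL matches the observed SMS-spam URL template."""
    return PATTERN.fullmatch(url) is not None

print(looks_like_campaign_url("http://eonmpxm.com/OR73bg5L"))  # True
print(looks_like_campaign_url("https://fly.io/docs"))          # False
```

A filter like this won’t catch the next campaign, but it makes the current one trivially machine-detectable, which raises the obvious question of why the hosting provider’s abuse tooling didn’t catch it.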

I notified fly.io’s abuse team about the problem. And notified them. And notified them. And notified them. Each time, I received an identical reply, from a guy calling himself “Matt Braun,” saying only “I have let our customer know. Thanks!”

Matt Braun doesn’t appear to have grasped that their customer is the phisher. And lately, I haven’t even received these replies; they haven’t acknowledged recent abuse reports in days. Meanwhile (of course) all the links remain active because (of course)…their customer is the phisher.


Okay, so how does the scheme work?

I’ve spent some time mapping out the network. The quick overview:

  1. A text message is mass broadcast, advertising a URL on fly.io.
  2. Marks who click on the link in the message are redirected to a site called “track.palersaid.com,” hosted on Amazon AWS. Track.palersaid.com looks at the incoming fly.io URL, the type of computer or smartphone you’re using, and probably other stuff, then sends you on to another site.
  3. This site, track.hangzdark.com, is another tracking and redirection site also hosted on Amazon AWS.
  4. From there, marks are redirected to the actual target site, which might be a fake FedEx page, a fake UPS page, a fake “virus scan” page, or more. There are a lot of these destinations: read.messagealert.com, kolakonages.com, aca.trustedplanfinder.com, and more. Some of these destination sites are, no surprise here, hosted on Namecheap, which is in my opinion one of the scuzziest of malware and spam sewer hosts.
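The four steps above can be sketched as a hop table rather than live requests. The hostnames are the ones observed in this network, but the table itself is illustrative: I map only one of the many fly.io entry URLs, and the final destination varies per campaign:

```python
# Illustrative hop table for the redirect chain described above.
# Real resolution would require following live HTTP redirects.
HOPS = {
    "http://eonmpxm.com/OR73bg5L": "track.palersaid.com",   # fly.io entry URL
    "track.palersaid.com": "track.hangzdark.com",           # AWS tracker #1
    "track.hangzdark.com": "read.messagealert.com",         # AWS tracker #2 -> scam page
}

def trace(url, max_hops=10):
    """Follow the hop table until a final destination (or hop limit) is reached."""
    chain = [url]
    while chain[-1] in HOPS and len(chain) <= max_hops:
        chain.append(HOPS[chain[-1]])
    return chain

print(" -> ".join(trace("http://eonmpxm.com/OR73bg5L")))
```

Mapping the chain this way makes the architecture obvious: the fly.io URLs are disposable entry points, while the AWS-hosted trackers are the stable core of the network.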

Example destination page

How the network works

This bears a strong resemblance to some of the malware and spam networks I’ve mapped out in the past, though the delivery network (SMS text messages) and the web service provider (fly.io) are different.

If you get these text messages, do not follow the links. If you are also seeing these messages, please let me know in a comment! I would love to know how big this network is. Fly.io seems reluctant to shut down these phishers, which leads me to wonder if they aren’t making quite a bit of money from them.


[Edit 11-Jan-2023] I’ve received a reply from Fly.io’s Abuse team:

Thank you for your patience with us over the holiday, and some follow up details.

Usually, when we have reports of a spammer or abuser on our platform, our internal systems have a host of signals that we can look at to verify the report and take the appropriate action. In the vast majority of cases the signals are clear and unequivocal. However, in this instance, the signals were entirely the opposite: all signs pointed to a seemingly-legitimate user.

Our systems are set up for “either you are a customer or you are not”, and banning a customer would mean immediate and irrevocable loss of that customer’s data. That is not a risk we take lightly, so we were not going to flip the switch and risk blowing away someone’s information without a smoking gun. I expect you and I have both seen dozens of those posts on Hacker News or elsewhere where an innocent user writes “Company has deleted my entire account without warning and I’ve lost years of data”. We don’t want to do that to someone.

So where does that leave us? The apparent reason for the behavior/signal disconnect is that it was our customer’s customer doing the abuse. Our customer has committed to evicting their customer today, which should put an end to the redirection through our systems (though, unfortunately, I don’t expect that’ll have any impact on the SMS spam). If it doesn’t resolve things, let us know. We’re back online after the holiday and more in a position to chase things down.

Additionally, there were two other concerns we need to address internally:
1) We don’t have the ability to suspend users. This is something that I’m going to pursue as we need something more nuanced than our all-or-nothing approach so that we’re able to move on complaints sooner without risk of harming someone innocently caught in the middle of things.
2) We did not follow up with the customer as often as we should have after their initial acknowledgement of the problem and indication that they would address it. That’s a coordination process breakdown exacerbated by people taking time off during the holidays and not having the usual “obviously-abuse” signals. Additionally, we need to come up with an approach to our abuse ticketing system that allows for long-lived cases.

You can email me, personally, if you feel you aren’t getting attention on this (email redacted) and I’m sincerely sorry for the delay in letting you know where things stood or getting things sorted with the customer.

It seems Fly.io is one of the good guys.

The spam stopped for a few days, though it has resumed again. This time, the SMS spam domains are hosted on Alibaba rather than Fly.io.

Hacking as a tool of social disapproval

“The street finds its own uses for things.” —William Gibson, Burning Chrome

Last year, my wife, my co-author, and I launched a new podcast, The Skeptical Pervert. We talk about sex…and more specifically, we talk about sex through a lens of empiricism and rationality.

The Skeptical Pervert’s website runs WordPress. Now, I’ve been around the block a few times when it comes to web security, and I know WordPress tends to be a rather appetizing target for miscreants, so I run hardened WordPress installs, with security plugins, firewalls that are trained on common WordPress attack vectors, and other mitigations I don’t talk about openly.

I run quite a few WordPress installs. My blogs on franklinveaux.com and morethantwo.com run WordPress. So does the Passionate Pantheon blog, where Eunice and I discuss the philosophy of sex in a far-future, post-scarcity society. In addition, I host WordPress blogs for friends, and no, I won’t tell you who they are, for reasons that will soon become clear.

I automatically log hack attacks, including failed login attempts, known WordPress exploits, and malicious scans. I run software that emails me daily and weekly statistics on attacks against all the WordPress sites I own or host. I also subscribe to WordPress-specific infosec mailing lists, so I am aware of the general threat background.
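The daily attack summaries conceptually boil down to counting hostile requests per site. Here’s a minimal sketch of that counting step (the log format, site names, and helper are all hypothetical; real installs typically get this from a security plugin or firewall log):

```python
import re
from collections import Counter

# Hypothetical sketch: count failed WordPress login attempts per site.
# Assumes log lines shaped like "site | access-log entry".
LOGIN_POST = re.compile(r'POST /wp-login\.php')

def count_login_attempts(log_lines):
    """Return {site: attempt_count} for brute-force login POSTs."""
    attempts = Counter()
    for line in log_lines:
        site, _, entry = line.partition(" | ")
        if LOGIN_POST.search(entry):
            attempts[site] += 1
    return dict(attempts)

# Illustrative sample data only:
sample = [
    'example-a.com | 203.0.113.9 - - "POST /wp-login.php HTTP/1.1" 200',
    'example-a.com | 203.0.113.9 - - "POST /wp-login.php HTTP/1.1" 200',
    'example-b.com | 198.51.100.4 - - "GET /about/ HTTP/1.1" 200',
]
print(count_login_attempts(sample))  # {'example-a.com': 2}
```

A real deployment would also track known exploit URLs and scanner fingerprints, not just login attempts, but the aggregation logic is the same.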

Because WordPress is such a common target—it’s the Microsoft Windows of the self-hosted blog world, with everything that implies—any WordPress site will get a certain low level of constant probes and hack attempts. It’s just part of the background noise of the Internet. (If you run WordPress and you’re not religiously on top of security updates, by the way, you’ve already been pwn3d. I can pretty much guarantee it.)

The fact that I host WordPress sites that aren’t publicly connected to me gives me a good baseline reading of this background noise, which I can use for comparison against attacks on sites that are publicly connected with me.
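The comparison itself is simple arithmetic: average the attack counts across the unaffiliated sites to get a baseline, then express each public site’s count as a multiple of it. A sketch, with purely made-up numbers:

```python
def attack_multiplier(public_counts, baseline_counts):
    """Express each public site's daily attack count as a multiple of the
    baseline (the mean across hosted-but-unaffiliated sites)."""
    baseline = sum(baseline_counts.values()) / len(baseline_counts)
    return {site: round(n / baseline, 1) for site, n in public_counts.items()}

# Hypothetical numbers, for illustration only (mean baseline = 50/day):
baseline_sites = {"hosted-1": 40, "hosted-2": 60, "hosted-3": 50}
public_sites = {"skepticalpervert.com": 1000, "franklinveaux.com": 150}
print(attack_multiplier(public_sites, baseline_sites))
# {'skepticalpervert.com': 20.0, 'franklinveaux.com': 3.0}
```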

And the results…well.

In all the years I’ve been on the Web—and I started running my own Web sites in the mid-1990s—I have never seen anything even remotely close to the constant, nonstop barrage of attacks against the Skeptical Pervert site. Joreth and Eunice are probably quite sick of my frequent updates: “Well, the firewall shows over a thousand brute-force hack attempts against the Skeptical Pervert site so far today and it isn’t even noon yet” (seriously, that’s a thing that happened recently).

Here’s a graph showing what I mean. This graph covers one week, from June 13, 2022 to June 20, 2022. The “baseline” in the graph is an average of several WordPress sites I host that aren’t in any way connected to me in the eyes of the Internet at large—I don’t run them, I don’t put content on them, my name isn’t on them, I merely host them.

Note that the attacks don’t scale with traffic; the More Than Two blog has the most traffic, followed by franklinveaux.com, then the Passionate Pantheon blog, then the Skeptical Pervert.

So what to make of this?

Part of it is likely the long-running social media campaign my ex has been running. Attacks on franklinveaux.com and morethantwo.com increased in the wake of her social media posts.

But that doesn’t explain what’s happening with the Skeptical Pervert, which has turned out to be targeted to an extraordinary degree.

Now, I don’t know who’s attacking the site, or why, so this is speculation. It’s hard to escape the idea, though, that when a site and podcast explicitly about sex, co-hosted by two women of color, talking about non-traditional sexual relationships is targeted, at least part of the answer might simply be the same old, same old tired sex-negative misogyny and racism we see…well, everywhere, pretty much. The fact that my ex doesn’t like me (and will say or do anything to get other people not to like me) doesn’t explain what’s happening here.

It’s easy to blame conservative traditionalists, but Eunice reminded me there’s another factor at work as well. The Skeptical Pervert approaches sexuality from a rational, evidence-based, skeptical lens. In the United States, there’s a stubborn streak of misogyny amongst the dudebros of the skeptics community. A podcast with two women that looks at sex from a highly female-focused, feminist point of view taking on the mantle of skepticism? It’s possible there are dudebros who will perceive that as an encroachment into their space.

In short, I don’t think this is about me. I think this is about women talking openly about real-world non-traditional sex, and getting the same pushback that women always get when they dare to do that.

If the podcast were just me, or me with obviously male co-hosts, I don’t think the level of Web attacks would be anywhere near the same.

The street finds its own uses for things. In the hands of people threatened by or frightened of non-traditional voices, the Internet has become a safe, anonymous tool of harassment.

Chasing Down a Malware Network

A few days ago, I leveled a Horde frost mage to max level in World of Warcraft. Anyone familiar with the game knows exactly what happens next: the mad scramble to gear up a new Level 60 to be able to run mythics and raids, so that you can get even more loot to run higher-level mythics and raids…thus does the MMO hamster wheel go ’round and ’round.

So I did what every newly-minted level 60 does, of course: I turned to Google. My new 60 has a rather abysmal heirloom staff, so my first priority was finding the best way to loot better weapons.

That’s when it started.

Take a look, dear readers, at this Google search, and see if you can tell me what’s peculiar.

These results outstrip some of the most popular WoW sites on the Net, which is a bit peculiar itself…but more to the point, what are they doing on a site about pilates? And a German photography site? And why are they all called “untitled”?

Curious, and smelling something weird and sinister, I did what I always do when I see something that might be the tip of some kind of mass hack or compromise: I clicked on the links.

And each one of them bounced me back to a new Google page.

Even more curious, I copy-pasted one of the links (after unmangling it, of course; damn you, Google, for mangling link URLs in your search links), and saw:

This is “keyword stuffing”—a page designed to appeal to Google, not to any human reader, simply by being crammed full of popular Google keywords and search phrases.

But look at the bottom of the page. It’s a bunch of randomly-generated three-character links.

Curiouser and curiouser.

Now well and truly engaged, cup of tea forgotten next to my keyboard, I logged out of WoW and fell down the rabbit hole.

Where do those links point? To other pages stuffed with keywords, of course.

This is how these results ranked so high in Google Search, above even well-regarded WoW sites like Icy Veins: Automated black hat SEO. Each page is populated with automatically-generated links to other pages also stuffed with keywords, which in turn point to still other pages stuffed with keywords…at least hundreds, possibly thousands, in all.
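Mapping a farm like this is just a breadth-first walk over the generated links. A minimal sketch (the page contents here are mocked in a dict; an actual crawl would fetch each page over HTTP):

```python
import re
from collections import deque

# Matches the randomly-generated three-character paths seen on the farm pages.
LINK = re.compile(r'href="(/[a-z0-9]{3})"')

def map_link_network(pages, start):
    """Breadth-first walk of a keyword-stuffed link farm.
    `pages` maps path -> HTML; returns all paths reachable from `start`."""
    seen = {start}
    queue = deque([start])
    while queue:
        html = pages.get(queue.popleft(), "")
        for path in LINK.findall(html):
            if path not in seen:
                seen.add(path)
                queue.append(path)
    return seen

# Mocked fragment of a farm: each page links to other generated pages.
farm = {
    "/abc": '<a href="/xk2">…</a> <a href="/q9z">…</a>',
    "/xk2": '<a href="/abc">…</a>',
    "/q9z": '<a href="/m41">…</a>',
    "/m41": "",
}
print(sorted(map_link_network(farm, "/abc")))  # ['/abc', '/m41', '/q9z', '/xk2']
```

Run against a live network, a walk like this is how you discover that “hundreds, possibly thousands” figure: the frontier just keeps growing.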

But why?

The ‘why’ is suggested by some very peculiar behavior of these pages.
