AI Considered Silly (and Harmful)

I don’t know when it happened. I know when I noticed it. I was using the Facebook app on my phone while I was in Florida working on getting a solar battery setup in my wife’s RV.

“Huh, what’s this?” I thought as I looked through the posts on my profile. “There are a bunch of buttons beneath each post, asking followup questions.” So I clicked one.

Dear God.

So you know how ChatGPT will spout the most absolutely flat-out bonkers bullshit in this weird, bland, “corporate email meets the Institute of Official Cheer” voice? Like asserting with confidence that Walter Mondale graduated from Princeton University (he didn’t), or inventing hyperlinks to imaginary reviews of a Honda motorcycle that doesn’t exist?

Meta, in its ongoing effort to cram LLMs into every orifice of the great throbbing pustulent Facebook experience, is wedging LLM chatbots, often with the aid of a crowbar, onto the bottom of Facebook posts (but only, at least so far, in the app; I don’t see this on the browser).

And the things it imagines are sometimes…weird.

I was called for jury duty a couple of weeks ago. The waiting room featured a stash of complimentary fidget spinners (yes, seriously), a fact Facebook’s AI insisted wasn’t the case.

It got way weirder, though, when I posted that the first draft of my first novel with my Talespinner was done:

AI invented a question that it couldn’t answer, then answered it with nonsense. “I don’t know who Kitty Bound is, so let me ramble about unrelated authors who go by ‘Kitty.’” And the thing is, the question buttons are invented by the AI.

It doesn’t know who Kitty Bound is (understandably, this is the first novel we’re attempting to get published together), but it will cheerfully say “click here to learn more about Kitty Bound” and then say “Kitty Bound’s work isn’t well-represented in search results, so ima go Hal 9000 with ADHD and tell you things about completely unrelated people.”

Would you like to know how to make an omelet? Yes? Well, I can’t tell you how to make an omelet, but here’s a paragraph about maintaining gas-powered wood chippers.

And the thing is, Facebook is the shining example of AI success.

Facebook is one of the very few companies doing more than forklifting venture capital dollars into a furnace by the pallet. The proponents of AI say it’s going to change the world, and they’re right…just not with hallucination engines designed to pass the Turing test. (I used to think the Chinese room critique of AI was nonsense; now I’m not so sure. I might write an essay about that at some point, check this space.)

AI is making crazy money for Facebook, but not in chatbots. They’re using AI engines to drive ad placement, consumer segmentation, and demographic analysis, and it works. About two or three years ago, Facebook suddenly started showing me ads for products I’d never shown any interest in as far as I know…and I, get this, started buying from Facebook ads.

AI, in the right context, works.

But that sort of AI isn’t sexy. It doesn’t get column inches in newspapers. Chatbots do…but for all the wrong reasons.

My Talespinner and I may have invented the genre of hyperurbanized retrofuturist court-intrigue gangster noir. Do a search for that phrase and you’ll get three results, of which (checks notes) three are by us. Chatbots can be forgiven for not knowing what that is, but hot damn, it doesn’t stop them from spouting confident-seeming nonsense about what it is. This is some classic Chinese room shit.

And don’t get me started on whatever this fresh bucket o’ slop is:

If that’s not silly enough, try this:

Want even sillier? How about this:

Me: “I was cranky because I had to drive overnight.” The AI’s invented question button: “Why was I cranky?” The AI’s answer: “You were cranky because you had to drive overnight.”

This would be silly if it weren’t for the fact that GenAI is almost unbelievably expensive, needing a trip through the entire neural network for each token generated. The server farms that ooze this pap are warmed by furnaces that burn hundred-dollar bills.
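To make the cost claim concrete, here’s a back-of-the-envelope sketch using the common rule of thumb that a dense transformer spends roughly two floating-point operations per parameter per generated token. The model size and reply length below are illustrative assumptions, not figures from any particular product:

```python
# Rough per-reply compute for autoregressive generation.
# Rule of thumb (assumption): a dense transformer performs about
# 2 * N FLOPs per generated token, where N is the parameter count,
# because every token requires a full forward pass through the network.

params = 175e9                 # hypothetical GPT-3-class model: 175B parameters
flops_per_token = 2 * params   # ~3.5e11 FLOPs for each token generated
tokens = 500                   # an illustrative chatbot reply length

total_flops = flops_per_token * tokens
print(f"{total_flops:.2e} FLOPs for one {tokens}-token reply")
```

Multiply that by billions of replies a day and the hundred-dollar-bill furnace stops sounding like hyperbole.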

That’s the big problem here. The AI chatbots don’t pay for themselves, not even close. There’s no business case for them: 95% of companies investing in AI don’t show positive returns. There are currently 498 AI startups valued at over a billion dollars, with a combined valuation of $2.7 trillion, even though most are producing zero profit and have little hope of producing profit any time in the future.

That’s ludicrous.

It’s not worth $2,700,000,000,000 to tell people “why were you cranky when driving overnight made you cranky? Because you get cranky when you drive overnight.”

On top of the economic cost, there’s a social cost as well. Scammers, spammers, fraud artists, conmen, and political adversaries use LLMs to refine and hone their message for maximum emotional manipulation. Political activists use GenAI to create deepfakes. We as a society do not have a cognitive immune system that can deal with this, and I think it will be generations before we do.

But hey, in that brief moment before they go bankrupt, 498 people will be paper billionaires.