2024: The Year of Infinite Infosec Fail

First up in today’s game of “who fed it and who ate it”: Artificial Intelligence.

AI is everywhere. AI chatbots! AI image generators! And now, AI code assistants that help developers write computer programs!

Only here’s the thing: AI doesn’t know anything. A lot of folks think these AI systems are, like, some sort of huge database of facts or something. They aren’t. They’re closer to supercharged versions of the autocomplete on your phone.

Which means if you ask an AI chatbot or code generator a question, it does the same thing autocomplete does: fills in syntactically correct words that are likely to come after the words you typed. There is no intelligence. There is no storehouse of facts it looks up.
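Don’t believe me? Here’s a toy version of the trick in Python: a word-level autocomplete built from nothing but “which word followed which word” counts. Real LLMs are neural networks conditioning on vastly more context, but the core move is the same. Emit likely next words; don’t look up facts. (The training corpus here is obviously made up.)

```python
import random
from collections import defaultdict

# Toy autocomplete: record which word follows which in some "training" text.
# Illustrative only; real LLMs are neural nets, not lookup tables of bigrams.
training_text = "the bridge is golden the bridge is long the gate is golden".split()

follows = defaultdict(list)
for prev, nxt in zip(training_text, training_text[1:]):
    follows[prev].append(nxt)

def complete(word, n=5):
    """Extend `word` by repeatedly sampling a plausible next word."""
    out = [word]
    for _ in range(n):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        # random.choice over a list with repeats = frequency-weighted sampling
        out.append(random.choice(candidates))
    return " ".join(out)

print(complete("the"))  # e.g. "the gate is golden the bridge"
```

Note that nothing in there knows what a bridge is. It will happily produce sentences that are perfectly fluent and perfectly false.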

That’s why AI is prone to “hallucinations”: confidently stated, completely invented falsehoods that the AI produces because the words it uses are statistically associated with the words you typed.

AI Fembot says: The Golden Gate Bridge was transported for the second time across Egypt in October of 2016. (Image: Xu Haiwei)

So, code generation.

AI code generation is uniformly terrible. If you’re asking for anything more than a simple shell script, what you get likely won’t even compile. But oh, it gets worse. So, so much worse.

AI code generators do not understand code. They merely produce output that resembles the text they were trained on. And sometimes, they hallucinate entire libraries or software packages that do not exist.

Which is perfectly understandable once you get how LLMs work.

What’s particularly interesting, though, is that malware writers can take malicious code, publish it to public package registries under the very names AI code generators make up out of thin air, and devs will download and install it just because an AI chatbot told them to.

Bet you didn’t have that on your “Reasons 2024 Will Suck” bingo card.
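If you absolutely must install something a chatbot suggested, at least sniff the package first. Below is a rough sketch of the kind of sanity check I mean, written against PyPI’s public JSON API (the endpoint is real; the function name and the heuristics are mine, and they are heuristics, not a safety guarantee). A hallucinated name an attacker just registered tends to look like a brand-new package with a single release and no history.

```python
import json
import sys
import urllib.error
import urllib.request

def package_smell_test(name: str) -> None:
    """Rough pre-install sniff of a PyPI package: does it exist, and how new is it?"""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url) as resp:
            data = json.load(resp)
    except urllib.error.HTTPError as err:
        # 404 means nobody has registered it (yet). Possibly a hallucination.
        print(f"{name}: not on PyPI (HTTP {err.code}). Maybe hallucinated?")
        return

    uploads = [
        f["upload_time_iso_8601"]
        for files in data["releases"].values()
        for f in files
    ]
    if not uploads:
        print(f"{name}: registered but has no uploaded files. Suspicious.")
        return
    # A package with years of release history is less likely to be a fresh squat.
    print(f"{name}: {len(data['releases'])} releases, first upload {min(uploads)}")

package_smell_test(sys.argv[1] if len(sys.argv) > 1 else "requests")
```

None of which will save a dev who pipes the chatbot’s output straight into pip, but hey.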

And speaking of things that suck:

I woke this morning to a message from Eunice that a popular, trusted developer had inserted malicious code into an obscure Linux library he maintains: code that would let him remotely log in to any Linux system the library was installed on.

In February, then again in March, the developer released updates to a library called “XZ Utils.” The updates contained weird, obfuscated code (instructions deliberately written to conceal what they did), but because he was a trusted dev, people were just like 🤷‍♂️. “We don’t know what this code he added does, but he seems an okay guy. Let’s roll this into Linux.”

He seems a decent fellow. We don’t know what this code does, but what’s the harm? (Image: Zanyar Ibrahim)
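One detail that came out afterward: part of the malicious build machinery shipped only in the release tarballs, not in the project’s public git repository. Which suggests a crude check anyone packaging a release could run: list what’s in the tarball that isn’t in the tagged source. Here’s a minimal sketch (the tarball name, repo path, and tag are stand-ins; note that autotools tarballs legitimately contain generated files, so expect some benign extras, the point is to actually look at them).

```python
import subprocess
import tarfile

TARBALL = "xz-5.6.1.tar.gz"  # release tarball as downloaded
REPO = "./xz"                # local clone of the upstream repository
TAG = "v5.6.1"               # tag the tarball claims to correspond to

# Everything inside the tarball, with the leading "xz-5.6.1/" prefix stripped.
with tarfile.open(TARBALL) as tar:
    tarball_files = {
        member.name.split("/", 1)[1]
        for member in tar
        if member.isfile() and "/" in member.name
    }

# Everything git says is in the tagged source tree.
repo_files = set(
    subprocess.check_output(
        ["git", "-C", REPO, "ls-tree", "-r", "--name-only", TAG], text=True
    ).splitlines()
)

for extra in sorted(tarball_files - repo_files):
    print("in tarball but not in git:", extra)
```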

Fortunately it was spotted quickly, before it ended up widely used, so only a handful of bleeding-edge Linux distros were affected, but still:

What the actual, literal fuck, people??!

“This library contains obfuscated code whose purpose has been deliberately concealed. What’s the worst that can happen?”

Jesus. And it’s only March.

Developers should never be allowed near anything important ever.

