Identity Theft

Questionable intelligence: AI as a propaganda tool

Many times when I hear new information, I feel like something is off, but I’m not quite sure what it is. For example, when I learned that Israel had been the first nation to recognize Somaliland, that felt off. Then I heard that’s where they plan to deport Gazans. Hmm, still seems dubious. Then I saw Carlos Latuff’s political cartoon above clarifying how useful Somaliland could be for attacking Yemen. OK, now I think I understand.

When the dubiously named “Artificial Intelligence” burst onto the scene, I wondered at the oxymoronic terminology, which quickly gave way to the acronym AI. Artificial sugar, artificial turf, and artificial faces à la Mar-a-Lago turned out to be as yucky as their names suggested: carcinogenic, more productive of sports injuries, and just downright ugly despite being insanely expensive, respectively.

AI was supposed to offer the ability to review vast troves of knowledge and produce useful ideas by synthesizing what was “learned.” This quickly led to several critiques of AI, including a bromide from the early days of digital technology: “garbage in, garbage out.” In other words, if you feed bad data into a computer, you will get bad (unreliable, untrue) results.

To give an example, if the elongated muskrat sets up Twitter’s AI to mostly parrot his own tweets about current events, how is that even useful, other than for fooling 9th graders who have a research paper to write? As he gave it the embarrassingly stupid name Grok (based on the iconic Robert Heinlein novel Stranger in a Strange Land that boomers read back in the day), perhaps it’s not a good example. Grok, as I recall, was a verb meaning “understand something so deeply that you become one with that thing” (Zen Buddhism and its concepts were big in the ’60s).

An online comment about AI that I can’t now find, but that has stayed with me, went something to this effect: if you’re asking ChatGPT questions like “What would a reasonable response to xyz sound like?” it is literally going to produce that. Don’t get mad when it’s not the truth. You didn’t ask for the truth; you asked what a reasonable response would sound like.

So, as with most technologies, the skill of the user will be key. I recall reading an article in 2023 that claimed I, personally, had organized the nationwide campaign Rage Against the War Machine, when all I had actually done was organize my state’s participation in said campaign. An editor friend explained that the article was most likely generated by AI, with no human being doing the labor required for reading comprehension.

https://futurism.com/artificial-intelligence/ai-police-report-frog

China predictably launched its own AI system, DeepSeek, which is both cheaper and far less of an energy hog than the Silicon Valley versions that are draining water tables as we speak. In at least one case China has built a data center in the ocean in order to use seawater for cooling the servers churning out “intelligence,” because said servers churn out at least as much heat as they do intelligence. That solution is also likely to be ecologically disastrous. (Do I need to explain why boiling the ocean is not a good idea?)

One of today’s many paradoxes is that we’ve watched corporations protect their content against use by people who have not paid for the privilege, and lived through several copyright infringement dramas around consumers downloading music or movies without paying to do so. But AI scrapes their data and your data and my data (even data we thought we had protected, like our health records) to achieve the volume of input needed to make it work. Graphic artists, musicians, and writers are indignant that their labor is effectively stolen to fund someone else’s profits. “Copyright be damned!” say the AI barons.

Which leads us to the problem and, I suspect, the deep reason that AI is being shoved down our throats at every turn these days: deepfakes. These are images, often videos, produced by AI to impersonate someone and distort their ideas in order to fool viewers. Narrative managers for the empire despaired when so many of us turned away from legacy broadcast and print media and toward alternative media for information and analysis. One response has been to buy up social media platforms like TikTok, or even mainstream platforms like CBS News, in order to bring the content there back under control. The other response has been to create AI videos of Yanis Varoufakis or John Mearsheimer saying things that they in fact did not and would not say.

I’d like to think I could tell the difference, but I’m probably wrong about that. (My saving grace may turn out to be the fact that I dislike watching videos for informational purposes and almost always choose reading over viewing.) Reports are that Mearsheimer had to watch for a few minutes before even he could be sure that it was a deepfake version of himself. But neither of us is a very young or very inexperienced consumer. What of the middle school kids growing up in a jungle of fakes interspersed with reality? How will they learn to tell the difference?

I’m pretty sure imperial narrative managers are hoping they won’t.

We were always warned about identity theft in terms of someone impersonating us in financial transactions, which is a particular form of fraud. I feel much more concerned about someone impersonating me to fool people into thinking that I, for example, support Israel’s genocide in Palestine. I used to think that my online presence and record of publication would protect me from being misrepresented. In fact, the opposite may turn out to be true.

Only someone with no online presence is safe from this form of identity theft. And in an age when thought crimes are punished by economic sanctions, incarceration, and even assassination, how dangerous is it to be impersonated?

Or maybe the confusion inherent in life lived inside a hall of mirrors could actually keep us safe?