The Commenters Are the Slop

How a two-word meme became a mind virus—and what forty years of manufacturing perception taught me about the people who caught it

March 24, 2026

So here’s where we are.

I’ve spent four decades making images for a living. I built a membrane—durable, permeable—that lets me evaluate visual information with a degree of discrimination that was tested, daily, by some of the most demanding clients and audiences on the planet.

I now apply that discrimination to a new set of tools, on a YouTube channel that exists to help people see through the manufactured perceptions that saturate modern life.

And what I’m seeing, in my own comments section, is a real-time case study of the very thing I’m trying to teach people to recognize. A marketing term (“artificial intelligence”) has been absorbed into common usage, shaping belief without anyone noticing. A meme derived from that term (“AI slop”) has achieved escape velocity, replicating across the culture.

The meme functions as a thought-terminating cliché, foreclosing the analysis it pretends to perform. And the people deploying it have become, unwittingly, the most effective distribution channel for the very frame they think they’re fighting.

The real slop isn’t the images. The real slop is the degradation of our capacity to think clearly about what’s in front of us—the replacement of discrimination with labels, of analysis with memes, of perception with reflex.

That’s the corruption of the informational commons. And it’s happening in plain sight, two words at a time.


Here’s a thing that happens now, with a regularity that has become its own kind of data.

I’ll spend three weeks building a video essay. I’ll write a script that traces, say, the financial mechanics of a potential AI bubble collapse, sourcing it from Goldman Sachs research and Sequoia Capital memos and Federal Reserve working papers. I’ll construct the visual architecture with the same compositional instincts I once used to light Gwen Stefani or frame a Korn music video that would go on to win an MTV Video Music Award. I’ll generate images in a style I’ve spent months refining—a specific pre-digital surrealism drawn from the design houses of the 1960s and ’70s, Hipgnosis and Pentagram and Push Pin Studios, images that are deliberately, almost aggressively analog in their aesthetic DNA. I’ll sequence these images so that each one does what the best infographics have always done: make an abstract idea visible, make a dense argument feel like something.

Then I’ll publish it. And within hours, sometimes minutes, a comment will appear:

“AI slop.”

Two words. No elaboration. No engagement with the argument. No evidence that the commenter watched past the thumbnail. Just a meme, deployed like a stamp on an envelope, and then the commenter moves on—presumably to stamp the next envelope, and the next.

I want to tell you what I’ve come to understand about those two words. Not because my feelings are hurt—though I’d be lying if I said the repetition wasn’t grating—but because I’ve spent forty years manufacturing perception for a living, and what I’m watching happen in my comments section is, once you see it clearly, one of the more elegant and disturbing examples of the very phenomenon my channel exists to expose.


Let me tell you who’s looking at these comments.

I got my first camera at fifteen—a Canon AE-1, a birthday present. That was 1978. By twenty-one I was working professionally. Over the next three and a half decades, I shot motion pictures on Panavision and Arriflex cameras, on 35mm and 16mm film. I made images for Nike and Bank of America. I was the cinematographer on music videos for Green Day, Korn, No Doubt, and Avril Lavigne. I apprenticed (as the 1st Assistant Cameraman) on films you may have seen—Basic Instinct, Se7en, The Game. I worked in oil painting, charcoal, watercolor, and still photography in formats from SX-70 Polaroid to large-format 8x10.

I’m not listing this to impress you. I’m listing it because the relevant fact isn’t the résumé—it’s what the résumé produces in a person over time.

When you spend decades making images at that level, for those clients, under that kind of scrutiny—Fortune 100 executives, ad agency creative directors, record label heads, feature film directors, and eventually millions of anonymous viewers—you develop something I’ve come to think of as a membrane. Not a wall. Not a moat. A membrane.

It’s durable. It has to be. You can’t do this work if every piece of criticism sends you into a spiral. But it’s also permeable. It has to be, because the moment you stop taking in new information—the moment your confidence calcifies into defensiveness—you’re finished. You become the aging cinematographer still lighting scenes like it’s 1987, wondering why nobody’s calling.

So the membrane filters. It lets through what’s useful and blocks what’s noise. And the ability to tell the difference—between a genuine critique that contains information and a reflexive dismissal that contains none—is not incidental to the craft. It is the craft, or at least a large and underappreciated part of it.

When I look at the images on my channel, I’m looking through that membrane. And what I see are images made by someone applying four decades of visual literacy, compositional instinct, and semiotic training to a new tool. Are they different from what I’d produce with a Panavision camera and a full crew of grips and electricians and a hundred-thousand-dollar lighting package? Of course they are. They’re also different from what I’d produce with a charcoal stick, and nobody would argue those two media should be evaluated by the same criteria.

The question isn’t whether AI image generation is the same as traditional cinematography. It obviously isn’t. The question is whether the person directing the tool has the discrimination to use it well. And on that question, I’ll put my judgment up against the anonymous commenter’s any day of the week.


I didn’t write this essay to defend my images.

The images are fine. Anyone who actually watches the videos can see that.

I wrote this essay because the comment “AI slop” is doing something far more interesting and far more troubling than criticizing my work. And understanding what it’s doing requires the same set of skills I spent a lifetime developing—the ability to see through a manufactured surface to the machinery underneath.


Let’s look at the machinery.

In December 2025, Merriam-Webster named “slop” its Word of the Year, defining it as “digital content of low quality that is produced usually in quantity by means of artificial intelligence.” A few weeks later, the American Dialect Society followed, in a vote by more than three hundred attendees at its annual meeting. Macquarie Dictionary in Australia had already named “AI slop” its pick. The Economist joined the chorus.

Four institutions that watch the language for a living, independently, converged on the same term. That’s not a coincidence. It’s a signal—evidence that the phrase had achieved a kind of escape velocity, replicating across the culture faster than anyone could track.

Which is to say: it had become a meme. Not in the internet sense of a funny picture with Impact font, but in the original sense—the one Richard Dawkins intended when he coined the word in The Selfish Gene in 1976. Dawkins proposed that ideas could function like genes: self-replicating units of cultural transmission that spread through imitation, competing for space in human minds the way genes compete for space in the genome. A meme’s “fitness” was determined not by whether it was true or useful to its human hosts, but by whether it was good at replicating.

Susan Blackmore took this further in The Meme Machine in 1999. Her crucial insight was that a meme’s replicative fitness and its host’s welfare can diverge. A meme can be phenomenally successful—spreading everywhere, colonizing every comments section—while actively degrading the quality of thought in every mind it inhabits. Blackmore called clusters of mutually reinforcing memes “memeplexes”: bundles of assumptions that travel together, each one making the others easier to accept.

“AI slop” is not just a meme. It’s a memeplex. When someone types those two words under my video, they aren’t just making an aesthetic judgment. They’re importing, in compressed form, an entire package of unstated beliefs: that the images were generated without human discrimination. That they appeared spontaneously, as if the AI had its own creative agency. That the person who made them exercised no more judgment than someone hitting “copy and paste.” That the presence of AI in the workflow is, by itself, disqualifying—regardless of the skill, intention, or visual literacy applied to the output.

None of these beliefs are stated. None of them are argued. They arrive pre-loaded in the meme, like malware in a Trojan horse. And they replicate not because they’re accurate, but because they’re easy—because they replace the difficult work of actually evaluating an image with the simple, satisfying act of applying a label.


There’s a name for this mechanism. In 1961, the psychiatrist Robert Jay Lifton published Thought Reform and the Psychology of Totalism, a study of brainwashing techniques used in Mao’s China. Among the techniques he catalogued was one he called the “thought-terminating cliché”: a brief, reductive, definitive-sounding phrase that compresses complex problems into something easily memorized and easily expressed. Lifton wrote that such phrases become “the start and finish of any ideological analysis.” He called them “the language of non-thought.”

Lifton was describing political indoctrination. But the mechanism he identified operates wherever language is used to foreclose inquiry rather than open it. And that is precisely what “AI slop” does when deployed in a comments section. It terminates thought. It takes a video that might contain a layered argument about, say, the financial structure of the AI industry, and reduces the entire thing to a two-word dismissal that requires no engagement with the content, no evaluation of the images, no consideration of the argument. It is the start and finish of the analysis. It is the language of non-thought.

Now, here’s where it gets interesting.


Consider what you’re actually saying when you type “AI slop.”

You’re saying: this artificial intelligence produced slop.

Pause on that for a moment. If it’s intelligent—even artificially—what kind of intelligence produces slop? That’s like saying “the brilliant chef made garbage.” Either the chef isn’t brilliant, or the food isn’t garbage. The two claims, compressed into one phrase, are in tension with each other. They want to pull apart.

But they don’t pull apart, because the phrase has become so familiar that nobody stops to examine it. This is precisely the mechanism that George Lakoff, the cognitive linguist at Berkeley, spent his career studying. Lakoff demonstrated that language doesn’t merely describe reality—it frames reality, activating conceptual structures that shape what we’re able to think. His famous example: the phrase “tax relief” frames taxation as an affliction. Once that frame is activated, arguing for taxes within it is nearly impossible—you’re arguing for affliction. The frame has done its work before the debate even begins.

“Artificial intelligence” works the same way. John McCarthy coined the term in 1955, in the proposal for the Dartmouth workshop held the following summer—it was a research aspiration, not a product description. But the current generation of AI companies has repurposed it as a sales pitch, and every time anyone uses the abbreviation “AI,” they’re activating a frame that includes the assumption of intelligence. You don’t have to believe the assumption consciously. You just have to use the word. The frame does the rest.

Edward Sapir and Benjamin Lee Whorf argued nearly a century ago that the language habits of a community “predispose certain choices of interpretation”—that how we speak shapes, subtly but persistently, how we perceive. Whether or not you accept the strong version of their hypothesis, the weak version is hard to deny: terminology influences cognition. And the terminology surrounding these large language models has been engineered—not by linguists, but by marketing departments—to produce a specific cognitive effect.

This is what I mean when I say the term “artificial intelligence” is, at bottom, a marketing construct. Not in its origin, but in its current deployment. And it works. It works so well that even the people who think they’re criticizing the technology can’t escape the frame. When they say “AI slop,” they’re granting the premise. They’re accepting, without examination, that there is an intelligence at work—and merely objecting that the intelligence is producing low-quality output. The marketing has already won. The commenter just doesn’t know it.


And this is the heart of what I’ve come to understand.

There is a recursive loop operating here. It works like this:

The AI industry manufactures a misleading frame—“artificial intelligence”—and pours billions into getting it adopted as common usage. The frame is absorbed into everyday language, carrying with it the unexamined assumption that these systems possess something meaningfully resembling intelligence. Critics, reacting against the hype, deploy a meme derived from that frame: “AI slop.” But the meme reinforces the original frame, because it implicitly grants the intelligence attribution. It says: this intelligence made something bad. It does not say: this is a statistical pattern-matching system being marketed as intelligence. The meme, in other words, does the industry’s work for it. The critics become unwitting agents of the very marketing apparatus they believe they’re opposing.

This is not a metaphor. This is a structural description of how the phrase functions in practice. Every time someone types “AI slop” under a video—mine or anyone else’s—they are simultaneously (a) criticizing AI-generated content and (b) reinforcing the premise that artificial intelligence exists. They are paying the toll to cross a bridge they think they’re burning.

Neil Postman, who spent his career studying how technology reshapes the concepts we use to understand ourselves, saw this kind of thing coming. In Technopoly, he wrote that new technologies don’t just introduce new terms into the language—they “modify old words, words that have deep-rooted meanings.” Television changed what we meant by “news.” The computer changed what we meant by “information.” And now the AI industry is changing what we mean by “intelligence”—not through argument, but through repetition, through the sheer ubiquity of usage. Postman called this process “insidious and dangerous” because it happens without our being fully conscious of it. The word changes its meaning while we’re busy using it.


Step back from the recursive loop and look at who’s caught in it.

The commenter who types “AI slop” under my video is, in almost every case, someone who has not watched the video. Or who has watched it and ignored the argument. Or who has watched it and understood it and still can’t resist the gravitational pull of the meme. In all three cases, what they are doing is taking a piece of content that contains a nuanced, multi-layered analysis—sourced, argued, visually constructed with decades of professional skill—and compressing it into a false binary. Uses AI. Doesn’t use AI. That’s the entire taxonomy. That’s the start and finish of the ideological analysis, exactly as Lifton described.

And here is the irony—precise, structural, and worth stating plainly.

The commenters are the slop.

Not the images. The comment. The thoughtless, reductive, two-word meme deployed indiscriminately, without engagement, without evaluation, without original thought—that is the low-quality content being mass-produced and flooding the information environment.

The commenter has become an organic bot, performing exactly the function that an algorithmic bot would perform: flattening nuance into a binary, substituting a label for an argument, degrading the signal-to-noise ratio of the informational commons.

If you set out to design a system for corrupting public discourse, you could hardly do better than to engineer a meme that compels human beings to voluntarily behave like the worst version of the technology they think they’re criticizing.

And if you are such a commenter—if you are not a troll and not an algorithmic bot, if you are a real person typing those two words with your real hands—then I’d suggest the implications are worth sitting with.

Because it means the meme is using you.

You are not spreading it; it is spreading itself through you, in precisely the way Blackmore described: a replicator whose fitness has diverged from the welfare of its host.

You have been conscripted into the service of a phrase that benefits no one—not you, not the creator, not the discourse—while the marketing departments of the companies you think you’re opposing quietly collect the dividend of one more repetition of the word “intelligence.”


There’s one more thing I want to address, because any honest essay on this subject has to confront it.

The strongest version of the “AI slop” critique is not about image quality. It’s about extraction—the concern that AI-generated images, however skillfully produced, are created by systems trained on artists’ work without consent or compensation. This is a legitimate and serious argument. It deserves engagement, not dismissal.

But it’s not the argument being made in my comments section. My commenters aren’t saying “these images raise ethical questions about the training data used by generative AI systems.” They’re saying “AI slop.” They’ve replaced a genuine moral concern—one that requires thought, specificity, and a willingness to grapple with complexity—with a two-word stamp that communicates nothing except tribal affiliation.

The extraction argument deserves better advocates than a meme.

And there is something else worth noting about the “purity” position—the stance that a person making critical arguments about AI technology should refuse to use it entirely.

Think about what that would actually look like. It would mean that the people best positioned to evaluate these tools—the people with the deepest visual literacy, the most refined discrimination, the most experience in the craft of making images—would voluntarily exile themselves from the conversation.

The only people who’d be speaking from direct experience would be the uncritical enthusiasts. That’s not moral high ground. That’s unilateral disarmament.

I use these tools precisely because I’m a critic of the mythology surrounding them. I use them the way an epidemiologist handles a virus: with gloves, with care, with an informed understanding of what they can and cannot do.

The idea that criticism requires abstinence—that you can’t understand something you’re also using—would have struck every empiricist from Francis Bacon to Richard Feynman as absurd.


Julian Whatley spent thirty-five years as a Hollywood cinematographer and advertising professional before launching The Julian Whatley Channel, where he produces media literacy content about perception engineering, AI criticism, and the manufactured mythologies of Silicon Valley. He lost his home in the 2025 Palisades Fire and is rebuilding in more ways than one.
