YouTube’s stance on generative AI has always been a bit of a balancing act. The platform allows creators to use AI tools and, in some cases, even encourages experimentation to speed up editing, generate graphics, or polish production. At the same time, YouTube has publicly promised to crack down on what many viewers now call “AI slop,” a wave of low-effort, mass-produced videos that feel repetitive, misleading, or poorly assembled.
Now, YouTube appears to be testing a new way to spot that kind of content: asking viewers directly.
Some users are seeing pop-up prompts in the YouTube app that essentially ask them to judge whether what they just watched “feels like AI slop,” or whether “low-quality AI” played a role. The answers run on a sliding scale from “Not at all” to “Extremely.” So far, the feature appears to be showing up for only a limited number of viewers, suggesting YouTube is still experimenting before rolling it out more broadly.
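To make the mechanics concrete, here is a minimal sketch of what one response to such a prompt could look like as a data structure. This is purely illustrative: YouTube has not published a schema, and every type, field name, and intermediate scale label below is an assumption (only the “Not at all” and “Extremely” endpoints come from the reports).

```typescript
// Hypothetical sketch of a single viewer response to the reported prompt.
// YouTube has not published any schema; every name here is invented.

// Endpoints match the reported scale; the middle labels are assumed.
type SlopRating = "not_at_all" | "slightly" | "moderately" | "very" | "extremely";

interface SlopFeedback {
  videoId: string;     // the video the prompt followed (placeholder ID below)
  rating: SlopRating;  // the viewer's answer on the sliding scale
  submittedAt: Date;   // when the response was recorded
}

// Example response, purely illustrative:
const example: SlopFeedback = {
  videoId: "abc123",
  rating: "extremely",
  submittedAt: new Date(),
};

console.log(`Video ${example.videoId} rated "${example.rating}" by one viewer`);
```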
The move highlights a growing problem for the platform. Using generative AI isn’t against the rules, and creators don’t have to produce every voice-over, edit, or design element by hand to remain in good standing. But a rising number of uploads appear to be made with minimal human oversight, resulting in videos that are repetitive, vague, stitched together from borrowed ideas, or designed mainly to farm views. Such videos generally stay online unless they cross an obvious line into policy violation, though they may still be deemed “low quality” in ways that affect monetization.
So how does YouTube normally filter out low-quality AI content? The platform typically depends on a mix of automated systems and human reviewers to determine whether videos meet basic standards. But neither approach has been completely effective, especially at scale. One recent study claimed that more than 20% of YouTube Shorts showed signs of being poorly produced, repetitive, or misleading—exactly the type of content viewers often associate with AI slop. That kind of volume makes it easy to see why YouTube might try adding a new feedback signal beyond the traditional like and dislike buttons.
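As a toy illustration of why a new signal could help, the sketch below blends a hypothetical automated quality score with aggregated prompt answers to decide whether a video deserves human review. This is not YouTube’s actual system; the weights, thresholds, and field names are all made up.

```typescript
// Toy illustration (not YouTube's real pipeline) of combining an automated
// classifier score with viewer prompt responses. All numbers are arbitrary.

interface VideoSignals {
  modelSlopScore: number;   // 0..1 from a hypothetical automated classifier
  viewerSlopVotes: number;  // count of "very"/"extremely" prompt answers
  totalVotes: number;       // total prompt answers collected
}

function needsHumanReview(s: VideoSignals): boolean {
  // Only trust the crowd signal once enough responses exist (assumed minimum).
  const viewerRate = s.totalVotes >= 50 ? s.viewerSlopVotes / s.totalVotes : 0;
  // Weighted blend of machine and crowd signals; weights chosen arbitrarily.
  const combined = 0.6 * s.modelSlopScore + 0.4 * viewerRate;
  return combined > 0.7;
}

// 0.6*0.8 + 0.4*(40/60) ≈ 0.75, above the threshold, so this flags for review.
console.log(needsHumanReview({ modelSlopScore: 0.8, viewerSlopVotes: 40, totalVotes: 60 })); // true
```

The design point the sketch captures is that viewer ratings would most plausibly act as one weighted input among several, not as a standalone verdict.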
Still, relying on viewer input comes with real drawbacks. Not everyone can confidently identify sophisticated deepfakes or AI-generated narration, particularly as synthetic voices and visuals become harder to spot. There’s also the human element: fans of a channel may be reluctant to label a creator’s content as low-quality, even if it feels automated. In other cases, viewers could misuse the tool to pile on competitors or creators they simply don’t like.
And some critics argue the biggest concern is what happens if this rating system expands. If millions of people start answering questions about what looks “AI-made” or “low-quality,” YouTube could end up collecting a massive dataset on what viewers do and don’t detect. Skeptics suggest that kind of information might not just help remove low-effort content—it could also be used to improve automated generation techniques, potentially leading to AI-driven videos that are more convincing, more engaging, and much harder for audiences to identify.
For now, the key detail is that YouTube appears to be testing a viewer-driven “AI slop” detection prompt inside its app. Whether this becomes an effective way to reduce low-quality, machine-made uploads—or whether it creates new problems—will depend on how YouTube uses that feedback and how transparently it explains what the ratings actually influence.