
    January 2026
    15 min read
    By Kristof Leroux, Founder & CEO
    Technology

    The Rise and Impact of "AI Slop": How Low-Quality AI Content Is Flooding the Internet

    AI
    AI Slop
    Misinformation
    Content Quality
    Digital Publishing

    Introduction

    In the past few years, "AI slop" has emerged as a term for the glut of low-effort, spammy content produced by generative artificial intelligence. From bogus news articles and formulaic blog posts to deepfake images and fake reviews, this avalanche of AI-generated material is raising alarms about quality, trust, and authenticity online. This report explores what AI slop means and where the term came from, how generative AI has fueled a surge in low-quality content, and the real impacts across industries – including digital publishing, education, social media, and e-commerce. We will examine case studies from 2023–2024, reactions from industry leaders and academics, the ethical and economic concerns at stake, and potential solutions (from detection tools to content verification standards) aimed at preserving content quality and authenticity.

    What Is "AI Slop"? Definition and Origins

    AI slop refers to digital content created with generative AI that is low in quality, meaning, or originality, yet produced in overwhelming volume. The term is pejorative – akin to calling something "spam" – and implies that the content was churned out mindlessly in pursuit of clicks or monetization rather than to inform or enlighten. Merriam-Webster's dictionary, which selected "slop" as its 2025 Word of the Year, defines it as "digital content of low quality that is produced usually in quantity by means of artificial intelligence". In other words, slop prioritizes quantity over substance, flooding the same channels we use for news, learning, or shopping with clutter and filler. As one commentary puts it, "slop is content created with the intent of being mass-distributed rather than being of use to anyone".

    The origins of the term trace back to the early 2020s, as AI image generators and large language models (LLMs) began enabling high-volume output of text and images with minimal effort. On tech forums and social platforms, people groped for a label to describe the onslaught of auto-generated junk – phrases like "AI garbage," "AI pollution," and "AI-generated dross" were floated. "Slop" caught on as in-group slang by 2022, seen in communities like 4chan and Hacker News, to mock the low-grade AI material flooding feeds. The term gained mainstream traction in 2024, thanks in part to developer Simon Willison, who championed "AI slop" on his blog (noting the term was already in underground use). By late 2024, media headlines were widely criticizing the "large quantities of slop on the internet", and the label "slop" had entered popular vocabulary. It encapsulates the growing sense that AI output – when cranked out en masse without care – is a kind of digital "clutter" or "filler content" clogging our information channels.

    The Generative AI Boom and the Content Deluge

    The rapid rise of generative AI tools (like OpenAI's GPT series for text and image models like Stable Diffusion) has drastically lowered the cost and effort of producing content, leading to an unprecedented deluge. Creating a written article, illustration, or even video no longer requires specialized skill or time – an AI can spit out passable (if bland) text or images in seconds. This means anyone can generate content at scale, and many have seized that opportunity. The result is a "flood of low-quality media" pouring onto the internet. Critics note that the cost for spammers and content farms to produce slop has collapsed to near zero, while the cost to the rest of us (in terms of filtering out junk, doubting what we see, and straining our attention) is high. The incentive structures of the web – from ad revenue to social media virality – further fuel this surge.

    Generative AI has effectively supercharged traditional spam and clickbait. By mid-2023, observers were already witnessing an explosion of AI-generated web content. For example, NewsGuard (a firm tracking online misinformation) identified 49 news sites across the world that appeared to be "almost entirely written by artificial intelligence software." These sites were posting hundreds of articles per day with "bland language and repetitive phrases," often rife with errors. In one case, a fake news site even published an article claiming the U.S. president had died, which included an AI model's apology text right in the story ("I cannot complete this prompt…") – a sure sign it was machine-written.

    It's important to note that not all AI-generated material is slop. AI can assist in producing high-quality, creative, or useful content when guided by skilled humans. But when content is mindlessly generated and thrust upon audiences who didn't ask for it, "slop" is the perfect term. The defining issue of this trend is volume over value: a trickle of mediocre AI posts might be harmless, but a firehose of them changes the character of our information environment.

    Impact Across Industries and Platforms

    AI slop has infiltrated a wide range of online industries and platforms. Below, we examine its impact on digital publishing, education, social media, and e-commerce, along with illustrative cases in each.

    Digital Publishing and News Media

    Online publishing was among the first domains hit by the AI content tsunami. News and information websites – especially those driven by advertising revenue – saw an opportunity to crank out countless AI-written articles on the cheap. This gave rise to AI-generated content farms pretending to be news outlets. By May 2023, NewsGuard reported dozens of such sites with names like "CelebritiesDeaths.com" and "GetIntoKnowledge.com" popping up. These sites would scrape headlines or trending topics and have an AI model generate cookie-cutter articles, often with odd phrasing and factual errors. One notorious example was a piece falsely announcing President Biden's death, complete with the awkward disclaimer "I'm sorry, I cannot complete this prompt…" embedded in the text – evidently the output of a chatbot that refused to continue the hoax.

    Such AI-written news sites often produce hundreds of articles per day, outpacing what any human newsroom could. The content is usually aggregated or plagiarized facts woven into mediocre prose. Not only can this mislead readers, but it also threatens to drown out reliable journalism. Observers worry about a "vicious cycle" where AI slop sites add to the pool of misinformation, which then might get scraped as training data by future AI models, causing them to generate even more fake news.

    Legitimate media organizations have also experimented (cautiously) with generative AI, and found it can backfire if not carefully edited. In early 2023, tech news site CNET tried publishing AI-written financial explainers; within weeks, more than half of those articles had to be corrected for errors or plagiarism.

    Books and Online Publishing Platforms

    Book publishing – especially self-publishing platforms like Amazon's Kindle Direct Publishing (KDP) – has been inundated by AI-generated material as well. With ChatGPT's public release in late 2022, aspiring writers discovered they could prompt the AI to write books for them in hours. By February 2023, over 200 e-books on Amazon listed ChatGPT as an author or co-author, ranging from how-to guides to children's stories. The number was "rising daily" as more people jumped on the trend.

    Established authors and industry groups reacted swiftly. "These books will flood the market and a lot of authors are going to be out of work," warned Mary Rasenberger, executive director of the Authors Guild. She noted that while ghostwriting has long existed, AI automation threatens to turn book writing from a craft into a commodity.

    Faced with an influx of suspect content, Amazon took action. In September 2023, the company limited KDP self-publishers to 3 new books per day, citing the need to "protect against abuse" after a spike in AI-generated titles.

    Education and Academia

    The education sector has been hit with a wave of AI-generated assignments and content, raising academic integrity issues and quality concerns. With tools like ChatGPT, students quickly realized they could generate passable essays, homework answers, and even lab reports with minimal effort. By 2023, surveys suggested a large percentage of students had at least tried AI for schoolwork, and educators were struggling to detect AI-written submissions.

    In a high-profile case at the University of Illinois, two instructors discovered over 100 students in a large class had used AI to cheat on attendance and even to write their apologies when caught. The professors were initially moved by the seemingly heartfelt email apologies – until they noticed 80% of them were nearly identical, all containing the phrase "sincerely apologize," clearly produced by ChatGPT or a similar tool.

    Teaching assistants report that the problem goes beyond attendance. "It's insane how pervasive AI slop is in 75% of the turned-in work," one TA noted, describing how many students now submit assignments that are obviously AI-generated.

    Social Media and Content Platforms

    Social media platforms have become hotspots for AI-generated "slop" content, thanks to their enormous audiences and monetization models that reward engagement. On networks like Facebook, Instagram, TikTok, and YouTube, one can find countless examples of cheaply generated posts designed to grab attention – often with absurd, uncanny, or misleading material.

    For instance, early in 2024, Facebook saw a bizarre meme trend of AI-generated images dubbed "Shrimp Jesus" (a distorted statue image covered in shrimp) flooding groups and feeds. These surreal images were cranked out and duplicated en masse, not because they had artistic merit, but because they were weird enough to get users to share or react.

    Video platforms are equally affected. On YouTube, researchers found a substantial share of popular new videos are basically AI-assembled compilations – such as text from Wikipedia read aloud by a synthetic voice over stock or AI-generated footage. One study found more than 20% of videos shown to new YouTube users were AI slop – low-quality, algorithm-churned clips designed to farm views.

    E-Commerce and Online Reviews

    In e-commerce, AI slop has manifested notably in the realm of product reviews and listings. Online shoppers rely heavily on customer reviews to decide what to buy, and unfortunately AI has given unscrupulous sellers a new tool to generate fake positive reviews at scale.

    Studies in 2024 uncovered signs of AI-generated reviews proliferating on Amazon and other marketplaces. For instance, a data analysis by Pangram Labs examined 30,000 Amazon reviews and found about 5% were likely written by AI, especially in popular categories like electronics and beauty products. Tellingly, 74% of those AI-written reviews were 5-star ratings – a red flag that they were aimed at unduly boosting product reputations.
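    Taken at face value, the reported proportions imply counts on the following order (a back-of-the-envelope check; the percentages are as reported, the absolute numbers are derived from them):

```python
# Derived from the reported Pangram Labs figures: 30,000 reviews
# sampled, ~5% flagged as likely AI-written, 74% of those rated 5 stars.
total_reviews = 30_000
ai_share = 0.05         # share flagged as likely AI-written
five_star_share = 0.74  # share of the AI-written subset that is 5-star

ai_reviews = round(total_reviews * ai_share)        # ~1,500 likely-AI reviews
ai_five_star = round(ai_reviews * five_star_share)  # ~1,110 of them 5-star

print(ai_reviews, ai_five_star)
```

In other words, roughly 1,500 of the sampled reviews were flagged as likely AI-written, and about 1,110 of those carried a 5-star rating.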

    "The proliferation of AI-generated reviews has the potential to break trust in the customer review system once and for all," warned Max Spero, CEO of Pangram Labs. Consumers may become cynical, assuming glowing reviews are fake, which hurts honest businesses as well.

    Case Studies: AI Slop in 2023–2024

    To illustrate the rise of AI slop, here are a few notable examples and incidents from 2023 and 2024:

    AI-Generated News Farms (2023): By spring 2023, at least 49 online news sites were identified as being "almost entirely written by AI", publishing hundreds of articles daily. One such site falsely announced President Biden's death with an AI-written article, even including the chatbot's refusal text in the story.

    Clarkesworld Magazine Submissions (Feb 2023): The sci-fi magazine Clarkesworld had to shut down story submissions after a flood of AI-generated short stories overwhelmed the editors. The editor noted the spam came largely from outsiders hoping to make quick money, and warned this could raise barriers for genuine new authors.

    Amazon Kindle Book Flood (2023): Following ChatGPT's release, self-published e-books authored by AI proliferated on Amazon. By February, over 200 titles listed ChatGPT as a co-author. In one week, author Jane Friedman found 29 AI-written books falsely using her name on Amazon, leading Amazon to remove them and later impose a limit of 3 new books per day for self-publishers.

    Fake Pentagon Explosion Image (May 2023): An AI-generated photo of an explosion near the U.S. Pentagon was circulated by a verified Twitter account, sparking brief public panic. The image, which was fake, spread rapidly via social media and even caused a short-lived dip in stock markets before being debunked.

    Hurricane Misinformation (Oct 2024): During hurricane season in late 2024, people seeking emergency information encountered "a sea of AI-generated junk articles" and fake updates on social media. One viral image of a child and dog stranded in a flood turned out to be AI-generated, leaving users unsure what to trust.

    These cases underscore the rapid spread of AI slop across domains and the challenges it poses.

    Reactions from Leaders, Experts, and the Public

    Industry Leaders

    Some tech executives have bristled at the term "slop" and the negative press around AI content. Notably, Microsoft CEO Satya Nadella recently urged people to "get beyond the arguments of slop vs sophistication." In a late-2025 essay, Nadella acknowledged the explosion of low-quality AI content but suggested we must accept AI as part of a "new equilibrium" and focus on its positive potential.

    Other industry figures have been more contrite. Executives at platforms deluged by slop have started to acknowledge the problem. YouTube, for instance, in late 2024/early 2025 began publicly affirming its efforts to tackle "inauthentic and low-quality" content. Amazon responded to authors' and customers' complaints by implementing new rules.

    Journalists and Academics

    Writers and scholars have been at the forefront of dissecting AI slop's implications. A common sentiment is alarm at the degradation of the information environment. Researchers have begun studying phenomena like "model collapse" – the degradation that occurs when AI systems are trained on data that is itself AI-generated (and low-quality). This is a profound long-term risk: if a large portion of online content turns to slop, future AI models (and even human learners) trained on that content will yield progressively worse output, creating a downward spiral in quality.

    Public Sentiment

    Among the general public and content creators, there has been a palpable frustration as AI slop floods popular sites. This is evidenced by the viral discussions on forums and social media where users lament "the death of the internet as we knew it." Many readers and viewers feel that the authenticity of online interaction is eroding – they are no longer sure if that TikTok video or Amazon review was made by a real person or just cranked out by a bot.

    Ethical and Economic Concerns

    Misinformation and Deception

    Misinformation and deception top the list of public concerns. People fear that AI slop isn't just annoying, but can be used maliciously – to generate fake news, fake personas, scam content, and deepfake videos – blurring the line between real and fabricated. It would be unsurprising if consumer surveys found trust in online content at an all-time low.

    Economic and Creative Concerns

    Ethically, there's concern about AI slop devaluing human creativity and labor. Journalists and writers worry that if clickbait farms can produce 100 articles a day with AI, it could undercut professional writing and journalism jobs – while flooding readers with trivial content. Musicians faced a similar issue in 2023–24 with AI-generated songs being uploaded by the thousands to streaming platforms.

    Environmental Concerns

    Finally, a less-discussed but significant concern is the environmental cost of all this slop. Generating endless streams of AI content is not free from a resource standpoint – large AI models require substantial computational power (and electricity) to operate. One could argue that wasting compute on generating spammy content (and then more compute on filtering it, etc.) is an ecological waste.

    Potential Solutions: Detection, Verification, and Quality Control

    Addressing the AI slop problem is challenging, but a variety of solutions and countermeasures are being explored. These range from technical tools to social and regulatory approaches aimed at detecting low-quality AI content, verifying authenticity, and reducing the incentives to produce slop.

    1. AI-Detection Tools

    A straightforward approach is developing better algorithms to spot AI-generated content. Researchers have created tools that analyze text for the statistical fingerprints of AI (such as overly consistent syntax or certain entropy patterns). Early entrants like GPTZero in 2023 made headlines by claiming to detect AI-written essays, and established plagiarism checkers (e.g. Turnitin) integrated AI-detection features.
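    To make the idea concrete, here is a deliberately naive sketch of one such statistical signal. It is not how GPTZero or Turnitin actually work (their methods are proprietary); it only illustrates the kind of "fingerprint" detectors look for – for instance, the observation that human prose tends to vary sentence length more than formulaic model output:

```python
import math
import re

def burstiness_score(text: str) -> float:
    """Crude illustrative AI-text signal: the coefficient of variation
    of sentence lengths. Uniform, formulaic text scores near 0;
    'burstier' prose with varied sentence lengths scores higher.
    Real detectors combine many such signals and are still fallible."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    mean = sum(lengths) / len(lengths)
    variance = sum((n - mean) ** 2 for n in lengths) / len(lengths)
    return math.sqrt(variance) / mean

uniform = "This is a sentence. This is a sentence. This is a sentence."
varied = ("Stop. The committee deliberated for hours over the wording. "
          "Why? Nobody on the outside ever quite understood the stakes involved.")
assert burstiness_score(uniform) < burstiness_score(varied)
```

A single heuristic like this is easy to fool, which is precisely why standalone detection tools have proven unreliable in practice and are usually paired with other approaches, such as watermarking.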

    2. Watermarking AI Outputs

    One promising technical solution is to "watermark" content at the time of AI creation. This involves subtly embedding signals in AI-generated text, images, or audio that are invisible to humans but can be algorithmically detected. For example, Google's DeepMind introduced a tool called SynthID that can watermark AI-generated images at the pixel level, making it possible to later tell if an image came from their model.
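    For text, a widely discussed family of schemes works by biasing the model's sampler toward a context-dependent "green list" of tokens, which a detector can later count without access to the model. The toy sketch below illustrates that principle only; it is not SynthID's actual algorithm, and the hashing rule here is an invented stand-in:

```python
import hashlib

def is_green(prev: str, token: str) -> bool:
    """Toy watermark rule: hash the (previous token, candidate) pair
    and call the candidate 'green' when the hash byte is even. Real
    schemes seed a pseudorandom green list inside the model's sampler."""
    digest = hashlib.sha256(f"{prev}\x00{token}".encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(tokens: list[str]) -> float:
    """Detector: fraction of adjacent pairs that are green. Ordinary
    text hovers near 0.5; a generator that prefers green tokens
    scores far higher, flagging the text as likely watermarked."""
    if len(tokens) < 2:
        return 0.0
    hits = sum(is_green(a, b) for a, b in zip(tokens, tokens[1:]))
    return hits / (len(tokens) - 1)

# Simulate a watermarking generator: only emit candidates that are
# green for the current context.
pool = ["alpha", "bravo", "charlie", "delta", "echo", "foxtrot", "golf",
        "hotel", "india", "juliet", "kilo", "lima", "mike", "november",
        "oscar", "papa"]
tokens = ["the"]
for word in pool:
    if is_green(tokens[-1], word):
        tokens.append(word)

assert green_fraction(tokens) == 1.0  # watermarked stream is all-green
```

The appeal of this design is that detection needs only the hashing rule, not the model itself; the main weaknesses are that paraphrasing can wash the signal out and that it only helps when the AI provider opts in.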

    3. Content Verification and Provenance

    Building on watermarking, there's a push for authenticity infrastructure on the internet. This means ensuring that original, human-created content can be verified (e.g., cryptographically signed or logged on a blockchain-like ledger), and that any alterations or AI involvement are disclosed. Projects like Content Authenticity Initiative (CAI) have gathered thousands of members by 2025 to promote this vision.
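    The core mechanism behind such provenance systems is a signed manifest: a hash of the content plus claims about its origin, authenticated so that tampering with either is detectable. The sketch below illustrates the shape of that idea using an HMAC with a shared key for brevity; real systems like C2PA's Content Credentials use public-key certificates rather than shared secrets, and the field names here are invented:

```python
import hashlib
import hmac
import json

# Stand-in shared key for illustration; real provenance systems sign
# manifests with a publisher's private key and X.509 certificate.
SIGNING_KEY = b"publisher-secret"

def make_manifest(content: bytes, creator: str, generator: str) -> dict:
    """Attach a provenance claim (content hash + claimed origin) and
    authenticate it, so any later edit to content or claims is detectable."""
    claim = {
        "sha256": hashlib.sha256(content).hexdigest(),
        "creator": creator,
        "generator": generator,  # e.g. "human" or the AI tool used
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return claim

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Check both the signature over the claims and that the content
    still matches the hash recorded in the manifest."""
    claim = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest.get("signature", ""))
            and claim["sha256"] == hashlib.sha256(content).hexdigest())

article = b"Original human-written reporting."
m = make_manifest(article, creator="Newsroom", generator="human")
assert verify_manifest(article, m)
assert not verify_manifest(b"Altered text.", m)
```

The practical challenge is adoption: a manifest only helps if cameras, editing tools, and platforms all preserve and display it, which is exactly the coalition-building work initiatives like the CAI are attempting.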

    4. Platform Policies and Moderation

    Another crucial component is platform-level action to discourage slop. YouTube has started banning channels that egregiously spam AI-generated content. Amazon imposed limits and disclosure rules on AI-published books. Facebook and TikTok have rules requiring certain AI-generated influencer content or deepfakes to be labeled in some jurisdictions. Platforms are enhancing their spam detection algorithms specifically to catch AI patterns.

    5. Regulatory and Policy Measures

    Governments and regulators are also eyeing solutions. In the EU, the upcoming AI Act includes provisions that generative AI content (especially deepfakes) must be clearly labeled when published. Regulators like the FTC in the US have signaled they will treat undisclosed AI endorsements or reviews as deceptive advertising.

    6. Restoring Incentives for Quality

    A broader, less tangible solution is changing the incentive structure that rewards slop. This might mean, for example, adjusting ad revenue models so that clickbait farms don't profit as easily (perhaps requiring higher content standards for monetization). It could also involve supporting quality journalism and content through subscriptions or public funding to reduce reliance on click-driven revenue.

    7. Embracing AI in Positive Ways

    Interestingly, part of the solution may lie in using AI itself more thoughtfully. AI can be a tool for good content – e.g., helping writers draft better prose which a human then refines (thus increasing quality, not just quantity). If best practices evolve (say, always have a human in the loop for fact-checking AI output), the average quality of AI-assisted content could improve.

    Conclusion

    The emergence of "AI slop" marks a new chapter in the ongoing saga of technology's impact on information. On one hand, generative AI represents a leap forward in productivity and creativity; on the other hand, it has unleashed a torrent of dross that threatens to swamp the value of that very creativity. We have seen how quickly low-quality AI content has permeated everything from news and books to social media, education, and e-commerce. The impact is multifaceted – it challenges how we discern truth online, how we value human work, and how we maintain trust in digital systems.

    Crucially, history suggests that every information revolution eventually finds its balance. Just as the printing press flooded the world with both trashy pamphlets and great literature, AI will produce a lot of rubbish and some gems. The task ahead is to filter and elevate – to not let the junk drown out the genuine. As a Scientific American writer noted, "mass-made culture has been forgettable" in large part, but it also makes the originals shine brighter. Likewise, the concept of "slop" helps us label and dismiss the junk, so we can better champion the content that truly adds value.

    In the end, maintaining a healthy digital ecosystem will require vigilance and innovation: embracing the useful applications of AI while staunchly guarding against the excess of spam and fakery. By doing so, we ensure that the knowledge economy – our digital commons – does not devolve into a wasteland of slop, but remains a place where authentic information and human creativity can thrive.

    References

    1. AI slop – Wikipedia: https://en.wikipedia.org/wiki/AI_slop
    2. AI-Generated Reviews Threaten Trust in Amazon's Beauty Category: https://beautymatter.com/articles/ai-generated-content-raises-concerns-over-authenticity-of-amazon-beauty-reviews
    3. Microsoft CEO Begs Users to Stop Calling It "Slop": https://futurism.com/artificial-intelligence/microsoft-satya-nadella-ai-slop
    4. AI Slop Explained: Why Low-Quality AI Content Is Everywhere: https://medium.com/@genai.works/ai-slop-explained-why-low-quality-ai-content-is-everywhere
    5. AI Slop: How AI-Generated Content is Impacting Information Discovery: https://www.searchstax.com/blog/ai-slop-and-information-discovery/
    6. AI Slop – How Every Media Revolution Breeds Rubbish and Art: https://www.scientificamerican.com/article/ai-slop-how-every-media-revolution-breeds-rubbish-and-art/
    7. Nearly 50 news websites are 'AI-generated', a study says: https://www.theguardian.com/technology/2023/may/08/ai-generated-news-websites-study
    8. More than 20% of videos shown to new YouTube users are 'AI slop', study finds: https://www.theguardian.com/technology/2025/dec/27/more-than-20-of-videos-shown-to-new-youtube-users-are-ai-slop-study-finds
    9. CNET found errors in more than half of its AI-written stories: https://www.theverge.com/2023/1/25/23571082/cnet-ai-written-stories-errors-corrections-red-ventures
    10. ChatGPT launches boom in AI-written e-books on Amazon: https://www.reuters.com/technology/chatgpt-launches-boom-ai-written-e-books-amazon-2023-02-21/
    11. Amazon restricts authors from self-publishing more than three books a day after AI concerns: https://www.theguardian.com/books/2023/sep/20/amazon-restricts-authors-from-self-publishing-more-than-three-books-a-day-after-ai-concerns
    12. Inside the university AI cheating crisis: https://www.theguardian.com/technology/2024/dec/15/inside-the-university-ai-cheating-crisis
    13. YouTube Now Shutting Down Channels Posting AI Slop: https://futurism.com/artificial-intelligence/youtube-shutting-down-ai-slop-channels
    14. Fake AI-generated image of explosion at the Pentagon goes viral: https://www.youtube.com/watch?v=3AZpgY32tBY
    15. SynthID – Google DeepMind: https://deepmind.google/models/synthid/
    16. Content Authenticity Initiative: https://contentauthenticity.org/blog/5000-members-building-momentum-for-a-more-trustworthy-digital-world
