Growing frustrations with AI on social media are fueling demands for clearer solutions.
Seeing is no longer believing in the age of artificial intelligence.
Anyone who has spent time scrolling through social media recently has likely noticed how deeply AI-generated content has embedded itself into everyday feeds. From polished but emotionless images to strange videos and text that sounds convincing on the surface, AI is everywhere — and many people are struggling to tell what’s real.
According to an exclusive CNET survey, 94% of U.S. adults who use social media believe they encounter content that was created or altered by AI. Yet fewer than half — just 44% — say they feel confident distinguishing real photos and videos from those generated by AI. That growing disconnect is fueling frustration and concern about trust online.
AI-generated “slop” has spread across nearly every platform, creating confusion and skepticism among users. While many people sense that AI content surrounds them, accurately identifying it remains a challenge. As a result, users are increasingly calling for solutions that go beyond relying on instinct or visual inspection alone.
Among the 2,443 survey respondents who use social media, more than half (51%) said better labeling of AI-generated or edited content is necessary. Others took a harder stance: 21% believe AI-generated content should be completely banned from social media platforms. Meanwhile, only 11% of respondents said they find AI content useful, informative, or entertaining.
Despite widespread discomfort, AI is not disappearing. It continues to reshape the internet — and how people relate to it — at a rapid pace. The survey findings suggest that while awareness of AI is high, society is still struggling to come to terms with its implications.
Key findings
- Most U.S. adults who use social media (94%) believe they encounter AI-generated or AI-altered content, but far fewer (44%) feel confident telling real images and videos apart from fake ones.
- Nearly three-quarters (72%) say they take steps to verify whether content is real, though inaction is more common among Boomers (36%) and Gen X users (29%).
- Half of U.S. adults (51%) want stronger labeling of AI-generated or edited content.
- One in five (21%) believe AI content should not be allowed on social media under any circumstances.
US adults lack confidence in spotting AI media
In today’s digital landscape, visual evidence no longer guarantees authenticity. Advanced tools such as OpenAI’s Sora video generator and Google’s Nano Banana image model can produce hyperrealistic visuals, while chatbots generate text that reads as though it were written by a human. These capabilities have blurred the line between genuine and fabricated content.
As a result, 25% of U.S. adults say they are not confident in their ability to tell real images and videos from AI-generated ones. Older generations expressed the greatest uncertainty, with 40% of Boomers and 28% of Gen X respondents reporting low confidence. Limited exposure to AI tools and lower familiarity with the technology may contribute to this discomfort.
How people try to verify content
Given AI’s ability to convincingly imitate reality, verifying online content has become more important than ever. Nearly three in four U.S. adults (72%) say they take some form of action when they suspect an image or video might be fake. Gen Z leads in this effort, with 84% reporting that they actively verify questionable content.
The most common approach is closely inspecting images or videos for visual errors or inconsistencies — a tactic used by 60% of respondents. However, as AI models improve, traditional giveaways such as distorted hands, extra fingers, or continuity issues have become far less common, making visual inspection less reliable.
As these cues fade, other verification methods are gaining importance. About 30% of respondents check for labels or disclosures indicating AI involvement, while 25% search for the content elsewhere online, such as through news outlets or reverse image searches. Only a small fraction — 5% — say they use dedicated deepfake detection tools or websites.
Still, 25% of U.S. adults take no action at all to verify suspicious content. This figure is highest among Boomers (36%) and Gen X users (29%), raising concerns given AI’s increasing use in scams, misinformation, and fraud. Understanding where content comes from is becoming a basic requirement for navigating the modern internet.
Growing demand for better AI labels
Among proposed solutions, labeling has emerged as a major point of consensus. Labeling typically depends on creators disclosing AI use, though platforms can also attempt detection themselves — a process that remains inconsistent and technically challenging. These limitations may explain why 51% of U.S. adults believe AI-generated and edited content needs clearer, more reliable labels.
Support for stronger labeling is highest among younger users, with 56% of Millennials and 55% of Gen Z backing improved disclosures. At the same time, only 11% of respondents said they find AI content entertaining, informative, or useful.
Regulating — or banning — AI content
Social media platforms currently allow AI-generated content as long as it complies with general content rules. Some companies have begun experimenting with tools to give users more control over how much AI content appears in their feeds. Pinterest introduced filters last year, while TikTok continues to test similar features.
Despite these efforts, skepticism remains strong. One in five respondents (21%) believe AI content should be completely prohibited on social media, with Gen Z showing the highest support for a total ban at 25%. Another 36% said AI content should be allowed but subject to strict regulation. These views likely reflect the fact that 28% of respondents believe AI content offers little to no value.
How to limit AI content and spot deepfakes
Staying vigilant remains one of the strongest defenses against AI deception. Content that seems overly polished, strange, or too good to be true often deserves extra scrutiny. Users can also turn to deepfake detection tools, such as those offered by the Content Authenticity Initiative, which support multiple file types.
Checking the account that shared the content can also reveal warning signs. Accounts that post large volumes of unrelated or bizarre content, follow few or no other users, or share spam-like links often signal AI-driven or fraudulent activity.
To reduce AI content exposure, users can adjust platform settings — such as muting Meta AI on Instagram and Facebook or filtering AI posts on Pinterest. Outside social media, AI features can be disabled on devices and services such as Apple Intelligence and Google’s AI tools across Search, Gmail, and Docs.
Even with precautions, being fooled by AI is sometimes unavoidable. As AI systems grow more sophisticated, mistakes are inevitable. Until universal detection systems are in place, users must rely on available tools — and on educating one another — to navigate an increasingly artificial online world.