The rise of artificial intelligence has brought a wave of innovation to digital entertainment, with AI-generated deepfake videos becoming a major trend. In the aftermath of the Kansas City Chiefs’ loss to the Philadelphia Eagles in the Super Bowl on February 9, 2025, a highly realistic deepfake video of quarterback Patrick Mahomes surfaced on social media. Created by the TikTok account ScaryAI, the video was so convincing that many fans initially believed it to be real.
ScaryAI has gained popularity for its AI-generated parody videos, particularly fake press conferences featuring top athletes like Mahomes, Steph Curry, and LeBron James. Its latest creation, a fabricated post-game speech from Mahomes, used advanced artificial intelligence to closely mimic his facial expressions and voice. The resemblance to Mahomes’ actual post-game press conference, where he took responsibility for the Chiefs’ defeat, made the deepfake even more believable.
The video quickly gained traction, accumulating over 6.4 million views and 458,000 likes within just 24 hours. While ScaryAI included a disclaimer noting the video was AI-generated, its small size and placement made it easy to miss. As a result, some viewers mistook the clip for an authentic press conference, further demonstrating how easily deepfake content can blur the line between reality and fabrication.
ScaryAI has also been promoting Parrot AI, a platform that enables users to create similar AI-generated videos. To encourage participation, Parrot AI offers cash incentives, including $100 for videos that surpass one million views and a $5,000 prize for the most viral video of the month. This monetization model is pushing the boundaries of AI-driven content creation, making deepfakes more accessible to the average user.
While AI-generated sports deepfakes are largely seen as entertainment, experts caution that this technology presents significant risks. The ability to create highly convincing fabricated footage opens the door to misinformation, particularly in politically sensitive situations. For example, a deepfake of Russian President Vladimir Putin recently circulated online, falsely depicting him making controversial statements. Similarly, the U.S. government has sanctioned Iran and Russia for allegedly using AI-generated content to spread election-related disinformation.
Social media companies are also grappling with how to handle AI-generated content. For instance, Meta faced criticism for allowing a misleading deepfake of President Joe Biden to remain on its platform. In response to these growing concerns, some governments have begun implementing regulations. California has enacted laws banning election-related deepfakes on social media, though this has sparked debate, with figures like Elon Musk arguing that such restrictions could stifle creativity and satire.
As AI technology continues to develop, the intersection of deepfake entertainment and viral content monetization will continue to raise ethical concerns. As AI-generated videos become a staple of digital entertainment, regulators and tech companies must find a way to balance creative expression with the need to prevent misinformation.
For now, Patrick Mahomes’ AI deepfake serves as both a fascinating glimpse into the future of digital media and a reminder of the challenges that lie ahead. The rapid spread of synthetic content highlights the need for clear policies to ensure that AI remains a tool for innovation rather than deception. The question remains: how can society embrace AI’s creative potential while maintaining trust and authenticity in digital content?
