Reality Defender, one of many startups building tools to identify deepfakes and other AI-generated content, has raised $15 million in a funding round led by DCVC, with participation from Comcast Ventures, Ex/ante, Parameter Ventures, and Nat Friedman’s AI Grant.
According to co-founder and CEO Ben Colman, the funds will be used to double Reality Defender’s 23-person team over the next year and to improve its AI content detection models.
In an email interview with TechCrunch, Colman said that “new methods of deepfaking and content generation will consistently appear, taking the world by surprise both through spectacle and the amount of damage they can cause.” By taking a research-forward, proactive approach to detection rather than reacting to what is already visible, he argues, Reality Defender can stay many steps ahead of these next-generation approaches and models before they become widely known.
Colman, a former vice president at Goldman Sachs, founded Reality Defender in 2021 with Gaurav Bharaj and Ali Shahriyari. Shahriyari previously held positions at Originate, a digital transformation tech consultancy, and at the AI Foundation, a company that builds AI-driven animated chatbots, where he worked alongside Bharaj, who oversaw the foundation’s R&D.
Reality Defender began as a nonprofit. However, Colman says that once the team realized the extent of the deepfake problem and the growing business demand for deepfake-detecting technology, it sought outside funding.
Colman is not overstating the scope. DeepMedia, a Reality Defender competitor working on synthetic media detection, estimates that three times as many video deepfakes and eight times as many voice deepfakes have been posted online this year compared to the same period in 2022.
The commoditization of generative AI technologies is largely responsible for the increase in deepfakes.
Cloning a voice or producing a deepfake, an image or video digitally altered to convincingly replace one person’s likeness with another’s, used to be expensive and complicated, requiring real data science expertise. But over the last several years, platforms like the voice-synthesizing ElevenLabs and open-source image generators like Stable Diffusion have let malicious actors launch deepfake campaigns at little to no cost.
Just last month, 4chan members used generative AI tools, including Stable Diffusion, to flood the internet with racist images. Trolls, meanwhile, have used ElevenLabs to mimic celebrities’ voices, producing audio ranging from vicious hate speech to erotica and memes. And state actors affiliated with the Chinese Communist Party have created lifelike AI news-anchor avatars that comment on issues like gun violence in the United States.
Certain generative AI platforms have added filters and other restrictions to prevent abuse. But, much like cybersecurity, it’s a cat-and-mouse game.
According to Colman, one of the biggest risks of AI-generated media is the use and spread of deepfake content on social media. Unlike the laws that require platforms to remove child sexual abuse material and other illegal content, no law compels them to search for deepfakes, so they have little incentive to do so.
Reality Defender claims it can identify a range of deepfakes and AI-generated material through an API and web interface that examines videos, audio, text, and images for signs of AI-driven manipulation. Colman asserts that Reality Defender outperforms its rivals on deepfake detection accuracy thanks to a suite of “proprietary models” trained on in-house data sets and “created to work in the real world and not in the lab.”
Colman says the company trains an ensemble of deep learning detection models, each focused on a different approach. “We learned long ago that accuracy testing in a lab versus accuracy in the real world does not work, nor does the single-model, monomodal approach,” he said.
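Colman doesn’t describe the architecture in detail, but the multi-model idea he outlines can be sketched generically. The following Python snippet is a minimal, purely illustrative sketch of weighted ensembling across specialized detectors, not Reality Defender’s actual code; the Detector class, the weights, and the stand-in scoring functions are all hypothetical.

```python
# Illustrative sketch of multi-model deepfake detection ensembling.
# The detectors and weights below are hypothetical stand-ins for
# specialized models (e.g., one tuned to face-swap artifacts, another
# to GAN fingerprints); none of this is Reality Defender's code.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Detector:
    name: str                            # manipulation family this model targets
    weight: float                        # trust placed in this model's judgment
    score_fn: Callable[[bytes], float]   # returns P(fake) in [0, 1]

def ensemble_score(media: bytes, detectors: List[Detector]) -> float:
    """Weighted average of the per-model fake probabilities."""
    total = sum(d.weight for d in detectors)
    return sum(d.weight * d.score_fn(media) for d in detectors) / total

# Hypothetical usage: three specialized models vote on one piece of media.
detectors = [
    Detector("face_swap_artifacts",  1.0, lambda m: 0.91),
    Detector("gan_fingerprint",      0.8, lambda m: 0.75),
    Detector("lighting_consistency", 0.5, lambda m: 0.40),
]
print(f"P(fake) = {ensemble_score(b'raw media bytes', detectors):.2f}")
```

The appeal of a structure like this is the one Colman hints at: when a new generation method appears, a new specialized detector can be added to the pool without retraining a single monolithic model.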
But can any technology reliably identify deepfakes? That’s an open question.
OpenAI, the AI company behind the well-known ChatGPT chatbot, recently withdrew its tool for identifying AI-generated text, citing its “low rate of accuracy.” And at least one study shows that deepfake video detectors can be fooled, depending on how the deepfakes fed to them have been manipulated.
Deepfake detection techniques also run the risk of amplifying biases.
In a study published in 2021, researchers at the University of Southern California found that some of the data sets used to train deepfake detection systems may underrepresent people of certain genders or skin tones. Deepfake detectors can amplify this bias, the coauthors noted, with some showing as much as a 10.7% difference in error rate depending on racial group.
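The kind of disparity the USC team measured is straightforward to compute once a detector’s hits and misses are tagged by demographic group. Here is a small illustrative Python sketch using invented evaluation records, not data from the study:

```python
# Minimal sketch of measuring per-group error-rate disparity for a
# deepfake detector, in the spirit of the USC study. The evaluation
# records below are invented for illustration.
from collections import defaultdict

# (demographic_group, model_was_correct) pairs from a labeled test set
results = [
    ("group_a", True), ("group_a", True),  ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

errors, totals = defaultdict(int), defaultdict(int)
for group, correct in results:
    totals[group] += 1
    errors[group] += 0 if correct else 1

rates = {g: errors[g] / totals[g] for g in totals}
gap = max(rates.values()) - min(rates.values())
print(rates)                    # per-group error rates
print(f"disparity: {gap:.1%}")  # the 10.7% figure is a gap of this kind
```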
Colman stands behind Reality Defender’s results. He also says the company actively works to reduce bias in its algorithms by including “a wide variety of accents, skin colors, and other varied data” in its detector training sets.
“We’re always training, retraining, and improving our detector models so they fit new scenarios and use cases, all while accurately representing the real world and not just a small subset of data or individuals,” added Colman.
Call me cynical, but I’m not sure I believe such assertions without an independent audit to back them up. My skepticism hasn’t hurt Reality Defender’s business, though, which Colman says is fairly brisk: its customers include governments “across several continents” as well as “top-tier” financial institutions, media conglomerates, and multinationals.
That’s despite deepfake detection tools from established companies like Microsoft and competition from startups like Truepic, Sentinel, and Effectiv.
To maintain its position in the deepfake detection software market, which HSRC estimated at $3.86 billion in 2020, Reality Defender plans to launch an “explainable AI” tool that lets users scan a document and see color-coded paragraphs of AI-generated text. Real-time voice deepfake detection for call centers is also on the horizon, to be followed by a real-time video detection tool.
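Reality Defender hasn’t said how the planned text-scanning tool will work under the hood, but paragraph-level highlighting could plausibly be structured like the sketch below. This is an assumption-laden illustration: score_paragraph is a dummy placeholder for whatever proprietary model would do the real scoring, and the color thresholds are made up.

```python
# Illustrative sketch of paragraph-level "explainable" text scanning:
# split a document into paragraphs, score each one, and map scores to
# colors. score_paragraph is a hypothetical placeholder, not Reality
# Defender's actual model.

def score_paragraph(text: str) -> float:
    """Placeholder: probability that the paragraph is AI-generated."""
    return min(1.0, len(text) / 1000)  # dummy heuristic for the demo

def color_for(score: float) -> str:
    if score >= 0.8:
        return "red"     # likely AI-generated
    if score >= 0.5:
        return "yellow"  # uncertain
    return "green"       # likely human-written

def scan_document(document: str):
    """Yield (paragraph, score, color) for each non-empty paragraph."""
    for para in (p.strip() for p in document.split("\n\n")):
        if para:
            score = score_paragraph(para)
            yield para, score, color_for(score)

doc = "A short human-sounding opener.\n\n" + "x" * 900
for para, score, color in scan_document(doc):
    print(f"[{color}] p(fake)={score:.2f}  {para[:40]}")
```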
In essence, Colman says, Reality Defender safeguards a company’s reputation and financial health. “Reality Defender uses AI to fight AI, helping the biggest platforms, governments, and entities determine whether a piece of content is authentic or manipulated. This helps fight financial fraud, stops misinformation from spreading through media outlets, and prevents irreversible and damaging material from reaching governments, to name just three of hundreds of use cases.”