The startup serves governments and enterprises, and has just scored $15M in Series A funding
The proliferation of generative AI applications today has its downsides, not the least of which is knowing when AI has been used. Cover letters and resumes, creative writing and dissertations, social media and news media are just a few kinds of content we'd prefer, in most cases, to be made by humans. In at-scale applications, bots can cause harm with misinformation and propaganda aimed at influencing public opinion and swaying political focus. But for every action, there's a reaction. Reality Defender is developing a solution, and it just received new capital to advance its AI recognition product.
Deepfake and AI-generated media detection platform Reality Defender just announced it has scored $15 million in Series A funding and released a new feature on its web application.
DCVC, a deep tech investor in Palo Alto, led the round. Other participants included Comcast Ventures; ex/ante, a venture fund focused specifically on privacy and security tech; the Partnership Fund for New York City; AI/ML investment firm Rackhouse Venture Capital in Menlo Park; and Nat Friedman's AI Grant.
Founded in 2021 by Ben Colman, Ali Shahriyari, and Gaurav Bharaj, Reality Defender is building a detection suite that lets enterprises recognize video, audio, and image deepfakes. "Stop deepfakes before they become a problem," the company's website proclaims, noting that its suite "adopts the latest models, as well as models that have yet to be adopted."
DCVC’s Ali Tamaseb called the detection of fake content “an absolute civil necessity.”
One recent case of bot-driven misinformation came from China, which spread posts across the internet blaming the United States for the Maui wildfires. The country used a network of bot accounts claiming the mass disaster was caused by a secret "weather weapon" being tested by the United States, and the messages were accompanied by AI-generated photos, as reported by The New York Times. Other influence campaigns involving China and Russia have targeted the United States and other nations during times of war and crisis, and advanced generative technology could readily be used for the same purpose.
"While generative AI has already created massive productivity boosts for products and companies, it has also significantly reduced the cost for bad actors to create fake news, media, voice, and even fake organizations to target individuals, institutions, banks and whole societies," Tamaseb said in a release.
"In the face of this dire threat—a whole new cybersecurity category—Reality Defender's best-in-class technology is leading the delivery of an absolute civil necessity: the ability to distinguish between what's real and what isn't," he added.
Reality Defender serves some of the largest organizations and governments, according to a September company statement, providing tools to fight the destabilization of the political world order. But the startup also noted that cases of "weaponized AI-generated media" increasingly occur at the individual level and involve the abuse of personal privacy. Reality Defender is working to provide a solution to counter these attacks as well.
On its website, the company lists among its services AI-generated text detection with 99.82% accuracy, covering text from any platform and returning results in seconds. It also offers fraud-fighting solutions such as user verification with real-time voice scanning, detection of deepfakes in visual media, and protection of platforms against abusive content.
In a statement this week, Reality Defender also announced it has launched Explainable AI on its web platform for clients using Text Detection. The new feature lets users see where AI-generated text has been detected within a scanned document.
Among the startup’s partners are Microsoft, Visa, NBC, and the NATO Strategic Communications Centre of Excellence. In fact, Reality Defender said it helped NATO STRATCOM identify deepfakes and misinformation spread by pro-Kremlin actors on Russian social media.
Currently, Reality Defender is expanding its team and working on adding Explainable AI for audio, video, and image detection, planned for release in the coming months. The company said that select clients already have access to real-time voice deepfake detection, which allows call centers and anti-fraud teams to immediately detect the presence of manipulated or fabricated media.
A few words about the founding team: they know a thing or two about recognizing AI intervention.
CEO Colman comes from Google and Goldman Sachs and is a serial cybersecurity entrepreneur and angel investor. CTO Shahriyari has 20 years of software development under his belt; he served as VP at digital innovation company Originate and director of product at The AI Foundation. Lastly, Reality Defender's Head of R&D Bharaj is also chief science officer at Flawless AI, which makes AI tools for video, such as lip syncing. Previously, he was VP of generative science research at The AI Foundation, where he led the team creating virtual humans.
Images used in part from: Reality Defender, Rawpixel