Detecting AI fakes amid authenticity crisis

Anna Vod · November 15, 2023 · Short URL: https://vator.tv/n/576d

There is a notion that humanity released AI for public use well before the world was ready for it

Even before the sudden rise of generative artificial intelligence, disinformation and fake campaigns had been used as weapons threatening the world order. Now generative AI, paired with the smartphone rooted in every hand, makes the spread of fake news much cheaper and much faster.

“Is anything still true?” asks The Wall Street Journal’s Christopher Mims in an article exploring how to discern true from false when our own mobile phones can produce photos and audio that seem authentic. The spread of fake content ultimately leads to the fall of institutions, even the most expert among them, Mims writes. And all the while, big corporations, driven by competition, are touting AI-made imagery and making it widely accessible.

Over the past couple of weeks, Adobe Stock has come under fire after AI-generated images it sold were used by various news media without being labeled as such. The images depicted clouds of black smoke rising over Gaza after an explosion.

VentureBeat reported on the matter, citing a response from an Adobe Stock spokesperson, who said the marketplace requires proper labeling of machine-made content and that the company is working with stakeholders in its fight against misinformation.

Backing up a bit: generative AI was able to create these images because the technology learns patterns from real content and uses them to produce entirely new, high-quality content. This means you can upload a video of your boss and have him twerk in the office, or have him say things he would never say.

This couldn't have happened with traditional AI models, which could only return analysis-based predictions and refine existing operations. The difference between generative and traditional AI is that the former can generate, or create, something synthetic that didn't exist previously.
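
To make that distinction concrete, here is a toy sketch in Python. It bears no resemblance to the deep neural networks behind real image and video generators; it only illustrates the split described above: a traditional, discriminative model maps an input onto a fixed set of answers, while a generative model learns patterns from real content and samples brand-new sequences from them. All names here are invented for illustration.

```python
# Toy contrast between "traditional" (discriminative) and generative AI.
# Purely illustrative; real generators are deep neural networks.
import random
from collections import defaultdict

def classify_length(text: str) -> str:
    """Traditional-style model: maps input to one of a fixed set of labels."""
    return "long" if len(text.split()) > 10 else "short"

def train_bigram_model(corpus: str) -> dict:
    """Learns patterns from real content: which word tends to follow which."""
    model = defaultdict(list)
    words = corpus.split()
    for current_word, next_word in zip(words, words[1:]):
        model[current_word].append(next_word)
    return model

def generate(model: dict, seed: str, length: int = 12) -> str:
    """Samples a brand-new sequence that never appeared verbatim in the corpus."""
    word, output = seed, [seed]
    for _ in range(length):
        followers = model.get(word)
        if not followers:
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

corpus = ("the model learns patterns from real content and "
          "the model then produces new content from those patterns")
print(classify_length(corpus))                      # discriminative: a label
print(generate(train_bigram_model(corpus), "the"))  # generative: novel text
```

Scaled up by many orders of magnitude, that same learn-then-sample loop is what lets a model put words in your boss's mouth.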

Currently, Adobe is working on “content credentials” technology, which records a photo’s authorship and any alterations made to it; this information would travel with the image and be accessible to future viewers under a special symbol.

“You don’t need to believe everything, right?” Adobe’s chief trust officer, Dana Rao, told the WSJ in early November. “I’ll see that symbol, I’ll be able to click on it, and be able to decide for myself whether or not to believe it. It’s a chain of trust built from the very first place the image is captured to where it gets published.”
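
To picture the “chain of trust” Rao describes, here is a minimal sketch of a provenance chain in Python. It is an assumption-laden illustration, not Adobe’s implementation: the real Content Credentials system is built on the C2PA standard and uses certificate-based cryptographic signatures, while this toy version only links hashes. All field names are hypothetical.

```python
# Minimal, hypothetical provenance chain in the spirit of Content Credentials.
# The real system (built on the C2PA standard) uses certificate-based
# signatures; this toy version only links hashes. All field names invented.
import hashlib
import json

def record_step(chain: list, actor: str, action: str, image_bytes: bytes) -> list:
    """Appends an entry that commits to the image state and the prior entry."""
    entry = {
        "actor": actor,                      # who captured or edited the image
        "action": action,                    # e.g. "captured", "cropped", "AI-generated"
        "image_hash": hashlib.sha256(image_bytes).hexdigest(),
        "prev_hash": chain[-1]["entry_hash"] if chain else "",
    }
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    chain.append(entry)
    return chain

def verify(chain: list, image_bytes: bytes) -> bool:
    """What tapping the symbol would do: check the chain links and the image hash."""
    prev = ""
    for entry in chain:
        body = {k: v for k, v in entry.items() if k != "entry_hash"}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev_hash"] != prev or recomputed != entry["entry_hash"]:
            return False
        prev = entry["entry_hash"]
    return chain[-1]["image_hash"] == hashlib.sha256(image_bytes).hexdigest()

photo = b"...raw pixels..."
chain = record_step([], "camera", "captured", photo)
chain = record_step(chain, "news desk", "cropped", photo)
print(verify(chain, photo))        # True: chain intact, image matches
print(verify(chain, b"tampered"))  # False: image no longer matches its record
```

The point of the design: any alteration is either recorded in the chain or breaks verification, which is what a viewer tapping the symbol would check.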

Obviously, this does not resolve the broader concern over authenticity, especially for those consuming quick info without stopping to tap on a credentials symbol.

Less than a year after the release of ChatGPT for public use, the abundance of generative AI across the board, in both textual and visual forms, has become troubling. As an observer of innovative technology for Vator, I come across ever-increasing uses of the machine, deployed by startups in all kinds of applications – and far from all of them address the risks.

Meanwhile, as generative AI-powered technology startups multiply, others ride the wave in the opposite direction – the detection of deepfakes. I recently wrote about Reality Defender, which does just that: recognizing the AI touch in documents, audio, and video for enterprises and government agencies. There are others, of course, like GPTZero for text, which offers a free version for individual use. For detecting AI in images, there is AI or Not, also with a limited free option.
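
For a sense of how such detectors are typically consumed, here is a hedged sketch: most of these services expose an HTTP API that takes content and returns a score. The endpoint, request fields, and response shape below are invented for illustration; each vendor’s actual API differs, so consult its documentation.

```python
# Hypothetical sketch of how a detection service is typically consumed:
# POST the content, get back a score. The endpoint, request fields, and
# response shape below are invented; each vendor's real API differs.
import requests

def detect_ai_text(text: str, api_key: str) -> float:
    """Returns the service's estimate (0.0-1.0) that the text is machine-made."""
    response = requests.post(
        "https://api.example-detector.com/v1/detect",  # placeholder URL
        headers={"Authorization": f"Bearer {api_key}"},
        json={"document": text},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["ai_probability"]           # hypothetical field

# Usage (with a real vendor's endpoint and key):
# score = detect_ai_text("Paragraph to check.", api_key="YOUR_KEY")
# print(f"Estimated probability of AI authorship: {score:.0%}")
```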

Alarmingly, this market in turn paves the way for technologies that bypass AI detection. I didn’t expect to come across such solutions when I first searched (not too tech-savvy myself, I’m afraid), but they’re out there, advertising 100% success.

Among these services is Phrasly. It offers to “humanize AI content” to bypass several detectors, including the aforementioned GPTZero, and seems to target students and writers. It has also attracted some big partners, including Alphabet, Oracle, Walmart, and Amazon, according to its website. Naturally, there are other platforms of the sort, like Conch, Aiseo, and WriteHuman. No promo intended.

I also stumbled upon a recent blog highlighting the importance of bypassing AI detectors, discreetly authored by the AIContentfy team. The team’s key point: exercising freedom of speech by evading being “erroneously” blocked on social media. “This is especially crucial in situations where individuals or groups are sharing legitimate information or expressing unpopular opinions that may be at odds with prevailing norms or political ideologies,” the blog proclaims, and goes on to list some techniques.

If you ask me, I don’t see why the personal expression of unpopular opinions needs machine input in the first place. If you’re writing about something you strongly believe in – just do the work and write it yourself. Look at all the radical manifestos that made history – there’s something utterly disturbing about releasing a machine-made manifesto and expecting humans to be its followers.

Early 20th-century Futurists would perhaps have loved this idea. But the artistic movement, which celebrated speed, the power of technology, dynamism, youth, and violence, waned at the start of the First World War. It took real violence to stop people from seeing beauty in violence. I do hope that a third world war will remain contained in clusters rather than blow up into an all-encompassing conflict. And that people will just leave AI out of their arsenal.
