AI or Not: ‘Purveyors of Truth’ Fighting Fake Images Amid Wars & 2024 Elections
With roughly 40 elections in the next 12 months around the world, the Ukraine-Russia and Israel-Hamas wars and widespread cultural unrest, society needs to be able to discern what’s true and what’s fake
After spending years backing promising start-ups as a venture capitalist at AMEX Ventures, Ukrainian-American Anatoly Kvitnitsky had ideas of his own. With the war in his home country heavy in his heart, fond memories of the birthright trip he took to Israel, and his professional background in data, risk and compliance, Kvitnitsky turned his attention from VC to artificial intelligence (AI).
His country and career had conditioned him to look for where things can go wrong, particularly with the hype of new technologies that can hypnotize the masses.
Kvitnitsky met Andrey Doronichev, founder of Optic, in San Francisco’s start-up scene. The pair had a lot to talk about, with the war raging in their home countries and a shared interest in the existential threat of AI.
The genesis of Optic and their product, AI or Not, was authenticating nonfungible token (NFT) images for marketplaces such as OpenSea. A worthy cause, but Kvitnitsky believed there were bigger problems that AI detection tools can and should be solving: misinformation in politics and war, for example.
In October, Kvitnitsky took over as CEO of AI or Not, Optic’s generative AI detection tool, intending to prepare the United States and the world for the onslaught of election misinformation and disinformation to come.
With roughly 40 elections around the world in the next 12 months, the ongoing Ukraine-Russia and Israel-Hamas wars, and widespread cultural unrest, society needs to be able to discern what’s true and what’s fake.
That’s becoming harder every day with AI. Kvitnitsky said usage of AI or Not has increased 100-fold since the Israel-Hamas war started.
Biden’s Executive Order on AI thoughtful but “a little too late”
Kvitnitsky admits he’s an AI optimist, but he said generative AI technology could radically alter how we interact with and use regulatory technology and verification solutions to protect ourselves against fraud.
On October 30, President Biden released a regulatory framework setting safety standards for AI. One of the executive actions requires that companies developing foundation models that pose a national security risk notify the federal government during the training period.
DALL·E and Midjourney have safeguards in place to prevent bad actors from producing things like smear campaigns and misinformation, but Kvitnitsky said it’s the open-source tools we have to worry about.
“For the open-source generative AI models already out there in the wild, this might be a little too late because they’re already out there without guardrails,” Kvitnitsky said.
“Even if regulation goes into effect today, these tools are already being trained on fake documents and images. No one’s going to return to sender. They’re only going to continue to proliferate,” he added.
“Question everything” – images, documents, voice & videos
The high-profile cases of AI-generated scams keep piling up: the voice-cloned video of Barack Obama, the viral MrBeast scam, the Tom Hanks deepfake video. There are also fake identity documents, bank statements, and insurance claim images, and regular people being targeted with AI-generated FaceTime calls that appear to come from their loved ones.
“They only need 20 or 30 seconds of audio to recreate someone’s voice and have them say literally anything,” said Kvitnitsky. Beyond the fear and financial devastation is the psychological impact.
Kvitnitsky pointed to frequency bias, the tendency to notice something more often after encountering it for the first time.
“You can see how dangerous that is with politicians, celebrities, artists, and business leaders,” he said. “Even if you eventually figure out it’s fake, you still have that association. The negative psychological damage has already been done, through no fault of the individual.”
While AI or Not can detect fake voices, images, documents, and soon, video too, Kvitnitsky is zeroing in on detecting AI-generated depictions of political figures to protect the future of democracy.
Kvitnitsky also sees great opportunity in leveraging the blockchain: taking a hash of a confirmed image or identity document and recording it on-chain, where it cannot be altered. While Kvitnitsky remains laser-focused on solving misinformation and helping elevate the compliance space, it’s that ‘perfect’ marriage of the two technologies that he’ll likely explore in the future.
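The core of that idea is simple: a cryptographic fingerprint of a verified file is published to a ledger, and anyone can later recompute the fingerprint to check the file hasn’t changed. The article doesn’t describe Optic’s implementation, so this is only a minimal sketch of the hashing step, using SHA-256 from Python’s standard library; the function name and the on-chain step are illustrative assumptions.

```python
import hashlib

def image_fingerprint(path: str) -> str:
    """Return the SHA-256 hex digest of a file's raw bytes.

    This digest is what a system like the one described could record
    on a blockchain; the on-chain write itself is out of scope here.
    """
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# A verifier later recomputes the digest from the file it received and
# compares it to the recorded value; any alteration, even a single
# byte, produces a completely different hash.
```

Because the digest is deterministic, matching hashes confirm the file is byte-for-byte identical to the one that was originally verified.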
A tool to fight evil
“In the history of technology and innovation, every time a new tool has been created there’s been a dark side to it. With cryptocurrency, there’s the money laundering component. For streaming, we’re seeing what’s happening during the wars and how it’s getting used to broadcast atrocities,” Kvitnitsky said.
“AI is about to have its dark side moment as well. That’s the fight we’re in and we’re just making sure the tools are used for good, not evil.”
Beyond election misinformation and smear campaigns, Kvitnitsky said AI or Not will have a strong focus on biometric Anti-Money Laundering (AML) checks. “AML is to ensure criminals and politically exposed individuals are identified,” he said. “Today this is done by checking names against lists of people, such as those from the Office of Foreign Assets Control (OFAC). We’re developing products to do these checks via biometrics and face detection, in addition to determining whether the image has been tampered with using generative AI or other methods.”
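The list-based screening Kvitnitsky describes amounts to comparing a customer’s name against a watchlist, usually with some tolerance for spelling variation. The toy sketch below shows that shape using Python’s standard-library string matcher; the watchlist entries and the similarity threshold are made up for illustration, and real OFAC screening relies on the official SDN data and far more robust matching.

```python
from difflib import SequenceMatcher

# Hypothetical watchlist entries; real screening uses official
# sanctions data such as the OFAC SDN list.
WATCHLIST = ["Ivan Petrov", "Maria Gonzalez"]

def normalize(name: str) -> str:
    """Lowercase and collapse whitespace so trivial variants match."""
    return " ".join(name.lower().split())

def is_flagged(name: str, threshold: float = 0.85) -> bool:
    """Flag a name whose similarity to any watchlist entry meets the threshold."""
    candidate = normalize(name)
    return any(
        SequenceMatcher(None, candidate, normalize(entry)).ratio() >= threshold
        for entry in WATCHLIST
    )
```

The biometric checks the company is building would replace the name string with a face embedding, but the compare-against-a-list structure is the same.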