Primus Raises $3.5 Million to Advance AI Agent Verification and Compliance
CEO Dr. Xiang Xie details how advances in AI will demand more sophisticated ways of verifying data authenticity
As artificial intelligence (AI) becomes more pervasive, the need to discern fact from fiction and the real from the fabricated will become ever more pressing. Verification and proof of authenticity will be especially needed in the financial technology (fintech) industry, where privacy and protecting user data are paramount.
Helping address concerns around AI agent data validity is Primus, an on-chain data validation company for large language model (LLM) information, which recently raised $3.5 million in seed funding from Dispersion, Symbolic Capital and VanEck Ventures.
“We are very fortunate to have our investors,” Dr. Xiang Xie, co-founder and chief executive officer of Primus, shared with me during a recent interview. “We want to promote our project to the community and collaborate with other companies to explore the possibilities of using our technologies to solve the problems of their portfolios.”
The backing from VanEck is one of the first investments from its new fund; the firm became interested in the privacy and verification space after watching companies like Inco build use cases for the technology.
“Then we met the Primus team, which had developed some of the most cutting-edge tech,” Wyatt Lonergan, general partner at VanEck, told me during a recent interview. “They were a great team of cryptographers, and we could immediately see how their technology could be implemented in real-world use cases.”
Primus is using zero-knowledge proofs – a type of cryptography in which one party can prove to another that a statement is true without revealing anything beyond the statement’s validity – for data validation in the AI agent era. The challenge is verifying that an agent’s output was derived from an LLM and not from a human. One part of the Primus tech ensures the authenticity of agent actions, while another enables confidential decision-making.
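To make the concept concrete, here is a minimal Python sketch of a Schnorr-style zero-knowledge proof of knowledge, a textbook construction behind many ZK systems. Primus has not published its exact scheme, so the parameters and protocol here are purely illustrative:

```python
import hashlib
import secrets

# Toy parameters for illustration only; real deployments use large,
# standardized groups or elliptic curves.
p = 2039  # prime modulus, p = 2q + 1
q = 1019  # prime order of the subgroup of squares mod p
g = 4     # generator of that subgroup

def challenge(*values: int) -> int:
    """Fiat-Shamir: derive a non-interactive challenge from the transcript."""
    data = b"|".join(str(v).encode() for v in values)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove(x: int) -> tuple[int, int, int]:
    """Prove knowledge of x with y = g^x mod p, revealing nothing about x."""
    y = pow(g, x, p)
    k = secrets.randbelow(q)   # one-time random nonce
    r = pow(g, k, p)           # commitment
    c = challenge(g, y, r)     # challenge bound to the transcript
    s = (k + c * x) % q        # response
    return y, r, s

def verify(y: int, r: int, s: int) -> bool:
    """Check the proof without ever learning the secret x."""
    c = challenge(g, y, r)
    return pow(g, s, p) == (r * pow(y, c, p)) % p

secret = secrets.randbelow(q)   # e.g., a private credential
assert verify(*prove(secret))   # verifier is convinced; secret stays hidden
```

The verifier checks a single equation, g^s = r * y^c mod p, which holds exactly when the prover knew x, yet the published values (y, r, s) reveal nothing about x itself.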
That means an AI agent’s activities, such as posting a tweet or initiating a transaction, are performed by the program itself, without human manipulation. It’s an important validation tool for ensuring that the information AI agents use to make decisions comes from the genuine source and wasn’t manipulated by the agent itself.
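As a rough sketch of what authenticating agent actions could look like in code (a hypothetical illustration, not Primus’s protocol), an agent runtime could tag every action with a key that only the program holds, so a verifier can reject anything a human has edited after the fact. A production system would use asymmetric signatures or hardware attestation rather than the shared-key HMAC used here for brevity:

```python
import hashlib
import hmac
import json

# Hypothetical: a key accessible only to the agent runtime (e.g., sealed in
# a trusted enclave). In practice this would be an asymmetric signing key,
# not a shared secret.
AGENT_KEY = b"key-held-by-the-agent-runtime-only"

def sign_action(action: dict) -> str:
    """Tag an action payload so later tampering is detectable."""
    payload = json.dumps(action, sort_keys=True).encode()
    return hmac.new(AGENT_KEY, payload, hashlib.sha256).hexdigest()

def verify_action(action: dict, tag: str) -> bool:
    """Accept the action only if the tag matches the exact payload."""
    return hmac.compare_digest(sign_action(action), tag)

action = {"type": "post_tweet", "text": "gm", "timestamp": 1718000000}
tag = sign_action(action)
assert verify_action(action, tag)                            # untouched: accepted
assert not verify_action({**action, "text": "edited"}, tag)  # altered: rejected
```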
To date, Primus’s tools have been downloaded more than 200,000 times by users who want to securely prove web2 information, a mission Xie has pursued as a longtime cryptographer.
“Around 2022, I was working at a startup in Asia-Pacific on how to design privacy-preserving protocols for data products,” Xie said. “We were thinking about how we could use cryptography to bring verifiable data to web3, and that was the beginning of Primus.”
Lonergan believes Primus is building middleware poised to play a key role in user data privacy as blockchain adoption grows.
“As we move more towards a digital-first infrastructure, you need compliance to be programmed first and foremost,” Lonergan said. “Just because you can move value from wallet A to wallet B doesn’t mean you don’t follow the rules. If we’re already in a digital environment, in our view, you should try to create a system where you’re disclosing as little information as possible and not storing the personal information of those consumers or those businesses.”
The need to make data secure and verifiable will become even more pertinent given the continued rise of disinformation, much of which is derived from non-credible sources. Primus has the potential to provide that credibility through on-chain mechanisms, particularly in the finance sector.
Data authenticity
“Blockchain will finally be used as a settlement layer,” Xie said. “Data verifiability should happen on-chain and can happen through smart contracts, which will reduce settlement times. The same goes for privacy-preserving technology. The main difference between web2 and web3 is that web3 can have much faster settlement speeds.”
As AI continues to evolve and improve, the way it’s processed in the background will need to change too, Xie said. “At the very beginning, giant companies like OpenAI were focused on training very large language models by using public Internet data,” he said. “When we go to customized applications, the general models are not useful enough for them, so they’ll have to use customized data to fine-tune the model and ensure the model is suitable.”
Xie said this gives rise to a major problem: verifying that the information being fed to the language model hasn’t been manipulated.
“We have to verify the source of the data to ensure the information being used is correct,” Xie said. “Usually, these kinds of customized data are owned by a human, which means the data is sensitive. Once the data is fed to the agent, you don’t want to expose your system data like private keys or other sensitive information. Privacy-preserving technology is the key to making sure the whole process works without compromising the human owner’s sensitive data.”
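One simple building block for that kind of guarantee is a salted hash commitment: the data owner publishes only a short digest (on-chain, for instance), and the record an agent later consumes can be checked against it without the data itself ever being exposed. The sketch below illustrates the general idea and is not Primus’s scheme; the record and field names are made up:

```python
import hashlib
import secrets

def commit(data: bytes) -> tuple[bytes, bytes]:
    """Return (salt, digest); publish the digest, keep salt and data private."""
    salt = secrets.token_bytes(16)  # salt stops guessing of low-entropy data
    return salt, hashlib.sha256(salt + data).digest()

def check(data: bytes, salt: bytes, digest: bytes) -> bool:
    """Confirm the data matches the published commitment."""
    return hashlib.sha256(salt + data).digest() == digest

# Hypothetical fine-tuning record owned by a user.
record = b'{"user": "alice", "balance": 1200}'
salt, digest = commit(record)        # the digest alone could live on-chain
assert check(record, salt, digest)   # the agent's input is authentic
assert not check(b"tampered", salt, digest)  # manipulated data is caught
```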
Xie sees privacy-preserving technology as a necessary layer for the expansion of AI agents and customized applications, and for their overall scalability. But it’s not just about the privacy-preserving nature of this layer; it’s also about data authenticity and being able to track the origin, and potential tampering, of information.
“When you’re bringing data into an isolated system, you need to make sure that it belongs to you and is correct,” Xie said. “A lot of people are using AI to solve real-world problems, and it’s only the very beginning of AI agents in web3. I personally believe that in the future, every person will have their own digital agent to solve problems for them. It’s like a digital ‘me,’ a digital Xiang, to help me solve everything, and I think that will be the future we can expect.”
Lead image: Xiang Xie