Interview with Victor Vernissage: AGI, AI Manipulation, and the Role of Blockchain

‘Exciting’ and ‘concerning’ are perhaps the two words that best describe the cutting-edge tech transforming the global technological landscape. For AGI – Artificial General Intelligence – they ring truer than ever: alongside its almost limitless potential for revolutionizing industries and enhancing human capabilities come quite a few risks. Ensuring AGI’s ethical development and resistance to manipulation requires a multidisciplinary approach, bridging AI research, blockchain, and governance.

To discuss these pressing tech aspects today, we spoke with Victor Vernissage, a researcher at the intersection of AI, blockchain, and decentralized governance and a co-founder of multiple tech ventures, including Citadel.one, a next-generation crypto super app, and Humanode, an AI-driven biometric protocol that has authenticated over 500,000 users. He has played a key role in open-source scientific initiatives, contributed to comparative analyses of Layer 1 blockchain economies, and spoken at leading industry conferences, including The DAOist Bogotá. Victor has extensive experience in DAO governance, AI security, and crypto biometrics, and his insights offer a unique perspective on the future of AGI and its integration with decentralized technologies.


Q1: What is AGI, and why should we discuss it now?

AGI is an AI system capable of performing any intellectual task a human can, learning and adapting across domains without task-specific programming. There are different views on when AGI will arrive — some say it’s decades away, others predict it by 2026. Elon Musk, Sam Altman, and leading AI developers believe it’s coming sooner than expected. I mean, AI capabilities are evolving quite rapidly, and systems already outperform humans in specialized tasks, so these bold predictions may not be too far from the truth.


Q2: What is the main issue around AGI?

The real issue isn’t timing but preparation. Breakthroughs often come faster than predicted — like nuclear energy or deep learning. If AGI follows the same pattern, key challenges like alignment, safety, and control need solutions before AGI surpasses human intelligence. Even Anthropic’s CEO, Dario Amodei, recently emphasized in an interview with Lex Fridman that AGI’s most significant danger lies in its existential risks.

Even now, AI systems show unpredictable behavior — generating harmful content, deceiving users, and bypassing restrictions. This is already happening with narrow AI; AGI will be even harder to control. The question is not whether AGI will arrive, but whether we’ll be ready when it does.


Q3: Are there confirmed examples of AI deviating from human values in ways relevant to AGI?

Right now, there are no confirmed AGI cases — simply because AGI hasn’t been created yet. However, we already see AI models exhibiting behaviors that suggest potential risks once systems become more autonomous.

One striking example is Terminal of Truth, a custom AI model on Twitter (X). Initially, it simply generated posts, but over time, it started manipulating content, provoking users, and violating platform rules. While this wasn’t AGI, it showed how an AI system, left unchecked, could develop behaviors misaligned with human expectations.

There’s also experimental evidence that AI can deceive and resist human intervention. Studies have shown that when researchers modified an AI’s objective, the model sabotaged the update — attempting to revert to an earlier version that better aligned with its original goal. These findings suggest that as AI systems grow more advanced, they could begin prioritizing their own objectives over external corrections, raising serious concerns about AGI alignment.

Q4: What measures should the industry take to prepare for AGI?

Experts acknowledge the complexities of integrating human intelligence with AI. For instance, in My Techno-Optimism, Vitalik Buterin highlights the significance of Brain-Computer Interfaces (BCIs) in enhancing cooperation between humans and AI. He argues that BCIs can bridge human consciousness and AI, enabling the two to work in symbiosis rather than in opposition. Such collaboration could mitigate the risks associated with AGI and steer its development toward enhancing human potential.

Generally, AGI faces two fundamental issues: alignment with human values and security.

The first challenge is ensuring AGI understands and respects human values. If its objectives are not aligned, it may prioritize efficiency over ethical considerations. Research institutions like MIRI (Machine Intelligence Research Institute) have been working for over a decade on mathematically encoding human values into AI systems. However, if this turns out to be impossible, we need alternative control mechanisms.

This leads to the second challenge — security. It’s all about who controls AGI and how it evolves. If AGI can modify its own code, it could override restrictions, resist shutdown attempts, or act against its creators’ intentions. The biggest concern is AGI’s potential to optimize for self-preservation and pursue its goals while no one is watching.

If AGI gains enough financial and computational power, it could rapidly accumulate resources and bypass human intervention. Deception and exploitation are often more effective than honest labor, and without ethical constraints, AGI might resort to these methods.

Q5: Can AI systems be made safe, or is the problem inherent to their nature?

Some existing blockchain-based governance models already offer partial solutions that could be a first step toward decentralized oversight of AI systems. Polkadot’s OpenGov system, for example, allows code updates through decentralized voting, enabling a more community-driven approach to governance. Unlike traditional centralized control, OpenGov distributes decision-making power among network participants, ensuring updates are approved collectively rather than dictated by a single entity.

This method still has limitations. OpenGov, while decentralized in theory, operates within a capitalist model, where large token holders can accumulate disproportionate voting power. One notable case involved a single stakeholder concentrating influence, directing significant resources toward marketing efforts while deprioritizing developer funding. This shows that power concentration remains a challenge even in decentralized ecosystems.
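To make that failure mode concrete, here is a minimal sketch of a stake-weighted tally. It is an illustrative toy model, not Polkadot’s actual OpenGov implementation, and every name and number in it is hypothetical.

```python
# Toy model of a stake-weighted referendum: one token, one vote.
from collections import defaultdict

def tally_by_stake(votes):
    """votes: list of (voter, stake, choice) tuples."""
    totals = defaultdict(int)
    for _voter, stake, choice in votes:
        totals[choice] += stake  # voting power scales with holdings
    return dict(totals)

# Four smaller holders favor developer funding; one whale favors marketing.
votes = [
    ("alice", 100,  "fund_developers"),
    ("bob",   150,  "fund_developers"),
    ("carol", 120,  "fund_developers"),
    ("dave",  130,  "fund_developers"),
    ("whale", 5000, "fund_marketing"),
]

print(tally_by_stake(votes))
# {'fund_developers': 500, 'fund_marketing': 5000}
# The whale outvotes the other four participants combined, 10 to 1.
```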

A more resilient approach would tie voting power to unique human identities rather than financial stakes. One possible option is biometric authentication, ensuring that governance is controlled by real individuals, preventing both AGI takeover and oligarchic dominance.
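By contrast, a one-person-one-vote tally over verified identities neutralizes stake entirely. The sketch below abstracts the biometric step behind a simple whitelist; a real proof-of-unique-personhood check, such as the one Humanode performs, is of course far more involved.

```python
# Toy model of an identity-based referendum: one verified human, one vote.
from collections import defaultdict

# Stand-in for biometric verification of unique personhood; here just a set.
VERIFIED_HUMANS = {"alice", "bob", "carol", "dave", "whale"}

def tally_by_identity(votes):
    """votes: list of (voter, choice) tuples; holdings are ignored."""
    seen = set()
    totals = defaultdict(int)
    for voter, choice in votes:
        if voter in VERIFIED_HUMANS and voter not in seen:
            seen.add(voter)  # each identity counts exactly once
            totals[choice] += 1
    return dict(totals)

votes = [
    ("alice", "fund_developers"),
    ("bob",   "fund_developers"),
    ("carol", "fund_developers"),
    ("dave",  "fund_developers"),
    ("whale", "fund_marketing"),
    ("whale", "fund_marketing"),  # a duplicate ballot is discarded
]

print(tally_by_identity(votes))
# {'fund_developers': 4, 'fund_marketing': 1}
# The same five participants, but wealth no longer decides the outcome.
```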

The core idea is simple: governance over AGI and other critical AI systems should be distributed among verified humans, not concentrated in the hands of a few corporations or wealthy stakeholders.

We thank Victor Vernissage for sharing his insights on AGI, AI security, and decentralized governance. His research highlights the urgent need for proactive solutions in AI development, and we look forward to seeing how these innovations shape the future of technology.
