In the midst of the 2024 U.S. election campaign, a deepfake video falsely alleging voter fraud proliferated across social media platforms. In healthcare, biased data has skewed AI outcomes, putting patient care in jeopardy. Opaque algorithms have destabilized markets, clouded decision-making processes, and instigated a loss of faith in financial systems. As such, the risks associated with AI are becoming increasingly apparent, and its shortcomings are leading to a decline in public confidence.
Charles Adkins, CEO of the HBAR Foundation and former President of Hedera Hashgraph, LLC, advocates for a governance system that ensures AI serves humanity rather than causing harm. But the scale and complexity of AI development have outgrown what human oversight alone can manage. This is where Distributed Ledger Technology (DLT) comes in: a decentralized system that records and verifies data across multiple nodes, bringing transparency, accountability, and integrity to AI. This fosters trust, prevents monopolistic control, and encourages ethical innovation.
One of the main issues with AI is its tendency to operate like a black box, with hidden data and logic obscuring how decisions are made. That opacity is particularly damaging in sectors such as healthcare and finance, where accountability is paramount. DLT changes this by recording all data and model updates on an immutable ledger, ensuring every change is traceable.
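The traceability an immutable ledger provides can be illustrated with a minimal sketch: each entry hashes the previous one, so any retroactive edit breaks the chain and is detectable. This is a simplification for illustration only; production DLT networks such as Hedera add consensus and replication across many nodes, and the class and field names here are hypothetical.

```python
import hashlib
import json

class AuditLedger:
    """Toy append-only ledger: each entry commits to the previous
    entry's hash, making retroactive tampering detectable."""

    def __init__(self):
        self.entries = []

    def record(self, event: dict) -> str:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        payload = json.dumps(event, sort_keys=True)
        entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        self.entries.append({"event": event, "prev": prev_hash, "hash": entry_hash})
        return entry_hash

    def verify(self) -> bool:
        """Re-walk the chain; any edited entry breaks the hash links."""
        prev = "0" * 64
        for e in self.entries:
            payload = json.dumps(e["event"], sort_keys=True)
            if e["prev"] != prev:
                return False
            if e["hash"] != hashlib.sha256((prev + payload).encode()).hexdigest():
                return False
            prev = e["hash"]
        return True

ledger = AuditLedger()
ledger.record({"step": "training-data-added", "dataset": "patients_v2"})
ledger.record({"step": "model-updated", "version": "1.3"})
assert ledger.verify()

# Quietly rewriting history is caught on the next verification pass:
ledger.entries[0]["event"]["dataset"] = "patients_v1"
assert not ledger.verify()
```

The key property is that trust does not rest on any single record-keeper: anyone holding a copy of the chain can re-verify it independently.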
ProveAI is an example of a platform that utilizes DLT to secure and track AI training data and updates, thereby ensuring compliance with ethical standards and pertinent regulations such as the EU AI Act. This approach holds AI models accountable and lays the groundwork for trust and fairness in their outcomes.
Poor data quality is another persistent issue in AI development. A 2024 survey by Precisely found that 64% of businesses consider AI unreliable due to unverified or biased data. By anchoring real-time data to decentralized networks, DLT helps ensure data is accurate, transparent, and immutable.
Platforms like Fetch.ai and Ocean Protocol are already demonstrating the potential of this innovation. Fetch.ai uses oracles to access real-time external data, optimizing logistics and energy efficiency across the Web3 ecosystem. Ocean Protocol facilitates secure tokenized data sharing, allowing AI systems to access high-quality datasets while protecting user privacy.
Verification platforms are also instrumental in combating misinformation, particularly deepfakes. Ofcom recently reported that 43% of people aged 16 and older encountered at least one deepfake online in the first half of 2024. Truepic tackles this by combining blockchain with image authentication, timestamping and verifying media at the moment of capture.
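The idea behind capture-time verification can be approximated in a few lines: hash the media when it is created, record the hash with a timestamp somewhere tamper-evident, and later check any candidate file against that record. The sketch below uses hypothetical function names; Truepic's actual pipeline is proprietary, and in practice the record would be anchored on a blockchain rather than held in memory.

```python
import hashlib
import time

def fingerprint(media_bytes: bytes) -> dict:
    """Hash media at the moment of capture and timestamp it.
    (In a real system this record is anchored on-chain.)"""
    return {
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
        "captured_at": time.time(),
    }

def is_authentic(media_bytes: bytes, record: dict) -> bool:
    """Verify a candidate file against its capture-time record:
    any post-capture alteration changes the hash."""
    return hashlib.sha256(media_bytes).hexdigest() == record["sha256"]

original = b"...raw image bytes..."
record = fingerprint(original)

assert is_authentic(original, record)            # unmodified media
assert not is_authentic(original + b"x", record) # altered media fails
```

A deepfake, by definition, has no matching capture-time record, so provenance checking shifts the question from "can we spot the fake?" to "can this file prove where it came from?".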
However, centralized governance models often struggle to manage the rapid pace, complexity, and ethical challenges of AI development. Precisely’s global survey revealed that 62% of organizations see inadequate governance as a major obstacle to AI adoption. Decentralized Autonomous Organizations (DAOs), powered by DLT, may provide a solution by automating governance and decision-making through smart contracts.
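The voting logic a DAO smart contract encodes can be sketched as a toy model: token-weighted votes on proposals that pass automatically once quorum and a majority are reached. All names here are illustrative; real DAOs run this logic on-chain (commonly as Solidity contracts), where execution cannot be overridden by any single party.

```python
class GovernanceDAO:
    """Toy model of token-weighted DAO voting with a quorum rule."""

    def __init__(self, token_balances: dict, quorum: float = 0.5):
        self.balances = token_balances
        self.total_supply = sum(token_balances.values())
        self.quorum = quorum  # minimum share of tokens that must vote
        self.proposals = {}

    def propose(self, proposal_id: str, description: str):
        self.proposals[proposal_id] = {
            "description": description, "yes": 0, "no": 0, "voters": set(),
        }

    def vote(self, proposal_id: str, voter: str, support: bool):
        p = self.proposals[proposal_id]
        if voter in p["voters"]:
            raise ValueError("already voted")  # one vote per member
        p["voters"].add(voter)
        weight = self.balances.get(voter, 0)  # vote weight = token holdings
        p["yes" if support else "no"] += weight

    def passed(self, proposal_id: str) -> bool:
        """A proposal passes if turnout meets quorum and yes outweighs no."""
        p = self.proposals[proposal_id]
        turnout = (p["yes"] + p["no"]) / self.total_supply
        return turnout >= self.quorum and p["yes"] > p["no"]

dao = GovernanceDAO({"alice": 40, "bob": 35, "carol": 25})
dao.propose("AIP-1", "Require audit logs for all model updates")
dao.vote("AIP-1", "alice", True)
dao.vote("AIP-1", "bob", True)
dao.vote("AIP-1", "carol", False)
assert dao.passed("AIP-1")  # 75 yes vs 25 no, full turnout
```

Because the rules are code rather than policy documents, enforcement is automatic: a proposal that fails quorum simply never executes, with no administrator needed to decide.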
As AI increasingly relies on cross-border data, secure and transparent systems like DLT will be crucial to building trust. Governments, enterprises, and civil society must collaborate to develop governance frameworks that prioritize the public interest. DAOs must also evolve to provide flexible, collective oversight as AI technology progresses.
The future of ethical AI hinges on decisive action today. DLT can provide the foundation for this future—transparent, accountable, and aligned with humanity’s best interests.