AI expert discusses the gap between democratic ideals and current AI development
Ben Goertzel, the AI researcher and thinker, recently shared some pretty sobering thoughts about where we stand with artificial intelligence governance. In an interview, he pointed out something that many of us might feel but don’t always say out loud: democratic AI governance is more of a fragile ideal than a current reality.
That’s a tough pill to swallow, I think. We talk about wanting democratic, transparent systems, but Goertzel suggests our current geopolitical situation makes true global coordination pretty unlikely. He’s not saying we should give up on the idea, but rather that we need to be realistic about what’s actually possible right now.
From tools to moral actors
One of the more interesting parts of the conversation was about when AI transitions from being just a tool to becoming something more—a moral actor. Goertzel suggests this happens when AI starts making decisions based on its own understanding of right and wrong, not just following our instructions. You’d see signals like persistent internal goals, learning driven by its own experience, and behavior that stays coherent over time without constant human steering.
Until then, he says, today’s systems are still tools with guardrails. But once we create a genuinely self-organizing, autonomous mind, our ethical relationship with it has to change. Treating it only as an object wouldn’t make sense anymore.
The training problem
Goertzel keeps coming back to the idea that how we train AI today shapes its future behavior. He’s worried that if models are trained on biased or narrow data, or in closed systems where only a few people make decisions, we’ll just lock in existing inequalities and harmful power structures.
“To prevent this,” he says, “we need more transparency, wider oversight, and clear ethical guidance right from the start.” It’s not something we can fix later, apparently. The foundations matter.
Decentralization as a safety feature
Here’s where things get counterintuitive. Goertzel argues that accelerating toward decentralized AGI might actually be safer than sticking with today’s proprietary, closed systems. Critics often call for slowing down or centralizing control, but he thinks they’re underestimating the risks of concentrating power in a few hands.
“Slowing down and centralizing control doesn’t just reduce danger,” he explains, “it locks one narrow worldview into the future of intelligence.” Decentralized development, on the other hand, creates diversity, resilience, and shared oversight.
Teaching morality, not programming it
One of the more challenging ideas Goertzel presents is about how to encode moral understanding without simply hard-coding human values. He doesn’t want a system that just recombines what it was fed. He wants one that can develop its own understanding from its own trajectory in the world.
“Moral understanding would come from that same process,” he suggests, “modelling impact, reflecting on outcomes, and evolving through collaboration with humans. Not obedience to our values, but participation in a shared moral space.”
That’s the difference, he says, between a tool with guardrails and a partner that can actually learn why harm matters.
Looking ahead
When asked about what success or failure would look like in 10 to 20 years, Goertzel paints two very different pictures. Success would mean living alongside systems more capable than us in many domains, yet integrated into society with care, humility, and mutual respect. We’d see real benefits for human wellbeing without losing our moral footing.
Failure, on the other hand, would look like AGI concentrated in closed systems, driven by narrow incentives, or treated only as a controllable object until it becomes something we fear. It would mean loss of trust, loss of agency, and a shrinking of our empathy rather than an expansion of it.
Perhaps the most telling part is his answer about incentives. Right now, he notes, the incentive structure rewards speed, scale, and control. Compassion won’t win by argument alone—it needs leverage. Technically, that means favoring open, decentralized architectures. Socially, it means funding, regulation, and public pressure that reward long-term benefit over short-term dominance.
“In short,” he concludes, “compassion has to become a competitive advantage. Until then, it stays a nice idea with no power.” That’s a pretty stark assessment of where we are, and maybe a useful challenge for where we need to go.