X tightens rules on AI-generated conflict content
X has announced a significant policy shift that will see creators suspended from the platform’s Creator Revenue Sharing Program if they post AI-generated videos of armed conflict without clear disclosure. The suspension period is set at 90 days for first-time violations, with repeat offenses leading to permanent removal from the monetization program.
Nikita Bier, X’s head of product, made the announcement in a post on the platform. He explained that during times of war, access to authentic information becomes particularly critical. Modern AI tools, he noted, have made it remarkably easy to produce misleading material that could potentially influence public perception of ongoing conflicts.
How enforcement will work
The platform plans to use multiple methods to identify violations. According to Bier, enforcement will leverage Community Notes, the platform’s crowdsourced fact-checking system. X will also examine metadata and other signals embedded in the content itself.
I think this approach makes sense, though I wonder about the practical challenges. Community Notes can be helpful, but it’s not infallible. And metadata analysis requires sophisticated detection systems, not least because metadata is easy to strip before upload. Still, it’s a step in what feels like the right direction.
The broader context
This update specifically targets AI-generated combat content posted without disclosure, and it arrives amid ongoing geopolitical tensions in the Middle East. X has been expanding its disclosure policies gradually, trying to ensure that misleading synthetic media doesn’t erode trust in information shared on its platform.
The Creator Revenue Sharing Program itself is interesting. It allows eligible users to earn income based on engagement metrics and subscription interactions. To qualify, creators need to meet criteria such as verified status and engagement thresholds. The payouts are tied to X Premium user engagement rather than traditional ad revenue models.
Some thoughts on implementation
What strikes me is the timing. This policy arrives when AI-generated content is becoming increasingly sophisticated and harder to detect. The 90-day suspension seems substantial enough to deter casual violations, but I’m curious about how they’ll handle borderline cases.
Will there be appeals? What constitutes “clear disclosure” exactly? These details matter, and they’re not fully spelled out in the initial announcement.
Also, the permanent removal for repeat violations—that’s a serious consequence. It shows X is treating this as a significant trust and safety issue, not just a minor policy violation.
Platforms are walking a difficult line these days. They want to encourage creator content and monetization, but they also need to maintain some level of information integrity. This move suggests X is prioritizing the latter when it comes to sensitive conflict situations.
It’s a developing story, and I suspect we’ll see more platforms adopting similar measures as AI generation tools become more accessible. The challenge will be balancing enforcement with not stifling legitimate creative expression. But for war content? Perhaps stricter rules are warranted.
We’ll need to watch how this plays out in practice. Enforcement consistency, detection accuracy, and creator response—all these factors will determine whether this policy actually improves information quality or just creates new problems.