New Warning System for Altered Media
X has quietly rolled out a new warning system that automatically labels posts containing doctored images and videos. The “manipulated media” tag appears directly beneath posts that contain intentionally altered visuals designed to deceive users. When people try to share these flagged posts, they also receive a warning prompt in the share menu.
The system links to X’s existing policy page on authenticity and deceptive media, which was published back in April. The timing is notable, coming just as increasingly sophisticated AI tools make it easier than ever to create convincing fake content.
Broader Context of Platform Changes
This update arrives during a period of increased pressure on social platforms to address election-related misinformation. The platform recently released its first formal transparency report since Elon Musk’s acquisition, showing they suspended 5.3 million accounts in the first half of the year. They also removed or labeled 10.7 million posts for various policy violations.
There’s been ongoing criticism about the platform’s approach to harmful content. Advocacy groups have argued the company doesn’t act quickly enough, while Musk has defended the platform’s transparency efforts, particularly pointing to the Community Notes feature as an effective user-driven fact-checking system.
Additional Account Transparency Features
Alongside the manipulated media labels, X also introduced new account transparency features over the weekend. These show users’ location information, account creation dates, and how many times usernames have been changed. The data appears at the top of profiles, with more details available under a “Joined” tab.
X’s Head of Product Nikita Bier described this as “an important first step to securing the integrity of the global town square.” He mentioned they’re developing more features to help users identify credible content. There are privacy considerations built in, though: users in countries with limited free speech can display only their region rather than their specific country, and location updates are delayed and randomized.
Immediate Impact and Reactions
The rollout hasn’t been completely smooth. Bier acknowledged there were “rough edges” and said some information was still under review and would be restored by Tuesday. Already, several accounts have been deleted after users spotted inconsistencies between their listed locations and their claimed origins.
One notable example was an account named “ULTRAMAGA TRUMP2028” that claimed to be based in Washington, DC but was actually listed as located in Africa. These kinds of discrepancies are exactly what the new transparency features are designed to reveal.
It’s worth noting that while these changes represent steps toward greater platform transparency, they’re happening against a backdrop of scaled-back moderation operations and reversed enforcement standards since Musk took over. The effectiveness of these new measures will likely depend on how consistently they’re applied across all types of content and users.