TheCryptoUpdates

Altman Accuses Anthropic of Fear-Based Marketing for Claude Mythos

OpenAI CEO Sam Altman has accused Anthropic of using “fear-based marketing” to promote its new AI model, Claude Mythos. In a recent episode of the Core Memory podcast hosted by tech journalist Ashlee Vance, Altman argued that Anthropic’s approach is designed to keep powerful AI in the hands of a select few. He suggested that by highlighting the model’s potential dangers, Anthropic can justify restricting access to it. “You can justify that in a lot of different ways, and some of it’s real, like there are going to be legitimate safety concerns,” Altman said. “But if what you want is like ‘we need control of AI, just us, because we’re the trustworthy people’, I think fear-based marketing is probably the most effective way to justify that.”

Altman compared the situation to selling a bomb shelter: “It is clearly incredible marketing to say: ‘We have built a bomb. We are about to drop it on your head. We will sell you a bomb shelter for $100 million. You need it to run across all your stuff, but only if we pick you as a customer.'” He acknowledged that balancing AI’s new capabilities with OpenAI’s belief in accessible technology isn’t always easy. The remarks come as Claude Mythos, revealed last month, has drawn intense scrutiny from researchers, governments, and the cybersecurity industry. Tests suggest the model can autonomously identify software vulnerabilities and execute complex cyber operations. Anthropic distributes it only to a limited set of organizations through a restricted program called Project Glasswing, which includes companies like Amazon, Apple, and Microsoft.

The Divide Over AI Deployment

The rollout of Mythos reflects a broader split in the AI industry. Some companies, like Anthropic, advocate for controlled access to powerful systems, citing safety concerns. Others, like OpenAI, argue for wider distribution to accelerate innovation and understanding. Early this month, Mythos identified hundreds of vulnerabilities in Mozilla’s Firefox browser during testing, and it has also demonstrated the ability to carry out multi-stage cyberattack simulations. Anthropic has framed the model as both a defensive breakthrough and an offensive risk. It has also committed resources to supporting open-source security efforts, arguing that defenders should benefit from the technology before it becomes widely available.

However, security experts warn that the same capabilities that allow Mythos to identify vulnerabilities could also be used to exploit them at scale. Tests by the UK’s AI Security Institute found the model could autonomously complete complex cyber operations. The model has also exposed limitations in existing AI evaluation systems, with Anthropic acknowledging that many current cybersecurity benchmarks are no longer sufficient to measure the capabilities of its latest system. Notably, a group of researchers claimed last week that they were able to reproduce Mythos’ findings using publicly available models, raising questions about just how exclusive the technology really is.

Government Interests and Predictions

Despite calls within parts of the U.S. government to halt the use of Mythos over concerns about its potential applications in warfare and surveillance, the National Security Agency has reportedly begun testing a preview version on classified networks. On prediction market Myriad, users put a 49% chance on Claude Mythos being released to the wider public by June 30. Altman suggested that rhetoric around highly dangerous AI systems may increase as capabilities improve, but argued that not all claims should be taken at face value. “There will be a lot more rhetoric about models that are too dangerous to release. There will also be very dangerous models that will have to be released in different ways,” he said. “I’m sure Mythos is a great model for cybersecurity but I think we have a plan we feel good about for how we put this kind of capability out into the world.”

Altman also dismissed suggestions that OpenAI is scaling back its infrastructure spending, saying the company would continue expanding its computing capacity despite shifting narratives. “I don’t know where that’s coming from… people really want to write the story of pulling back,” he said. “But very soon it will be again, like, ‘OpenAI is so reckless. How can they be spending this crazy amount?'” As the debate over AI safety and accessibility continues, both companies are positioning themselves for the future, each with its own vision of what responsible AI development looks like.



Kshitij Chitransh