In a striking turn of events, AI safety and research company Anthropic's chatbot, Claude, has seen a dramatic surge in popularity, climbing to the number two spot for free apps in Apple's US App Store. The newfound attention appears directly linked to the company's recent and highly public negotiations with the Pentagon over ethical AI safeguards.
The AI chatbot now sits just behind OpenAI’s ChatGPT, which holds the top position, and ahead of Google’s Gemini at number three, placing it firmly among the top contenders in the competitive generative AI space.
A Rapid Ascent in Rankings
According to data from market intelligence firm Sensor Tower, Claude's rise has been swift. At the end of January, the app was ranked outside the top 100. Throughout February, it held a position within the top 20, but its ascent accelerated significantly in the final week, moving from sixth place to fourth, and finally landing at number two on Saturday.
This rapid climb reflects a significant increase in public awareness and user adoption, catalyzed by mainstream news coverage of the company's policy dispute.
The Pentagon Dispute
The surge in interest follows Anthropic’s attempt to negotiate safeguards with the US Department of Defense. The company sought to prevent its AI models from being used for applications such as mass domestic surveillance or fully autonomous weapons systems.
This stance led to a swift government response, with President Donald Trump directing federal agencies to cease using all Anthropic products. Furthermore, Secretary of Defense Pete Hegseth designated the company a supply-chain threat. In the wake of this dispute, competitor OpenAI announced its own agreement with the Pentagon, with CEO Sam Altman stating it includes safeguards similar to those Anthropic had advocated for.
Implications for the MENA AI Landscape
This incident serves as a critical case study for the burgeoning AI ecosystem in the MENA region. As governments across the Gulf, particularly in the UAE and Saudi Arabia, accelerate their national AI strategies and investments, the ethical considerations of AI deployment become paramount.
The public's positive reaction to Anthropic's ethical stance, rewarding it with downloads, suggests that corporate values can translate into market advantage. For MENA-based AI startups, this underscores the importance of establishing clear ethical frameworks, especially when pursuing public sector and defense contracts. The dispute demonstrates that navigating the intersection of technology, ethics, and government policy is now a key factor for success and public perception in the global AI race.
About Anthropic
Anthropic is an AI safety and research company dedicated to building reliable, interpretable, and steerable AI systems. Founded by former members of OpenAI, the company is focused on developing advanced AI models, like Claude, while prioritizing research into the safety and societal impacts of artificial intelligence.
Source: TechCrunch