The old model of securing technology by locking powerful systems away in a lab is obsolete, according to OpenAI co-founder and CEO Sam Altman. In a world now filled with countless AI tools, he argues that the only effective strategy for global defense is to use artificial intelligence to defend against emerging AI-powered threats.
Quick Facts
- Centralized, lab-based safety testing is outdated.
- AI must be the primary tool for global defense.
- Massive compute expansion is needed to prevent inequality.
A New Digital Safety Paradigm
The early approach to AI safety assumed society would only need to manage a few powerful, restricted programs. However, the current reality is a crowded market of both open and closed systems, making it impossible to monitor and control every piece of software.
Altman notes this complex environment requires a shift in thinking. Instead of trying to contain a single program, society must prepare for a scenario where automated tools operate everywhere. This means responsibility for safety can no longer rest with one company but must be distributed across many groups to create a more resilient network.
“The picture [of AI safety] now is actually more stable but more complex,” Altman explains.
Automated Shields, Not Written Rules
In this new environment, written rules and traditional regulations are not enough to stop sophisticated digital and biological attacks. While developers must build safe software, controlled lab tests have clear limits when confronted by real-world adversaries.
The most effective solution, Altman argues, is to build automated defenses using the same advanced AI that powers the threats.
“There’s AI that is really good at exploiting computer systems,” Altman says. “Let’s use AI to defend them.”
By deploying intelligent tools directly into critical infrastructure like national security networks and hospital databases, defense systems can detect, block, and neutralize attacks at machine speed, often before human operators are even aware of a breach.
The Data Center Dilemma
Building these massive automated shields requires an enormous amount of physical hardware. Altman warns that if the computing power needed to run these defenses remains in short supply, it will entrench deep inequality, because only the wealthiest companies and individuals will be able to afford protection.
“The richest people and the richest companies in a world will just sort of bid up the price to a kind of extreme degree,” he notes.
To prevent a handful of highly capitalized companies from monopolizing the technology, Altman stresses that the industry must focus on manufacturing and building more data centers. He likens making compute plentiful and cheap to the mass electrification that raised living standards worldwide.
“More data centers is actually a very egalitarian initiative,” Altman argues.
Relevance for MENA: Sovereign AI and Economic Diversification
Altman’s call for a massive expansion of computing infrastructure resonates strongly with the strategic ambitions of nations across the Middle East. Countries like the UAE and Saudi Arabia are investing billions to build their own data centers and AI ecosystems, viewing sovereign compute capacity as critical for national security and economic independence.
This push is central to economic diversification plans, such as Saudi Vision 2030, which aim to reduce reliance on oil by fostering a domestic tech industry. By controlling their own AI infrastructure, MENA nations can not only deploy the kind of automated defenses Altman describes but also cultivate local talent and homegrown AI solutions without depending on foreign tech giants. The race for compute is not just about security; it’s about shaping the region’s economic future.
Counterpoints: The Soaring Cost of Frontier AI
While Altman presents more data centers as a path to democratizing AI, the economics of building frontier models paint a more complex picture. Stanford’s 2025 AI Index highlights the extraordinary training costs, with estimates putting GPT-4 at roughly US$78 million and Gemini Ultra at about US$191 million in compute alone.
Furthermore, research firm Epoch AI projects that training costs for the largest models could exceed US$1 billion by 2027. This suggests that even with more data centers, the cutting edge of AI development may remain concentrated among a small number of companies that can afford the escalating scale.
Recent studies also challenge the idea that AI tools are seamlessly eliminating technical barriers. A 2025 randomized controlled trial by METR found that experienced developers using frontier AI tools took 19% longer to complete tasks. Similarly, the 2025 Stack Overflow Developer Survey revealed that more developers distrust AI output accuracy (46%) than trust it (33%), indicating that human judgment and technical skill remain essential.
About OpenAI
OpenAI is an AI research and deployment company. Its mission is to ensure that artificial general intelligence (AGI)—AI systems that are generally smarter than humans—benefits all of humanity. The company is known for developing prominent AI models such as GPT-4 and DALL-E.
Source: Tech in Asia