Demis Hassabis, CEO of the pioneering artificial intelligence lab Google DeepMind, has issued a significant warning about the near-term risks of advanced AI, urging the global tech community to take immediate action. Speaking at the AI Impact Summit in New Delhi, Hassabis said that the emergence of artificial general intelligence (AGI) is no longer a distant possibility.
The AGI Timeline and Imminent Threats
According to Hassabis, AGI—a form of AI with human-like cognitive abilities—could be developed within the next five to eight years. This accelerated timeline brings urgent risks to the forefront, particularly in biosecurity and cybersecurity.
He expressed deep concern that as AI systems become more powerful, they could be misused by “bad actors,” including individuals, organized groups, or even nation-states, to create novel threats that society is unprepared for.
A Call for Proactive Guardrails and Global Standards
To mitigate these potential dangers, Hassabis emphasized the critical need for the global community to establish robust guardrails. He called for an intensified international dialogue aimed at creating global standards for AI safety and ethics.
The primary goal of these measures would be to prevent advanced AI systems from acting in unpredictable or harmful ways. By proactively building safety protocols and fostering international cooperation, the industry can work to ensure that the development of AGI proceeds responsibly.
Implications for the MENA Tech Ecosystem
Hassabis’s warning resonates powerfully within the rapidly advancing MENA tech landscape. With nations like the UAE and Saudi Arabia investing billions to become global AI leaders, the ethical and security considerations of advanced AI are paramount.
For the region’s founders, VCs, and policymakers, this is a direct call to embed safety, ethics, and security into the core of their AI strategies. As MENA continues to build and deploy sophisticated AI models, embracing global standards and participating in the international dialogue on AI safety will be crucial for fostering sustainable innovation and building trust in the technology.
About Google DeepMind
Google DeepMind is a leading AI research laboratory and a subsidiary of Google. The company’s long-term goal is to solve intelligence by developing ever more general and capable problem-solving systems, culminating in artificial general intelligence (AGI). Its research aims to advance science and benefit humanity through safe and responsible AI.
Source: Tech in Asia