In a recent interview, Michael Gerstenhaber, VP of Product for Google Cloud's Vertex AI, offered a new framework for understanding the evolution of artificial intelligence. He argues that the race for AI dominance is not a single contest for superior intelligence but is being fought across three distinct frontiers: raw intelligence, response time, and cost-effective scalability. This perspective provides a valuable lens for founders and developers navigating the complex AI landscape.
A Vertically Integrated AI Powerhouse
Gerstenhaber, who previously spent a year and a half at Anthropic, explained that his move to Google was driven by the company’s unique vertical integration. He highlighted Google’s control over the entire AI stack, from building data centers and proprietary chips to developing models and consumer-facing chat interfaces.
“Part of the reason I came here is because I saw Google as uniquely vertically integrated, and that being a strength for us,” he noted, emphasizing the company’s ability to manage everything from the infrastructure layer to the agentic and application layers.
The Three Frontiers of AI Development
While many perceive the AI race as a straightforward competition for superior intelligence, Gerstenhaber reframes the challenge by identifying three distinct frontiers that different models are designed to push.
The first frontier is raw intelligence. “Think about writing code. You just want the best code you can get, doesn’t matter if it takes 45 minutes, because I have to maintain it, I have to put it in production,” he explained. For complex, high-stakes tasks, the absolute quality of the output is the only metric that matters.
The second frontier is latency, or response time. In use cases like customer support, the speed of the answer is as critical as its accuracy. “It doesn’t matter how right you are if it took 45 minutes to get the answer… you want the most intelligent product within that latency budget, because more intelligence no longer matters once that person gets bored and hangs up the phone,” Gerstenhaber stated.
The third and final frontier is cost at scale. This is crucial for applications that must handle massive, unpredictable volumes of data, such as content moderation for social media platforms. “They have to restrict their budget to a model at the highest intelligence they can afford, but in a scalable way to an infinite number of subjects. And for that, cost becomes very, very important.”
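Gerstenhaber's three frontiers can be read as a simple model-selection heuristic: pick the most intelligent model that fits within your latency and cost budgets. The sketch below illustrates that idea; the model names, quality scores, latencies, and prices are entirely hypothetical and do not correspond to any actual Vertex AI models or pricing.

```python
from dataclasses import dataclass

# Hypothetical model profiles -- names and figures are illustrative only,
# not real Vertex AI models or prices.
@dataclass
class ModelProfile:
    name: str
    intelligence: float       # benchmark-style quality score (higher is better)
    latency_seconds: float    # typical response time per request
    cost_per_1k_calls: float  # dollars per thousand requests

CATALOG = [
    ModelProfile("frontier-large", intelligence=0.95, latency_seconds=30.0, cost_per_1k_calls=50.0),
    ModelProfile("balanced-mid",   intelligence=0.85, latency_seconds=3.0,  cost_per_1k_calls=5.0),
    ModelProfile("fast-small",     intelligence=0.70, latency_seconds=0.5,  cost_per_1k_calls=0.5),
]

def pick_model(max_latency=float("inf"), max_cost=float("inf")):
    """Return the most intelligent model within the latency and cost budgets."""
    candidates = [m for m in CATALOG
                  if m.latency_seconds <= max_latency
                  and m.cost_per_1k_calls <= max_cost]
    if not candidates:
        return None
    return max(candidates, key=lambda m: m.intelligence)

# Frontier 1 (raw intelligence): no budget constraints, quality wins.
coding_assistant = pick_model()
# Frontier 2 (latency): answer before the caller hangs up.
support_bot = pick_model(max_latency=5.0)
# Frontier 3 (cost at scale): tight per-call ceiling for moderation volume.
moderation = pick_model(max_cost=1.0)
```

Each use case lands on a different model not because one is "best", but because each maximizes intelligence inside a different budget, which is precisely the framing Gerstenhaber describes.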
The Lag in Agentic AI Adoption
Despite the incredible potential of agentic AI systems demonstrated over the past two years, widespread adoption has been slow. Gerstenhaber attributes this to a lack of mature infrastructure.
“We don’t have patterns for auditing what the agents are doing. We don’t have patterns for authorization of data to an agent,” he said. He pointed out that while the technology is ready, the production environments are not, as enterprises still need to develop the necessary frameworks for governance, compliance, and human-in-the-loop oversight before deploying these powerful tools at scale.
Relevance for MENA Startups
Gerstenhaber’s framework offers a strategic guide for MENA startups building with AI. Rather than simply integrating the most powerful model, founders in the region can now assess their specific needs against these three frontiers to optimize for performance and cost.
A Dubai-based fintech startup, for example, might prioritize a low-latency model for real-time fraud detection, where speed is paramount. Conversely, an e-commerce platform in Riyadh scaling across the GCC may need a cost-effective model for moderating user-generated reviews, balancing intelligence with budget. For deep-tech startups in hubs like KAUST or Masdar City, focusing on models with raw intelligence could be the key to solving complex scientific or engineering problems. This nuanced approach allows MENA businesses to make more informed, capital-efficient decisions when deploying AI solutions tailored to their unique market challenges and opportunities.
About Google Cloud
Google Cloud is the cloud computing service of Google, offering a suite of services for computing, storage, networking, big data, machine learning, and the Internet of Things (IoT). Its Vertex AI platform provides a unified environment for enterprises to build, deploy, and scale machine learning models and AI applications.
Source: TechCrunch


