Anthropic is scaling up its partnership with Google to access as many as one million of the tech giant’s AI chips, worth tens of billions of dollars, as the startup accelerates development of its AI systems in a fiercely competitive market.
Announced in late October, the expanded deal gives Anthropic access to more than one gigawatt of computing capacity, set to come online in 2026. The capacity will be used to train the next generations of the company's Claude models on Google's in-house tensor processing units (TPUs), chips that Google originally developed for its own internal workloads.
Anthropic said it selected Google’s TPUs for their efficiency, price-performance ratio, and the company’s prior experience training and deploying Claude models on the processors.
The agreement reflects the surging global demand for AI chips, as companies race to develop systems capable of rivaling human intelligence. Alphabet-owned Google, whose TPUs are also available through Google Cloud as an alternative to supply-constrained NVIDIA chips, will provide additional cloud computing services to Anthropic.
Rival OpenAI has inked multiple deals, reportedly costing over $1 trillion, to secure roughly 26 gigawatts of computing capacity—enough to power around 20 million U.S. homes. Industry executives estimate that one gigawatt of compute can cost roughly $50 billion. OpenAI is actively using NVIDIA and AMD AI chips to meet growing demand.
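As a rough sanity check of the figures cited here, the following minimal back-of-the-envelope sketch uses only the numbers quoted above; the per-gigawatt cost and the household comparison are rough industry estimates rather than firm figures:

```python
# Back-of-the-envelope check of the figures reported above.
# Assumptions (from the article): roughly $50 billion per gigawatt of compute,
# and roughly 26 gigawatts of capacity secured by OpenAI.

COST_PER_GW_USD = 50e9        # industry estimate cited above
OPENAI_CAPACITY_GW = 26       # reported OpenAI commitments

implied_total = COST_PER_GW_USD * OPENAI_CAPACITY_GW
print(f"Implied OpenAI spend: ${implied_total / 1e12:.1f} trillion")  # ~$1.3 trillion

# The household comparison: 26 GW spread across ~20 million U.S. homes
# implies an assumed average draw of roughly 1.3 kW per home.
avg_kw_per_home = 26e9 / 20e6 / 1e3
print(f"Implied average household draw: {avg_kw_per_home:.2f} kW")
```

At roughly $50 billion per gigawatt, 26 gigawatts works out to about $1.3 trillion, consistent with the "over $1 trillion" reports.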
According to news reports earlier this month, Anthropic expects to more than double, and potentially nearly triple, its annualized revenue run rate next year, driven by the rapid adoption of its enterprise AI products.
The startup emphasizes AI safety and builds models geared toward enterprise use cases; its technology underpins a wave of emerging startups in the AI coding space, including Cursor.