Google just made a significant move in the AI infrastructure arms race, elevating Amin Vahdat to chief technologist for AI infrastructure, a newly created position reporting directly to CEO Sundar Pichai, according to an internal memo first reported by Semafor. It's a sign of just how critical this work has become as Google pours as much as $93 billion into capital expenditures by the end of 2025, a figure that parent company Alphabet expects will be considerably larger next year.
Vahdat isn't new to the game. The computer scientist, who holds a PhD from UC Berkeley and started as a research intern at Xerox PARC back in the early '90s, has been quietly building Google's AI backbone for the past 15 years. Before joining Google in 2010 as an engineering fellow and VP, he was an associate professor at Duke University and later a professor and SAIC Chair at UC San Diego. His academic credentials are formidable, with what appears to be around 395 published papers, and his research has always centered on making computers work more efficiently at massive scale.
Vahdat already maintains a high profile at Google. Just eight months ago, at Google Cloud Next, he unveiled the company's seventh-generation TPU, called Ironwood, in his role as VP and GM of ML, Systems, and Cloud AI. The specs he rattled off at the event were staggering, too: over 9,000 chips per pod delivering 42.5 exaflops of compute, more than 24 times the power of the world's No. 1 supercomputer at the time, he said. "Demand for AI compute has increased by a factor of 100 million in just eight years," he told the audience.
Behind the scenes, as noted by Semafor, Vahdat has been orchestrating the unglamorous but essential work that keeps Google competitive, including those custom TPU chips for AI training and inference that give Google an edge over rivals like OpenAI, as well as Jupiter, the super-fast internal network that allows all of its servers to talk to one another and move massive amounts of data around. (In a blog post late last year, Vahdat said that Jupiter now scales to 13 petabits per second, explaining that's enough bandwidth to theoretically support a video call for all 8 billion people on Earth simultaneously.) It's the invisible plumbing connecting everything from YouTube and Search to Google's massive AI training operations across hundreds of data center fabrics worldwide.
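That video-call framing holds up to a quick back-of-the-envelope check. The sketch below is illustrative only: the 13 Pb/s figure comes from Vahdat's post, while the 1-2 Mbps per-stream video rate is an assumed typical value, not something Google has stated.

```python
# Back-of-the-envelope check of the Jupiter bandwidth claim.
JUPITER_BPS = 13e15        # 13 petabits per second, per Vahdat's blog post
WORLD_POPULATION = 8e9     # roughly 8 billion people

per_person_bps = JUPITER_BPS / WORLD_POPULATION
print(f"Per-person share: {per_person_bps / 1e6:.2f} Mbps")  # ~1.6 Mbps

# A typical HD video call needs roughly 1-2 Mbps per stream (assumed figure),
# so ~1.6 Mbps per person is indeed in video-call territory.
```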
Vahdat has also been deeply involved in the ongoing development of Borg, Google's cluster management software that acts as the brain coordinating all of the work happening across its data centers, deciding which servers should run which tasks, when, and for how long. And he has said he oversaw the development of Axion, Google's first custom Arm-based general-purpose CPUs designed for data centers, which the company unveiled last year and continues to build.
In short, Vahdat is central to Google's AI story.
Indeed, in a market where top AI talent commands astronomical compensation and constant recruitment attempts, Google's decision to elevate Vahdat to the C-suite may also be about retention. When you've spent 15 years building someone into a linchpin of your AI strategy, you make sure they stay.