How the Datacenter Power Issue Is Being Addressed
Power: That’s the headline behind the news this week. As OpenAI unveils a raft of new Stargate datacenter sites and NVIDIA pledges $100 billion to help OpenAI build 10 gigawatts’ worth of AI infrastructure, the question persists: How will these buildouts be powered? The current electrical grid won’t be sufficient. What’s the solution?
That question has multiple answers. But first, some context. According to CNBC, 10 gigawatts is enough power to serve 7.5 million homes, or a geographic area the size of New York City. Generating it would require the output of 2.5 state-of-the-art nuclear reactors, 15 large natural gas plants, or about 3,000 wind turbines covering an area the size of Rhode Island. “These are massive numbers and the capacity does not exist right now,” said CNBC anchor Brian Sullivan.
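To see how those comparisons hang together, here’s a minimal back-of-envelope sketch in Python. The inputs are the figures CNBC cites above; the per-unit capacities it prints are simply what those ratios imply, not independent data.

```python
# Back-of-envelope check of the CNBC figures quoted above: divide the
# 10 GW buildout target by each reported equivalent to get the implied
# capacity per unit. All inputs come from the article, not independent data.

TARGET_GW = 10.0

equivalents = {
    "homes served": 7_500_000,
    "nuclear reactors": 2.5,
    "large natural gas plants": 15,
    "wind turbines": 3_000,
}

for unit, count in equivalents.items():
    implied_mw = TARGET_GW * 1_000 / count  # 1 GW = 1,000 MW
    print(f"{unit:>26}: ~{implied_mw:,.4g} MW each")
```

Running this puts each home at roughly 1.3 kW of average demand, each reactor at 4,000 MW, each gas plant at roughly 670 MW, and each wind turbine at about 3.3 MW, which is the arithmetic behind the CNBC comparison.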
Many issues arise: Given the current U.S. administration’s aversion to renewable energy (and its claims that renewables are unreliable), how fast can nuclear power plants be licensed for datacenter use? Currently, it can take up to 10 years to get a nuclear plant up and running.
Apart from nuclear power, natural gas has taken the top spot in fueling datacenters in the U.S. Here, the trend is to go “off grid,” an approach in which gas providers siphon off portions of their main pipelines and dedicate that fuel to specific datacenter customers. This has the advantage of keeping datacenters from competing with other businesses and residences for grid power. On the downside, natural gas remains a carbon-emitting fuel and is generally considered a “bridge” between today’s grid electricity and future renewable energy sources.