AI data centers are becoming ‘mind-blowingly large’


The building of ever more powerful data centers for artificial intelligence, stuffed with more and more GPU chips, is driving those facilities to enormous size, according to the chief executive of Ciena, which makes the fiber-optic networking equipment that cloud computing vendors buy to connect their data centers together.

“Some of these large data centers are just mind-blowingly large, they are enormous,” says Gary Smith, CEO of Hanover, Maryland-based Ciena.

Also: OpenAI’s o3 isn’t AGI yet but it just did something no other AI has done

“You have data centers that are over two kilometers,” says Smith, or more than 1.24 miles. Some of the newer data centers are multi-story, he notes, creating a second dimension of distance on top of the horizontal sprawl.

Smith made the remarks in an interview last week with the financial newsletter The Technology Letter.

Even as cloud data centers grow, corporate campuses are straining to support GPU clusters that keep increasing in size, Smith said.

“These campuses are getting bigger and longer,” he says. The campus, which comprises many buildings, is “blurring the line between what used to be a wide-area network and what’s inside the data center.”

Also: AWS says its AI data centers just got even more efficient – here’s how

“You’re beginning to see these campuses get to quite decent distances, and that is putting massive strain on the direct-connect technology.” 

Smith expects Ciena to start selling fiber-optic equipment in coming years that is similar to what is in long-haul telecom networks, but tweaked to connect GPUs inside the data center. (Image: Ciena)

A direct-connect device is networking gear purpose-built to let GPUs talk to one another, such as Nvidia’s “NVLink” products.
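For a rough sense of why multi-kilometer distances strain those GPU-to-GPU links, consider simple signal propagation delay. The sketch below is not from the article: the two-thirds-of-light-speed figure is a standard approximation for glass fiber, and the 2-kilometer span echoes Smith’s comment above.

```python
# Back-of-the-envelope propagation delay across a 2 km data center span.
# Assumptions (not from the article): light in glass fiber travels at
# roughly two-thirds the vacuum speed of light.

C_VACUUM_M_PER_S = 3.0e8      # speed of light in a vacuum, meters/second
FIBER_VELOCITY_FACTOR = 0.67  # typical velocity factor for optical fiber

def one_way_delay_us(distance_m: float) -> float:
    """One-way signal propagation delay over fiber, in microseconds."""
    return distance_m / (C_VACUUM_M_PER_S * FIBER_VELOCITY_FACTOR) * 1e6

span_m = 2_000  # the "over two kilometers" Smith describes
print(f"One way over {span_m} m: {one_way_delay_us(span_m):.1f} microseconds")
print(f"Round trip: {2 * one_way_delay_us(span_m):.1f} microseconds")
```

At that scale, the wire itself contributes roughly 10 microseconds each way, far more than the sub-microsecond hops that short-reach GPU interconnects are typically designed around, which is one reason long-haul optical techniques start to look attractive indoors.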

Smith’s remarks echo comments by others serving the AI industry, such as Thomas Graham, co-founder of chip startup Lightmatter, who said last month at a Bloomberg Intelligence conference that at least a dozen new AI data centers are planned or under construction that will each require a gigawatt of power to run.

“Just for context, New York City pulls five gigawatts of power on an average day, so, multiple NYCs,” Graham said. By 2026, he added, the world’s AI processing is expected to require 40 gigawatts of power “specifically for AI data centers, so eight NYCs.”
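The arithmetic behind those comparisons is straightforward; this quick sketch simply restates the figures quoted above (the dozen-site count and the one-gigawatt-per-site draw are as Graham described them):

```python
# Restating Graham's comparison using only the figures quoted above.
NYC_AVERAGE_DRAW_GW = 5   # NYC pulls ~5 GW on an average day, per Graham
PLANNED_AI_SITES = 12     # "at least a dozen" planned ~1 GW data centers
AI_DEMAND_2026_GW = 40    # projected AI data center demand by 2026

print(f"Planned sites: ~{PLANNED_AI_SITES} GW, "
      f"about {PLANNED_AI_SITES / NYC_AVERAGE_DRAW_GW:.1f} NYCs")
print(f"2026 projection: {AI_DEMAND_2026_GW} GW, "
      f"or {AI_DEMAND_2026_GW / NYC_AVERAGE_DRAW_GW:.0f} NYCs")
```

Both round numbers check out: a dozen gigawatt-class sites is about 2.4 NYCs of demand, and 40 gigawatts is exactly the eight NYCs Graham cites.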

Also: Global AI computing will use ‘multiple NYCs’ worth of power by 2026, says founder

Smith said that the strain placed on Nvidia’s direct-connect technology means that traditional fiber-optic links, heretofore reserved for long-distance telecom networks, will start to be deployed inside cloud data centers in coming years.

“Given the speed of the GPUs, and the distances that are now going on in these data centers, we think there’s an intersect point for that [fiber optics] technology, and that’s what we’re focused on,” Smith told the newsletter. 
