Tesla has completed the expansion of its Gigafactory in Texas to house a state-of-the-art supercomputer cluster for artificial intelligence. The project includes the installation of 50,000 NVIDIA graphics processors (GPUs) alongside Tesla's own AI hardware, with the goal of advancing its Autopilot technology.
Elon Musk has announced that the supercomputer at the Gigafactory will initially draw 130 megawatts of power, with the potential to scale up to 500 megawatts.
Additionally, Musk is involved in another multi-billion-dollar supercomputer project for his other company, xAI. This new supercomputer, set to be one of the world's largest AI clusters, will feature 100,000 NVIDIA H100 GPUs to support the next version of Grok, the AI chatbot available to premium subscribers of X.
The supercomputer's servers will be supplied by Dell and Supermicro, as confirmed by Dell Chairman and CEO Michael Dell. Plans are in place to upgrade the system to as many as 300,000 NVIDIA B200 GPUs by next summer.
Both supercomputer projects will utilize Supermicro’s cooling solutions, with CEO Charles Liang commending Musk’s decision to incorporate their liquid cooling technology.
Liquid cooling is an emerging method for thermal regulation in data centers that outperforms traditional air-based systems. According to Supermicro, its direct liquid cooling technology can reduce cooling-related energy consumption by up to 89% compared with air cooling. Liang aims to grow liquid cooling's share of data center deployments from less than 1% to more than 30% within a year. The adoption of liquid cooling in Musk's large-scale projects could reshape data center efficiency and environmental sustainability globally.
Both supercomputer projects are currently in the implementation phase, with the xAI project taking priority. Musk has redirected NVIDIA GPUs originally earmarked for Tesla to the xAI project, delaying construction of the Tesla supercomputer. The extent of the delay is not yet clear.