Tesla's in-house supercomputer is something special - but the next will be even better

Tesla's in-house supercomputer has received an additional 1,600 GPUs, a 28% increase on the figure quoted a year ago.

Tesla Engineering Manager Tim Zaman claims this would place the machine 7th in the world by GPU count.

The machine now features a total of 7,360 Nvidia A100 GPUs, which are built specifically for data center servers but use the same Ampere architecture as Nvidia's top-of-the-line GeForce RTX 30-series cards.
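As a quick sanity check, the two figures quoted so far line up: subtracting the newly added GPUs from the current total gives the year-ago count, and the relative growth matches the quoted 28%. Here's a minimal Python sketch; note the prior count of 5,760 is inferred from the article's numbers rather than stated directly:

```python
# Back-of-envelope check of the GPU figures quoted in this article.
# Assumption: the prior count is simply the new total minus the added GPUs.

new_total = 7_360   # A100 GPUs after the latest upgrade
added = 1_600       # GPUs added in this expansion

prior_total = new_total - added          # 5,760 GPUs a year ago (inferred)
increase = added / prior_total * 100     # percentage growth

print(f"Prior count: {prior_total} GPUs")
print(f"Increase: {increase:.1f}%")      # ~27.8%, i.e. the quoted 28%
```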

Tesla supercomputer upgrade

It's likely Tesla needs all the processing power it can get right now. The company is currently working on neural nets, which it uses to process the vast quantities of video data its cars collect.

The latest upgrade may be just the start of Tesla's high-performance computing (HPC) ambitions.

In June 2020, Elon Musk said "Tesla is developing a neural net training computer called Dojo to process truly vast amounts of video data", explaining the planned machine would achieve a performance of over 1 exaFLOPs, which represents one quintillion floating-point operations per second, or 1,000 petaFLOPs.

Performance of over 1 exaFLOPs would place the machine among the most powerful supercomputers worldwide, as only a few current supercomputers have officially broken the exascale barrier, including the Frontier supercomputer at Oak Ridge National Laboratory in Tennessee, United States.

You might even be able to get a job building the new computer. Musk asked his Twitter followers to "consider joining our AI or computer/chip teams if this sounds interesting".

Dojo won't be reliant on Nvidia hardware, however. The planned machine is set to be powered by Tesla's own D1 Dojo Chip, which the carmaker said at its AI Day event will deliver up to 362 TFLOPs per chip.
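For a rough sense of scale, dividing Musk's exaFLOPs target by that per-chip figure suggests how many D1 chips a Dojo build-out would need. This is a hedged back-of-envelope sketch only: it assumes the 362 TFLOPs peak figure adds up linearly across chips, and ignores precision modes, interconnect, and real-world scaling losses:

```python
import math

# Rough estimate: D1 chips needed to reach 1 exaFLOPs, taking the
# article's figures at face value. Assumes peak throughput scales
# linearly across chips, which real deployments never quite achieve.

EXAFLOPS_TARGET = 1e18   # 1 exaFLOPs = one quintillion FLOPs/s = 1,000 petaFLOPs
D1_PEAK_FLOPS = 362e12   # 362 TFLOPs per D1 chip, as quoted at AI Day

chips_needed = math.ceil(EXAFLOPS_TARGET / D1_PEAK_FLOPS)
print(f"~{chips_needed} D1 chips")  # roughly 2,763 chips
```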


Via Tom's Hardware


