TSMC has begun production of Tesla's Next Generation Dojo Chip.
The core of Tesla’s Dojo supercomputer lies in its training modules, in which 25 D1 chips are arranged in a 5×5 matrix. Manufactured on a 7-nanometer process node, each chip holds 50 billion transistors and delivers 362 TFLOPS of processing power.
FYI, a teraflop (TFLOP) is the ability to process one trillion floating-point operations per second.
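A quick back-of-the-envelope calculation ties these figures together: at 25 chips per module and 362 TFLOPS per chip, one training module comes out to roughly 9 PFLOPS. This is just a sketch using the numbers quoted above; real-world throughput depends on precision and workload.

```python
# Rough check of the per-module numbers cited above.
# Figures from the article: 25 D1 chips per training module,
# 362 TFLOPS per chip (1 TFLOP = one trillion floating-point ops/sec).
chips_per_module = 25        # 5 x 5 grid of D1 chips
tflops_per_chip = 362

module_tflops = chips_per_module * tflops_per_chip
module_pflops = module_tflops / 1_000  # 1 PFLOP = 1,000 TFLOPS

print(f"Per-module compute: {module_tflops} TFLOPS (~{module_pflops:.2f} PFLOPS)")
```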
According to TSMC, the approach for Tesla’s new product differs from wafer-scale systems. Essentially, the Dojo training modules (a 5×5 grid of pretested processors) are placed on a single carrier wafer, with all empty spaces filled in. TSMC’s integrated fan-out (InFO) technology is then used to apply a layer of high-density interconnects. This significantly increases inter-chip data bandwidth, allowing the chips to function like a single large chip.
My old company, Novellus Systems (acquired by my other company, Lam Research), pioneered copper interconnects between chip layers. Copper was not believed to be a viable material at these geometries. Why is this important? Because to realize AI, huge amounts of data must be consumed. It is a software algorithm problem that requires a hardware solution.
If you want a feel for the enormity of AI, Dojo, Nvidia, etc., here is a recent article on the new Dojo chip.
It's big.