Foxconn, the world's largest contract electronics manufacturer, has announced plans to build a massive production facility dedicated to Nvidia’s GB200 chips. The announcement was made by Benjamin Ting, Foxconn’s senior vice president, during Nvidia’s annual tech showcase in Taipei, Taiwan, though the location of the new facility has not yet been disclosed.
The move responds to strong demand for Nvidia’s forthcoming Blackwell platform, which is designed for computationally intensive workloads such as training AI models.
While Foxconn is predominantly known for its partnership with Apple, it is broadening its manufacturing expertise to include the production of other electronics, particularly GPUs. The rise in demand for these components is driven by numerous AI startups that require robust computational capabilities for model training. To meet these needs, many corporations are already expanding their data center capacities.
Even after an AI model has been trained and deployed, it continues to require computing resources to process the growing volumes of data generated by the applications that use it. To serve these requirements, Nvidia has introduced its Blackwell platform and the GB200 GPU, and major customers such as Microsoft have already placed substantial pre-orders.
The Nvidia GB200 is a superchip that pairs two Blackwell B200 GPUs with a Grace CPU. Each GPU carries 192GB of HBM3e memory, while the CPU is attached to 512GB of LPDDR5, for a total unified memory capacity of 896GB. The components communicate over NVLink, which provides high-speed data transfer between the GPUs and the CPU. The GB200 can also be deployed as part of the rack-scale GB200 NVL72 platform, which can scale to 512 GPUs within a single NVLink domain, greatly boosting performance for large-scale AI training and inference.
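As a quick sanity check, the unified memory total follows directly from the per-component numbers. The short Python sketch below simply reproduces that arithmetic; the values are those quoted in this article, not an official specification sheet.

```python
# Illustrative check of the GB200 memory figures quoted above.
# All values come from this article, not from an Nvidia spec sheet.

GPU_HBM3E_GB = 192      # HBM3e per Blackwell B200 GPU (as quoted)
GPUS_PER_SUPERCHIP = 2  # two B200 GPUs per GB200 superchip
CPU_LPDDR5_GB = 512     # LPDDR5 attached to the Grace CPU (as quoted)

unified_memory_gb = GPUS_PER_SUPERCHIP * GPU_HBM3E_GB + CPU_LPDDR5_GB
print(f"Unified memory per GB200 superchip: {unified_memory_gb} GB")  # 896 GB
```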
GPUs such as the GB200 sit at the heart of high-performance computing (HPC) systems, which are built to handle complex calculations and large datasets efficiently. Their architecture relies on parallel processing across many interconnected processors, making them well suited to data-heavy workloads.
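The appeal of that design is that a large job can be split into independent chunks that run at the same time. The toy Python sketch below illustrates the principle with a handful of CPU worker processes; HPC systems apply the same divide-the-work idea across thousands of GPU cores.

```python
from multiprocessing import Pool

def partial_sum(chunk):
    # Each worker processes one independent slice of the data in parallel.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i::4] for i in range(4)]  # split the work four ways
    with Pool(processes=4) as pool:
        total = sum(pool.map(partial_sum, chunks))
    print(total)
```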
Young Liu, Foxconn’s chairman, said at the event that the company’s supply chain is ready for the AI wave, pointing to Foxconn’s advanced manufacturing capabilities, including the liquid-cooling and heat-dissipation technologies needed to produce Nvidia’s GB200 configurations.
Foxconn and Nvidia are also collaborating to build Taiwan’s largest supercomputer, the Hon Hai Kaohsiung Super Computing Center. The system will use the same Blackwell architecture and the GB200 NVL72 platform, comprising 64 racks and 4,608 Tensor Core GPUs, and is expected to deliver more than 90 exaflops of total computing performance.
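Those headline numbers are consistent with the NVL72 rack layout described earlier. The sketch below works through the arithmetic using only the figures quoted in this article; the implied per-GPU throughput is a back-of-the-envelope estimate (presumably a low-precision figure), not an Nvidia specification.

```python
# Back-of-the-envelope check of the Hon Hai Kaohsiung Super Computing Center
# figures quoted above; all inputs come from this article.

RACKS = 64
GPUS_PER_NVL72_RACK = 72   # GB200 NVL72: 72 Blackwell GPUs per rack
TOTAL_EXAFLOPS = 90        # quoted aggregate performance

total_gpus = RACKS * GPUS_PER_NVL72_RACK
print(f"Total GPUs: {total_gpus}")  # 4608

# Implied average throughput per GPU (illustrative estimate only).
per_gpu_petaflops = TOTAL_EXAFLOPS * 1000 / total_gpus
print(f"Implied per-GPU throughput: {per_gpu_petaflops:.1f} PFLOPS")  # ~19.5
```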
The construction of this supercomputer is already underway in Kaohsiung, Taiwan, with the first phase expected to become operational by mid-2025 and full deployment aimed for 2026.
Via Reuters