
Microsoft Azure has reached a significant milestone by bringing its first NVIDIA Blackwell "GB200 NVL72" system online for OpenAI, a pivotal development for training and optimizing the company's AI models.
Microsoft Expands OpenAI’s Computational Capabilities with NVIDIA Blackwell Technology
The adoption of the cutting-edge NVIDIA GB200 NVL72 "Blackwell" system marks a substantial leap forward for OpenAI's performance and capabilities. NVIDIA's GPUs have long underpinned OpenAI's models, and the partnership with Microsoft Azure aims to get more out of those models through more intensive and rigorous training.
In a recent X post, Sam Altman, the CEO of OpenAI, confirmed the successful implementation of the first complete set of eight racks housing the GB200 NVL72 systems on Microsoft Azure. His acknowledgment of Microsoft’s CEO Satya Nadella and NVIDIA’s CEO Jensen Huang for this innovation underscores the collaborative strength of these leading technology firms. This partnership positions Microsoft as a formidable player among cloud service providers for AI operations.
first full 8-rack GB200 NVL72 now running in azure for openai—thank you @satyanadella and jensen!
— Sam Altman (@sama) January 31, 2025
Unmatched Computational Power of the Blackwell System
The NVIDIA Blackwell GB200 NVL72 system packs 36 Grace CPUs and 72 Blackwell GPUs into each rack. Fully deployed across eight racks, the installation totals 288 Grace CPUs and 576 Blackwell GPUs. This setup delivers extraordinary computational capability: a single rack can reach up to 6,480 TFLOPS of FP32 processing power and 3,240 TFLOPS of FP64 performance, which scales to 51,840 TFLOPS of FP32 and 25,920 TFLOPS of FP64 across the entire eight-rack deployment.
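For readers who want to verify the arithmetic, the short Python sketch below recomputes the eight-rack totals from the per-rack figures quoted above; the implied per-GPU FP32 number it derives is an inference from those figures, not an official NVIDIA specification.

```python
# Back-of-the-envelope check of the per-rack and eight-rack totals quoted above.
# The per-rack TFLOPS figures come from the article; the per-GPU value is derived.

GPUS_PER_RACK = 72            # Blackwell GPUs per GB200 NVL72 rack
CPUS_PER_RACK = 36            # Grace CPUs per rack
RACKS = 8                     # racks in the Azure deployment Altman described

FP32_TFLOPS_PER_RACK = 6_480
FP64_TFLOPS_PER_RACK = 3_240

print(f"Total GPUs: {GPUS_PER_RACK * RACKS}")                            # 576
print(f"Total CPUs: {CPUS_PER_RACK * RACKS}")                            # 288
print(f"FP32 across 8 racks: {FP32_TFLOPS_PER_RACK * RACKS:,} TFLOPS")   # 51,840
print(f"FP64 across 8 racks: {FP64_TFLOPS_PER_RACK * RACKS:,} TFLOPS")   # 25,920
print(f"Implied FP32 per GPU: {FP32_TFLOPS_PER_RACK / GPUS_PER_RACK:.0f} TFLOPS")  # 90
```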
Such processing performance lets the Azure OpenAI service serve enterprises that need to scale their AI workloads. The NVIDIA "Blackwell" GB200 NVL72 system is designed for large generative AI workloads, combining memory bandwidth of up to 576 TB/s with robust parallel processing capabilities.

OpenAI’s current GPU fleet includes earlier NVIDIA models such as the V100, A100, and H100; the Blackwell B200 now sits at the top of that lineup, offering substantial performance gains over previous generations. With Microsoft’s investment of nearly $14 billion in OpenAI, the company is clearly eager to lead in providing the most advanced computing infrastructure to its clients.