Lambda Labs, in collaboration with Pegatron’s server division, has begun deploying Nvidia’s GB200 NVL72 server rack. The deployment is expected to power Lambda’s AI compute platform, marking a significant milestone for both companies. The Nvidia GB200 NVL72 is known for its high-performance capabilities, and the partnership is poised to strengthen Lambda’s ability to deliver cutting-edge cloud and colocation services.
Details of the Partnership and Deployment
Lambda Labs and Pegatron have recently shared updates about their partnership. Pegatron SVR announced that the GB200 NVL72 rack had arrived at its data center and is in the process of being prepared for deployment. This marks the beginning of a crucial phase in the partnership, with more specific deployment details expected to be released soon. The partnership emphasizes Pegatron’s role in providing the hardware infrastructure needed to support Lambda Labs’ AI initiatives.
The Nvidia GB200 NVL72: Powering AI Compute Platforms
The Nvidia GB200 NVL72, a liquid-cooled server rack, is equipped with state-of-the-art hardware designed to accelerate artificial intelligence workloads. The configuration includes 72 Nvidia Blackwell GPUs and 36 Nvidia Grace CPUs (each GB200 Superchip pairs one Grace CPU with two Blackwell GPUs), plus nine NVLink switch trays, each with two NVLink switches. Nvidia’s engineers have designed this system to function as a unified GPU platform, significantly boosting performance for high-demand AI applications.
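The component counts quoted above can be tallied in a short sketch. This is illustrative arithmetic only; the Superchip pairing (one Grace CPU to two Blackwell GPUs) is the basis for inferring 36 Superchips per rack.

```python
# Component tally for one GB200 NVL72 rack, using the counts
# cited in the article (illustrative arithmetic only).
GPUS = 72             # Blackwell GPUs per rack
CPUS = 36             # Grace CPUs per rack
SWITCH_TRAYS = 9      # NVLink switch trays
SWITCHES_PER_TRAY = 2

# Each GB200 Superchip pairs one Grace CPU with two Blackwell GPUs,
# so the rack holds 36 Superchips.
superchips = CPUS
assert superchips * 2 == GPUS

nvlink_switches = SWITCH_TRAYS * SWITCHES_PER_TRAY  # 18 switches total
print(f"{superchips} Superchips, {nvlink_switches} NVLink switches")
```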
In March 2024, Nvidia introduced the GB200 NVL72, touting its high-performance capabilities in AI compute environments. By using these GPUs and CPUs, the system is designed to push the boundaries of computational power, allowing businesses and cloud providers to scale their AI capabilities.
Overcoming Initial Cooling Challenges
In November 2024, reports emerged about potential overheating issues with the NVL72 racks. These challenges were traced back to the custom design of the racks. Nvidia asked its suppliers to make several adjustments to address these issues, ensuring the racks could handle high-performance workloads without overheating. Despite these early obstacles, it is believed that the cooling issues have now been resolved, and customers like CoreWeave have already started receiving NVL72 racks.
Lambda Labs’ Role in the AI Cloud Market
Founded in 2012, Lambda Labs has carved out a niche in the AI and GPU-based cloud computing market. The company operates out of data centers in San Francisco, California, and Allen, Texas, where it rents out GPU-based cloud compute services. It also offers colocation services and sells GPU desktops, catering to businesses and developers working with AI models.
The company has been actively pursuing funding to expand its GPU infrastructure, with reports in July 2024 stating that Lambda Labs was seeking $800 million in funding. This capital will be used to acquire additional Nvidia GPUs and associated networking infrastructure, positioning Lambda Labs to continue supporting large-scale AI operations in the cloud.
Nvidia’s GB200 NVL72 Rack: Pricing and Market Implications
The Nvidia GB200 NVL72 rack represents a significant investment for businesses looking to push the boundaries of AI computing. With an estimated cost of $3 million per rack, it’s a high-ticket item, although the price is reflective of the advanced technology and performance it provides. Each GB200 Superchip, which powers the system, costs between $60,000 and $70,000, further underscoring the premium nature of this hardware. For organizations like Lambda Labs, this investment is expected to drive substantial improvements in AI capabilities and performance.
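As a back-of-the-envelope check, the figures above can be combined: assuming 36 GB200 Superchips per rack (inferred from the 72-GPU, 36-CPU configuration) at the cited $60,000–$70,000 each, the Superchips alone account for roughly $2.2–$2.5 million of the estimated $3 million rack price.

```python
# Rough cost breakdown for one NVL72 rack, using the figures cited
# in the article. Illustrative only; real pricing varies by deal.
SUPERCHIPS_PER_RACK = 36
CHIP_PRICE_LOW, CHIP_PRICE_HIGH = 60_000, 70_000
RACK_PRICE_EST = 3_000_000

chips_low = SUPERCHIPS_PER_RACK * CHIP_PRICE_LOW    # $2,160,000
chips_high = SUPERCHIPS_PER_RACK * CHIP_PRICE_HIGH  # $2,520,000

# The remainder covers NVLink switches, chassis, liquid cooling,
# networking, and integration.
other_low = RACK_PRICE_EST - chips_high   # ~$480,000
other_high = RACK_PRICE_EST - chips_low   # ~$840,000
print(f"Superchips: ${chips_low:,}-${chips_high:,}")
print(f"Everything else: ~${other_low:,}-${other_high:,}")
```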
Frequently Asked Questions
What is the Nvidia GB200 NVL72 rack used for?
The Nvidia GB200 NVL72 rack is designed to power AI compute platforms, combining high-performance GPUs and CPUs to accelerate artificial intelligence workloads.
How does the GB200 NVL72 improve AI performance?
The GB200 NVL72 combines 72 Nvidia Blackwell GPUs and 36 Nvidia Grace CPUs into a unified GPU platform. This architecture boosts computational power, enabling faster and more efficient AI processing.
What cooling technology is used in the Nvidia GB200 NVL72 rack?
The Nvidia GB200 NVL72 rack is liquid-cooled, designed to manage the substantial heat generated by high-performance GPUs and CPUs in AI applications.
What were the issues with the Nvidia GB200 NVL72 racks?
Initially, Nvidia faced overheating challenges with the NVL72 racks due to their custom design. These issues have since been addressed, ensuring better reliability and performance for customers.
How much does the Nvidia GB200 NVL72 rack cost?
The estimated cost of an Nvidia GB200 NVL72 rack is around $3 million, with each Nvidia GB200 Superchip priced between $60,000 and $70,000.