"Lenovo's New Servers to Feature NVIDIA's Blackwell"

At Tech World 2024, Lenovo Group Chairman and CEO Yang Yuanqing and NVIDIA CEO Jensen Huang jointly announced that the SC777 model in Lenovo's ThinkSystem server series will be equipped with NVIDIA Blackwell AI accelerators (GPUs).

On March 18 of this year, at GTC (GPU Technology Conference) 2024, Lenovo Group and NVIDIA announced a collaboration on a new hybrid artificial intelligence solution, helping enterprises and cloud providers gain the accelerated computing capabilities needed to succeed in the AI era and turn AI from concept into reality.

At the same event, to accelerate AI workloads efficiently at scale, Lenovo announced an expansion of its ThinkSystem AI product portfolio, including two 8-way NVIDIA GPU systems.

The Lenovo ThinkSystem server series is Lenovo's data center infrastructure product line, comprising a range of models aimed at different enterprise-level applications and services. The currently known models in the series fall into two lines: SC and SR.

The SR line includes a variety of products, while the SC line currently has only the SC777. Its key features include support for large-scale computing clusters and strong scalability and configurability, making it suitable for a range of enterprise scenarios.

From high-performance computing in the data center to edge computing scenarios, the flexible architecture and strong energy efficiency of the Lenovo ThinkSystem SC777 allow it to adapt to dynamically changing business needs. The server's security design is also a notable strength.

The ThinkSystem SC777 server can run demanding tasks such as complex AI training, image processing, and video analysis, and its highly flexible configuration lets it adapt quickly to different workload requirements.

Blackwell is NVIDIA's new generation of AI chips and supercomputing platform, named after the American mathematician David Harold Blackwell. A GPU of this architecture contains 208 billion transistors and is manufactured on a custom TSMC 4NP process. All Blackwell products use two reticle-limit-sized dies, connected into a single unified GPU by a 10 TB/s chip-to-chip interconnect.

The second-generation Transformer Engine combines custom Blackwell Tensor Core technology with innovations in NVIDIA TensorRT-LLM and the NeMo framework, accelerating inference and training for large language models (LLMs) and mixture-of-experts (MoE) models.

To better support MoE model inference, Blackwell Tensor Cores add new precisions, including new community-defined microscaling formats, delivering high accuracy while making it easy to replace larger precisions. The Blackwell Transformer Engine uses a fine-grained scaling technique known as micro-tensor scaling to optimize performance and accuracy, enabling 4-bit floating-point (FP4) AI. This doubles the performance and the model size that memory can support for next-generation models while maintaining high accuracy.
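To illustrate the idea behind micro-scaling, the following is a minimal NumPy sketch, not NVIDIA's implementation: each small block of values shares a single scale factor, and each value is rounded to the nearest representable FP4 (E2M1) magnitude. The block size, rounding strategy, and scale encoding here are simplifying assumptions for illustration only.

```python
import numpy as np

# Representable magnitudes of an FP4 E2M1 value (sign handled separately).
FP4_LEVELS = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def quantize_microscaled(x, block_size=32):
    """Toy block-wise micro-scaling quantizer: every block of `block_size`
    values shares one scale, and each value is rounded to the nearest
    FP4 level. Returns the dequantized (reconstructed) values."""
    x = np.asarray(x, dtype=np.float64)
    pad = (-len(x)) % block_size
    blocks = np.pad(x, (0, pad)).reshape(-1, block_size)
    # One shared scale per block: the block's max magnitude maps to the
    # largest FP4 level (6.0).
    scales = np.abs(blocks).max(axis=1, keepdims=True) / FP4_LEVELS[-1]
    scales[scales == 0] = 1.0  # avoid division by zero for all-zero blocks
    scaled = blocks / scales
    # Round each magnitude to the nearest representable level, keep the sign.
    mag = np.abs(scaled)
    idx = np.abs(mag[..., None] - FP4_LEVELS).argmin(axis=-1)
    deq = np.sign(scaled) * FP4_LEVELS[idx] * scales
    return deq.reshape(-1)[:len(x)]
```

Because the scale is chosen per small block rather than per tensor, outliers in one block do not crush the precision of values elsewhere, which is the motivation for fine-grained scaling in low-bit formats.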

Blackwell incorporates NVIDIA confidential computing technology, protecting sensitive data and AI models from unauthorized access through robust hardware-based security. It is also the industry's first GPU with Trusted Execution Environment (TEE) I/O capability: paired with a TEE-I/O-capable host, it provides a high-performance confidential computing solution, with real-time protection over NVIDIA NVLink.

Overall, the Blackwell GPU is the core platform for NVIDIA's next-generation accelerated computing and generative artificial intelligence (AI), featuring a brand-new architectural design with six transformative accelerated computing technologies.

These technologies will drive breakthroughs in fields such as data processing, engineering simulation, electronic design automation, computer-aided drug design, quantum computing, and generative AI. Notably, its AI inference performance is up to 30 times that of the previous generation, while energy consumption is reduced by up to 25 times, a significant advance for AI and computing.
