Inspur’s AI Triumphs with 19 Global Records - 2020 Year in Review

20 January 2021

2020 was a year of significant achievements in AI for Inspur. The company has always been committed to enhancing its capabilities and delivering best-in-class AI products for clients, and over the past year, it has made good on this commitment. Inspur took swift action in incorporating the latest technologies into its state-of-the-art products, topped worldwide benchmarks, and stood out from the crowd in renowned AI competitions and challenges. Moreover, Inspur has delivered a full-stack AI portfolio that includes such cutting-edge products as AI computing platforms, AI resource platforms, and algorithm toolkits to meet clients’ increasingly diverse and high-tech needs. The following is an overview of Inspur’s AI footprint in 2020, and the benefits that these technologies can bring to clients.

Swift and Extensive Upgrades Secure Market Leadership

May 2020, Inspur Releases 5 New AI Servers Powered by NVIDIA A100 Tensor Core GPUs

The instant the NVIDIA Ampere architecture was unveiled at GTC20, Inspur jumped into action to implement it across its entire suite of AI servers, with support for 8 or 16 A100 GPUs. The result of this meticulous upgrade was immediately noticeable: remarkable AI computing performance of up to 40 PetaOPS and one of the most comprehensive product portfolios in the industry.

June 2020, Inspur Launches AI Servers to Support the Latest NVIDIA A100 PCIe Gen 4

Inspur took a leading position in the market with the release of its NF5468M6 and NF5468A5 servers. These AI servers accommodate eight double-width NVIDIA A100 PCIe GPUs in a 4U chassis. Both servers support the latest PCIe Gen 4 interface with 64 GB/s of bi-directional bandwidth, achieving superior AI computing performance.
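As a quick sanity check on the bandwidth figure quoted above, the short sketch below (assuming a full x16 link) derives the per-direction and bidirectional throughput of a PCIe Gen 4 slot from its 16 GT/s line rate and 128b/130b encoding; the commonly quoted 64 GB/s is the usual rounding of roughly 63 GB/s.

```python
# Back-of-the-envelope PCIe Gen 4 bandwidth, assuming a x16 link.
lanes = 16
line_rate_gt_s = 16            # giga-transfers per second, per lane
encoding = 128 / 130           # 128b/130b line-coding efficiency
per_direction_gb_s = lanes * line_rate_gt_s * encoding / 8  # bits -> bytes
print(f"{per_direction_gb_s:.1f} GB/s per direction, "
      f"{2 * per_direction_gb_s:.1f} GB/s bidirectional")
# -> 31.5 GB/s per direction, 63.0 GB/s bidirectional (rounded to ~64 GB/s)
```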

July 2020, Inspur Releases AIStation Inference Platform for Compute Power Scheduling in Enterprise AI Production Environments

The AIStation inference platform increases resource utilization from 40% to 80% by enabling agile deployment of inference service resources. It also vastly reduces model deployment time, from two to three days to just a few minutes, by supporting unified scheduling of multi-source models. AIStation provides full support for both training and inference, the two main AI scenarios, and enables efficient one-stop delivery of the entire process, from model development to training, deployment, testing, release, and service.
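The article does not describe AIStation's own API, so the sketch below is only an illustration of the kind of GPU-aware scheduling such a platform automates: it uses the standard Kubernetes Python client to declare an inference deployment whose replicas each request a GPU, leaving placement to the cluster scheduler. The image name, namespace, and labels are hypothetical.

```python
# Minimal sketch of scheduler-driven inference deployment (hypothetical names).
# Requires the `kubernetes` Python package and a kubeconfig with cluster access.
from kubernetes import client, config

config.load_kube_config()

container = client.V1Container(
    name="resnet50-infer",                       # hypothetical service name
    image="registry.example.com/resnet50:v1",    # hypothetical model image
    resources=client.V1ResourceRequirements(
        limits={"nvidia.com/gpu": "1"},          # one GPU per replica
    ),
)

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="resnet50-infer"),
    spec=client.V1DeploymentSpec(
        replicas=2,                              # the scheduler places both replicas
        selector=client.V1LabelSelector(match_labels={"app": "resnet50-infer"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "resnet50-infer"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="inference", body=deployment)
```

Declaring resources this way is what lets a scheduling platform pack inference services onto shared GPUs and raise utilization, rather than dedicating whole machines to individual models.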

July 2020, Inspur Releases New Server Optimized for Mobile Liquid Cooling Cluster

Inspur released the i24M5-LC, a 2U4N high-density liquid-cooled server deployable in data centers with a PUE below 1.2. It also released a mobile rack-mounted Coolant Distribution Unit (CDU) that connects to the i24M5-LC with quick-release connectors, allowing a complete liquid cooling system to be deployed easily and providing clients with an efficient, convenient mobile liquid cooling cluster solution.

October 2020, Inspur Unveils Cloud SmartNIC Solution Based on NVIDIA DPU

At GTC Fall 2020, Inspur unveiled its Cloud SmartNIC solution based on the NVIDIA BlueField data processing unit (DPU). The solution deeply integrates Inspur servers with the NVIDIA DPU, combining embedded processing, SmartNIC networking, and a high-performance PCIe 4.0 host interface to offload functions such as traffic management, storage virtualization, and security isolation, significantly freeing up CPU computing resources.

November 2020, Inspur’s AI Servers Upgraded to Support the Latest NVIDIA A100 80GB GPU

Inspur’s NF5488A5 and NF5488M5-D AI servers, which support the latest NVIDIA A100 80GB GPU, began mass production in November 2020 and became available worldwide. The servers deliver a 15% performance increase when training AI models with tens of billions of parameters.

Extreme Product Design Sets New Global Benchmarks

October 2020, Inspur’s NF5488A5 Breaks AI Server Performance Records in MLPerf Training and Inference Benchmarks

Inspur’s NF5488A5 set 18 new performance records in the MLPerf v0.7 AI inference benchmark. It also took first place in the single-server performance category of the MLPerf Training ResNet50 benchmark. Launched in May 2020, the NF5488A5 AI server is powered by eight NVIDIA A100 GPUs fully interconnected via third-generation NVLink, along with two AMD CPUs supporting PCIe 4.0. It was shortlisted for the CRN 10 Hottest Enterprise Servers of 2020.
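For context only, the snippet below is not the MLPerf harness; it is a generic PyTorch/torchvision sketch of the ResNet50 image-classification inference workload that the MLPerf Inference benchmark measures, timing batched forward passes on a single GPU. The batch size and iteration counts are arbitrary assumptions.

```python
# Generic ResNet-50 inference throughput sketch (not the MLPerf harness).
# Requires PyTorch, torchvision (>= 0.13 for `weights=`), and a CUDA GPU.
import time
import torch
import torchvision

model = torchvision.models.resnet50(weights=None).eval().cuda()
batch = torch.randn(64, 3, 224, 224, device="cuda")   # arbitrary batch size

with torch.no_grad():
    for _ in range(10):                # warm-up iterations
        model(batch)
    torch.cuda.synchronize()
    start = time.time()
    iters = 50
    for _ in range(iters):
        model(batch)
    torch.cuda.synchronize()
    elapsed = time.time() - start

print(f"throughput: {iters * batch.shape[0] / elapsed:.1f} images/sec")
```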

Winning Recognition in World-renowned AI Competitions and Reports

April 2020, Inspur Takes Third Place in NeurIPS AutoDL 2019-2020

The Inspur team took third place in the finals of AutoDL (Automated Deep Learning) 2019-2020, held by NeurIPS, a top AI academic conference. The full-process AutoDL solution developed by Inspur achieved an average accuracy improvement of nearly 20% over the baseline and an average data reading efficiency improvement of 22%, and completed the model establishment, search, and generation processes in less than half an hour.

June 2020, Inspur Takes First Place in the VizWiz-VQA (Visual Question Answering) Challenge at CVPR 2020

July 2020, Inspur is Recognized by The Forrester Wave™ as a Leading Solutions Provider of Predictive Analytics and Machine Learning

The Forrester Wave™: Predictive Analytics and Machine Learning in China report, which reviewed 35 PAML providers, recognized Inspur as a leading company among the large established players, along with Alibaba, Baidu, and other technology giants. Inspur’s solutions in this field help to automate machine learning model development and boost AI productivity.

Another year has begun. Riding the wave of its considerable achievements in 2020, Inspur will continue to demonstrate its leadership in AI by providing full-stack AI solutions to meet the ever-growing and complex demands of clients.
