Inspur NF5180M5 Flexible 1U Server Review

23 October 2019

Inspur Systems NF5180M5 CPU And Memory

The Inspur NF5180M5 is a 1U server that is designed for flexibility. Based on two Intel Xeon Scalable processors, the NF5180M5 is configurable to handle just about any scenario. The storage has a fairly broad range of options. One can fit up to five expansion cards and two M.2 drives for further expandability. These properties make this an extremely popular platform for Inspur. In our review, we are going to show you around the system and run it through our testing suite.

Inspur Systems NF5180M5 Overview

For this 1U platform, we are going to start at the front and work around the exterior of the system before delving into the internals.

Inspur Systems NF5180M5 Exterior Overview

At the leading edge of the server, we can see that our test unit is a 10x 2.5″ hot-swap bay model. One can configure these bays to handle SATA, SAS, NVMe or a mix of drives.

Inspur Systems NF5180M5 Front With 2.5 In Hot-Swap Bays

Inspur also offers a 4x 3.5″ disk option up front. In this configuration, there are also two 2.5″ SSD bays above the 3.5″ drives, about where the blue identification tab is on our unit.

Moving to the rear, we have Riser 1 with both PCIe 3.0 x16 (full-height) and x8 (half-height) slots. Inspur has other options here, including two 2.5″ rear hot-swap SSDs. Below that, we find VGA, two USB 3 ports, and one management NIC. One feature we like that is not on this server is onboard 1GbE for tasks such as provisioning networks. Onboard 1GbE is becoming less popular; HPE, for example, recently removed it from some of its servers, as has Dell EMC. There are plenty of expansion slots, but it is something to keep in mind.

Inspur Systems NF5180M5 Rear IO And PCIe 1

Next, we find another low-profile PCIe x16 expansion card slot and two redundant 800W PSUs. You will also notice that we have one of the two OCP slots populated for these photos with the two SFP+ ports at the bottom of the chassis.

Inspur Systems NF5180M5 Rear IO And PCIe 2 With 800W PSUs

Overall, this is a flexible 1U platform from the exterior. Next, let us look at what is inside driving this flexibility.

Inspur Systems NF5180M5 Interior Overview

Taking a look at the system with its cover off, one can see a fairly standard layout: storage up front; fans, CPUs, and memory in the middle; and expansion slots in the rear. The CPU air guide is a fairly high-quality unit that is both rigid and labeled. Some smaller manufacturers do not label their servers well. Here, Inspur has the DIMM slots and CPUs clearly labeled, which can help if one needs to service the server in the field.

Inspur Systems NF5180M5 Internal Overview With Airflow Guide

Removing the airflow guide, one can see the CPUs and memory inside the unit. We are going to start with the front area just past the storage that we already covered.

Inspur Systems NF5180M5 Internal Overview

At the heart of the Inspur NF5180M5 is an array of seven counter-rotating fans. These seven fans are tasked with cooling more than 1kW of components. One can also see that Inspur has mounting points and space in front of the fans for additional components. Our test system did not have these, but the options include a dual PCIe 3.0 M.2 NVMe SSD option.

Inspur Systems NF5180M5 Cooling And Mounting Points Behind Backplane

Behind the fans, we can see two Intel Xeon Scalable processors with their 1U heatsinks that can support up to 205W TDP CPUs like the Intel Xeon Platinum 8280. Each CPU has a full set of 12 DIMM slots, for up to 24 DIMMs per system, which means one can reach maximum RAM capacities. Some 1U systems utilize only 6 or 8 DIMM slots per CPU, so Inspur is giving additional flexibility here.
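To put the 24-DIMM design in perspective, here is a minimal sketch of the capacity math. The DIMM size and the 6- or 8-slot comparison systems are illustrative assumptions, not specific competing SKUs:

```python
# Quick math on memory capacity: the NF5180M5 offers 12 DIMM slots per CPU,
# while some competing 1U systems offer only 6 or 8 per CPU.
def max_memory_gb(dimms_per_cpu: int, cpus: int = 2, dimm_gb: int = 64) -> int:
    """Total DRAM capacity, assuming (hypothetical) 64GB RDIMMs."""
    return dimms_per_cpu * cpus * dimm_gb

print(max_memory_gb(12))  # NF5180M5 with 64GB RDIMMs: 1536 GB
print(max_memory_gb(8))   # a hypothetical 8-DIMM-per-CPU 1U design: 1024 GB
```

With larger RDIMMs, the gap between a 12-slot and an 8-slot design only widens.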

Inspur Systems NF5180M5 CPU And Memory

Moving to the expansion slot area, there are two main slot segments. The first is on the right side of the chassis when viewed from the front. Here we have one of the two OCP connectors for networking modules. Above that, one can see the PCIe x16 slot in a full-height expansion card space.

Inspur Systems NF5180M5 OCP CPU1 PCIe X16 Riser M2 Oculink

One can also see the two M.2 SATA headers and tool-less retention. One will also notice an array of NVMe Oculink headers. This is only the first set as there are additional headers that we will see on the other side of the motherboard.

Inspur Systems NF5180M5 Oculink TPM And USB Type A Header

One can even see an internal USB 3.0 Type-A header along the right edge just after the TPM.

One can see additional Oculink headers, including those for the Intel Xeon SP Lewisburg PCH's SATA ports, in the middle section of the motherboard.

Inspur Systems NF5180M5 Mid Section With Tesla T4 Installed

The middle section can handle two PCIe cards. There is room for both a PCIe x8 low profile card as well as a PCIe x16 low profile card on the other riser. We wanted to note that the 1U risers are well designed and easy to service. You can see the NVIDIA Tesla T4 AI inferencing GPU installed in the riser.

Inspur Systems NF5180M5 1U Riser With NVIDIA Tesla T4

Using Inspur’s tool-less rails, we were able to replace this NVIDIA Tesla T4 card with another card in under 90 seconds without removing the NF5180M5 from the rack.

Below that expansion riser is the second OCP slot. We love the dual OCP slot design of the NF5180M5 as that adds a lot of additional flexibility for the configuration. It also allows the system to maximize its use of PCIe expansion slots even in a 1U form factor.

Inspur Systems NF5180M5 OCP Slot With Intel X520 DA2

As a quick note, you can see the server’s SD card slot just after the Intel OCP NIC.

Inspur Systems NF5180M5 Management

Inspur’s primary management is via IPMI and Redfish APIs. That is what most hyperscale and CSP customers will utilize to manage their systems. Inspur also includes a robust and customized web management platform with its management solution.
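As a minimal sketch of the Redfish-style automation that hyperscale and CSP customers use, the snippet below queries a system's power state via the standard `/redfish/v1/Systems` resource. The BMC address, session token, and member path shown are placeholder assumptions; the exact member URI varies by vendor and should be discovered from the Systems collection:

```python
import json
import urllib.request

BMC = "https://10.0.0.100"  # hypothetical BMC address, replace with yours

def power_state(payload: str) -> str:
    """Extract PowerState from a Redfish ComputerSystem JSON payload."""
    return json.loads(payload)["PowerState"]

def fetch_system(session_token: str, member: str = "/redfish/v1/Systems/1") -> str:
    """Fetch a ComputerSystem resource and return its power state.

    The X-Auth-Token comes from a prior Redfish SessionService login.
    """
    req = urllib.request.Request(BMC + member,
                                 headers={"X-Auth-Token": session_token})
    with urllib.request.urlopen(req) as resp:
        return power_state(resp.read().decode())

# The parsing step works on any conforming ComputerSystem payload:
sample = '{"Id": "1", "PowerState": "On"}'
print(power_state(sample))  # On
```

The same pattern extends to power cycling (a POST to the resource's `ComputerSystem.Reset` action) and sensor readings.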

Inspur Web Management Interface Dashboard

There are key features we would expect from any modern server, including the ability to power cycle a system and remotely mount virtual media. Inspur has an HTML5 iKVM solution with these features included. As of this review's publication, some other server vendors still lack a fully featured HTML5 iKVM with virtual media support.

Inspur Management HTML5 IKVM With Remote Media Mounted

Another feature worth noting is the ability to set BIOS settings via the web interface. That is a feature we see in solutions from top-tier vendors like Dell EMC, HPE, and Lenovo, but one that many vendors in the market do not have.

Inspur Management BIOS Settings

Another web management feature that differentiates Inspur from lower-tier OEMs is the ability to create virtual disks and manage storage directly from the web management interface. Some solutions allow administrators to do this via Redfish APIs, but not web management. This is another great inclusion here.

Inspur Management Storage Virtual Drive Creation

Inspur Systems NF5180M5 Test Configuration

For our review, we are using the same configuration we have been using for our 2nd Generation Intel Xeon Scalable dual-socket reviews:

1. System: Inspur Systems NF5180M5

2. CPUs: 2x Intel Xeon Gold 5115

3. RAM: 12x 32GB DDR4-2933 ECC RDIMMs

4. Storage: 2x Intel DC S3520 480GB (OS), 1x Samsung PM883 960GB SSD

5. Networking: Mellanox ConnectX-4 Lx 25GbE OCP, Intel X520-DA2 OCP

6. PCIe Accelerator: NVIDIA Tesla T4

A quick note here, we did not utilize the Intel Optane DCPMM here because we had standard chips. Using Intel Optane DCPMM even with two 128GB modules per CPU to stay well below the 1TB per CPU memory limit would have meant our memory would work at only DDR4-2666 speeds. Here is what the topology looks like with the dual Intel Xeon Gold 5115 CPUs installed:

Inspur NF5180M5 Topology
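To quantify the DDR4-2933 versus DDR4-2666 trade-off mentioned above, here is a back-of-envelope peak bandwidth calculation. This is theoretical peak only (2nd Gen Intel Xeon Scalable has six memory channels, each 8 bytes wide); real-world figures will be lower:

```python
# Theoretical peak memory bandwidth per socket at a given DDR4 speed.
def peak_bw_gbps(mt_per_s: int, channels: int = 6, bytes_per_xfer: int = 8) -> float:
    """Peak bandwidth per socket in GB/s: transfers/s * channels * bus width."""
    return mt_per_s * channels * bytes_per_xfer / 1000

print(peak_bw_gbps(2933))  # ~140.8 GB/s at DDR4-2933
print(peak_bw_gbps(2666))  # ~128.0 GB/s at DDR4-2666
```

That is roughly a 10% peak bandwidth reduction per socket when DCPMM forces DDR4-2666 operation.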

You can see the system’s block diagram here:

Inspur NF5180M5 Block Diagram

Overall, we like the electrical layout of the system utilizing all 48x PCIe lanes per CPU. One of the more unique features is that Inspur adds an additional PCIe x8 link to the Lewisburg PCH.

Inspur Systems NF5180M5 CPU Performance

At STH, we have an extensive set of performance data from every major server CPU release. Running through our standard test suite generated over 1000 data points for each set of CPUs. We are cherry-picking a few to give some sense of CPU scaling.

As a quick note here, our test configuration came with dual Intel Xeon Gold 5115 CPUs. These are higher volume SKUs with 10 cores each. Since we are in the middle of the 2nd Gen Intel Xeon Scalable product cycle for this platform, we are going to use newer CPUs. Intel made enormous upgrades to the CPU performance in the lower and mid-range with this generation.

Python Linux 4.4.2 Kernel Compile Benchmark

This is one of the most requested benchmarks for STH over the past few years. The task is simple: we take a standard configuration file and the Linux 4.4.2 kernel from kernel.org, then build the standard auto-generated configuration utilizing every thread in the system. We are expressing results in terms of compiles per hour to make the results easier to read.
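The compiles-per-hour figure is simply the inverse of the timed build, scaled to an hour. A minimal sketch of that conversion (the elapsed times shown are illustrative, not measured results):

```python
# Convert a timed kernel build into the compiles-per-hour metric used in
# the charts: one hour divided by the wall-clock build time.
def compiles_per_hour(elapsed_seconds: float) -> float:
    """Number of builds that fit in an hour at this build time."""
    return 3600.0 / elapsed_seconds

print(compiles_per_hour(120.0))  # a hypothetical 2-minute build -> 30.0/hour
```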

Inspur Systems NF5180M5 Linux Kernel Compile Benchmarks High-End CPU Options

Many buyers will focus on the lower-end of the Xeon Gold line such as the Intel Xeon Gold 5220. Here Intel increased performance by 30% or more in its SKU stack by increasing clock speeds and core counts. You can see that in contrast to something like the Intel Xeon Gold 5120.

Here you can see an example of CPU scaling with the Inspur NF5180M5 using different Intel Xeon Gold and Platinum offerings. There is a fairly wide range. We are only showing dual-socket performance results, not single-socket results in this test. The server is capable of going up to the dual Intel Xeon Platinum 8280 configuration above and down to a single Intel Xeon Bronze CPU.

c-ray 1.1 Performance

We have been using c-ray for our performance testing for years now. It is a ray tracing benchmark that is extremely popular to show differences in processors under multi-threaded workloads. We are going to use our new Linux-Bench2 8K render to show differences.

Inspur Systems NF5180M5 C Ray 8K Benchmark High End CPUs

Here, the Intel Xeon Gold 6242 solution is providing a lot of performance even with fewer cores than the Gold 6230. Intel has a broad SKU stack that can optimize for more cores or higher frequencies at a given core count. The Intel Xeon Gold 6242, for example, is a 16-core SKU that fits well into Microsoft Windows Server 2019 licensing.

7-zip Compression Performance

7-zip is a widely used compression/ decompression program that works cross-platform. We started using the program during our early days with Windows testing. It is now part of Linux-Bench.

Inspur Systems NF5180M5 7zip Compression Benchmark

Here one can see the benefit of moving to a higher-end SKU like an Intel Xeon Platinum 8260. Intel’s robust Xeon Scalable SKU stack helps a user pinpoint the level of price/ performance they are looking for.

OpenSSL Performance

OpenSSL is widely used to secure communications between servers and is an important component of many server stacks. We first look at our sign tests:

Inspur Systems NF5180M5 OpenSSL Sign Benchmark

Here are the verify results:

Inspur Systems NF5180M5 OpenSSL Verify Benchmark

Here the Intel Xeon Platinum 8280 shines. The Inspur NF5180M5 can handle up to 205W TDP CPUs, which means it can cover the full SKU stack. Some other 1U platforms are only capable of handling 165-180W maximum TDP SKUs, which limits their flexibility. In the case of the NF5180M5, this is not a concern.

Chess Benchmarking

Chess is an interesting use case since it has almost unlimited complexity. Over the years, we have received a number of requests to bring back chess benchmarking. We have been profiling systems and are ready to start sharing results:

Inspur Systems NF5180M5 Chess Benchmark

Here again we see nice scaling in the benchmark. We will note that Intel and Inspur also offer lower price, power, and performance SKUs like the Intel Xeon Silver 4214. There are a lot of configuration options but we are only showing some of the higher-end versions.

GROMACS STH Small AVX2 Enabled

In Linux-Bench2 we are using a “small” GROMACS test for single and dual-socket capable machines. Our GROMACS test will use the AVX-512 and AVX2 extensions if available.

Inspur Systems NF5180M5 GROMACS STH Small

Chips like the Platinum 8268 perform very well in AVX-512 workloads. This is a key feature that Intel has over the AMD EPYC 7002 series processors.

Inspur Systems NF5180M5 Storage Performance

Our Inspur Systems NF5180M5 test configuration used SATA SSDs in its 10x 2.5″ bays rather than NVMe. Inspur also offers higher-end NVMe storage configurations that we were not able to test.

Inspur Systems NF5180M5 Storage Performance

Overall, we saw performance in line with what we would expect on the storage side using these drives. The Inspur Systems NF5180M5 was able to handle our storage configurations without issue.

Inspur Systems NF5180M5 Networking Performance

We used the Inspur Systems NF5180M5 with a dual-port Mellanox ConnectX-4 Lx 25GbE OCP NIC as well as a dual-port Intel X520-DA2 OCP NIC. The server itself supports riser configurations for more; this is simply all we could fit in our test system given the configuration we were using. At the same time, it represents what we expect will be a very popular setup for the server.

Inspur NF5180M5 25GbE Networking Speed

25GbE infrastructure has become extremely popular. It provides a lower-latency connection than legacy 10GbE/ 40GbE. One also gets higher switch port bandwidth density than with the older standard. As a result, we are seeing many of our readers deploy 25GbE today. Next year, as PCIe Gen4 becomes more widespread, this may change. For now, a dual 25GbE connection is a good match for a PCIe Gen3 x8 slot which will make it a popular choice for Intel servers.
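The "good match" claim above can be checked with quick arithmetic: PCIe Gen3 runs at 8 GT/s per lane with 128b/130b encoding, so an x8 slot offers about 63 Gb/s of usable bandwidth per direction against the 50 Gb/s of two 25GbE ports. A minimal sketch (theoretical link rates only, ignoring protocol overheads beyond line encoding):

```python
# Usable PCIe Gen3 bandwidth in Gb/s: lanes * 8 GT/s * 128/130 encoding.
def pcie_gbps(lanes: int, gt_per_s: float = 8.0) -> float:
    """Per-direction PCIe Gen3 bandwidth after 128b/130b line encoding."""
    return lanes * gt_per_s * 128 / 130

nic_gbps = 2 * 25          # dual-port 25GbE NIC
slot_gbps = pcie_gbps(8)   # PCIe Gen3 x8 slot
print(round(slot_gbps, 1))   # ~63.0 Gb/s per direction
print(slot_gbps > nic_gbps)  # True: the slot is not the bottleneck
```

By the same math, a dual 100GbE NIC would need a Gen3 x16 or, more comfortably, a Gen4 x8 link, which is why PCIe Gen4 matters for the next speed step.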

Inspur Systems NF5180M5 Power Consumption

Our Inspur NF5180M5 test server used a dual 800W power supply configuration. The PSUs are 80 Plus Platinum level units.

Inspur Systems NF5180M5 Rear IO And PCIe 2 With 800W PSUs

We needed to use both PSUs in order to power our test configuration. Inspur offers higher-wattage PSUs, up to 1.3kW, which we would recommend if you are using heavy accelerators, storage, or Intel Xeon Platinum CPUs like the Platinum 8280 processors.

1. Idle: 0.1kW

2. STH CPU 70% Load: 0.3kW

3. 100% Load AVX2 (GROMACS): 0.5kW

4. Maximum Observed: 0.6kW

Note these results were taken using a 208V Schneider Electric / APC PDU at 17.7C and 72% RH. Our testing window shown here had a +/- 0.3C and +/- 2% RH variance.

STH Server Spider: Inspur NF5180M5

In the second half of 2018, we introduced the STH Server Spider as a quick reference to where a server system’s aptitude lies. Our goal is to start giving a quick visual depiction of the types of parameters that a server is targeted at.

STH Server Spider Inspur NF5180M5

Here we can see increased densities over the Inspur Systems NF5280M5 we reviewed previously. Much of this is because of the 1U form factor. It also is because of how flexible Inspur has made the platform with different storage configuration options and a large set of PCIe expansion card options for the platform.

Final Words

In the server industry, the 1U platform is still considered by many to be the standard. Deviating from the 1U platform to add more compute density (e.g. 2U 4-node systems) or more storage/ expansion card capabilities means that other systems are compared to the yardstick that is the 1U server.

Here, the Inspur NF5180M5 does not disappoint. It is very well laid out. There are a huge number of storage configuration options, including up to four M.2 SSDs without even using an expansion slot. Three expansion slots plus two OCP networking slots mean that one can add many new features, such as an NVIDIA Tesla T4 for AI inferencing. Further, the NF5180M5 supports Intel's entire LGA3647 SKU stack, making it able to service low-power and even high-performance scenarios.

One item we would have liked to have seen is onboard dual 1GbE networking, as we know many of our users use 1GbE for provisioning. With dual OCP networking slots, we understand this omission: one can configure an adapter, even a 10GbE adapter, in one slot for provisioning while the other is used for data.

Prior to starting this review, we were told that the Inspur NF5180M5 is a very popular model in the US and EMEA markets. The flexibility this system offers is a clear reason for that.