Inspur Systems NF8260M5 4P Intel Xeon OCP Server Review

07 August 2020

Reprinted from servethehome.com

Inspur Systems NF8260M5 Internal Overview

The Inspur Systems NF8260M5 is a four-socket Intel Xeon server with a unique twist. Inspur and Intel jointly developed the “Crane Mountain” platform specifically for the cloud service provider (CSP) market. While many of the four-socket systems we review are developed for enterprises, this is the first 4P server we have seen specifically designed for CSPs. Inspur and Intel are going a step further and contributing this design to the OCP community so others can benefit from the design.

Our introduction to the Inspur Systems NF8260M5 was through Intel’s Jason Waxman who held a system up on stage at OCP Summit 2019 during Intel’s keynote. He noted at the time that this would support 2nd Gen Intel Xeon Scalable processors and Intel Optane DC Persistent Memory.

Jason Waxman 4 Socket Cascade Lake Server

After the show, we immediately got one in our test lab and our review here today shows what this OCP contribution has to offer.

Inspur Systems NF8260M5 Overview

In terms of top-end packaging, this is a 2U server with 24x 2.5″ hot-swap bays up front. We are going to discuss storage in a bit; however, this form factor is a big deal. Previous-generation Intel Xeon E7 series quad-socket servers were often 4U designs. Newer quad-socket designs like the Inspur Systems NF8260M5 are 2U, effectively doubling socket density for these scale-up platforms.

Inspur Systems NF8260M5 Front

Inside, we find the heart of the system: four Intel Xeon Scalable CPUs. These can be either first or second generation Intel Xeon Scalable processors. In our test system, we utilized a range of second generation options. Higher-end SKUs top out at 28 cores and 56 threads, meaning the entire system can handle up to 112 cores and 224 threads.

Inspur Systems NF8260M5 Internal Overview

Each CPU is flanked by a maximum set of twelve DIMM slots, for 48 DIMM slots total. One can utilize 128GB LRDIMMs for up to 6TB of memory, or reach 12TB using the 256GB LRDIMMs that are coming onto the market now. One can also utilize Intel Optane DC Persistent Memory Modules, or Optane DCPMM.

Intel Optane DCPMM 6TB Capacity

These modules combine the persistence of NVMe SSDs with higher speeds and lower latency from being co-located in the DRAM channels. Our system, for example, has 24x 32GB DDR4 RDIMMs (768GB) along with 24x 256GB Optane DCPMMs (6TB) for a combined 6.75TB of memory in the system. That is absolutely massive.

Inspur Systems NF8260M5 DDR4 And Optane DCPMM Support

In the middle of the chassis, we find six hot-swap fans cooling this massive system.

Inspur Systems NF8260M5 Hot Swap Fan

The rear I/O is handled mainly via risers; there are three sets of risers across the chassis. The Inspur Systems NF8260M5 also has an OCP mezzanine slot for networking that does not consume a riser. Our single-riser configuration was enough to handle the full storage configuration for our system.

Inspur Systems NF8260M5 Middle Riser Populated

Storage is segmented into three different sets of eight hot-swap bays. There are three PCBs each servicing a set of eight bays. Using this method, the NF8260M5 can utilize NVMe, SAS/SATA, or combined backplanes depending on configuration needs.

Inspur Systems NF8260M5 Storage Backplane

Power is supplied via two power supplies. The PSUs in our test unit were 800W units, which do not provide redundancy for our configuration. Inspur has 1.3kW, 1.6kW, and 2kW versions, which we would recommend if you were configuring a similar system.

Inspur Systems NF8260M5 800W PSUs

Rear I/O without expansion risers and the OCP mezzanine slot is limited to a management RJ-45 network port, two USB 3.0 ports, and legacy VGA plus serial ports.

Inspur Systems NF8260M5 Rear

On a quick usability note, the Inspur NF8260M5 was serviceable while extended on its rails. Some lower-end units require the chassis to be completely removed from the rack for service.

Inspur Systems NF8260M5 Service Out Of Rack

The top cover had a nice latching mechanism and good documentation of the system’s main features on its underside. This is a feature we now expect from top-tier servers.

Inspur Systems NF8260M5 Cover Diagram

Overall, this streamlined hardware design worked well for us in testing.

Next, we are going to take a look at the Inspur Systems NF8260M5 test configuration and management, before continuing with our review.

Inspur Systems NF8260M5 Test Configuration

Our Inspur Systems NF8260M5 test configuration was robust and similar to what we expect to see at many cloud service providers using this type of system:

  • System: Inspur Systems NF8260M5
  • CPU: 4x Intel Xeon Platinum 8276L, 4x Xeon Platinum 8260, 4x Xeon Gold 6242
  • RAM: 768GB (24x 32GB) DDR4-2933 at 2666MHz
  • Optane DCPMM: 6TB (24x 256GB) Intel Optane DC Persistent Memory Modules
  • NVMe SSD: 6x Intel DC P4510 2.0TB
  • RAID SATA SSD: 4x Samsung PM863a 480GB
  • HDDs: 12x Seagate Exos 2TB 7E2000, 2x Seagate Exos 600GB 15E900
  • 25GbE NIC: Mellanox ConnectX-4 Lx dual-port 25GbE OCP
  • Riser: Single riser configuration

Storage included a fairly broad range of drives. Here are the twenty-four drives outside of the chassis, ready for installation.

Inspur Systems NF8260M5 Test Configuration Storage Out Of Chassis

One can see we had drives from three different vendors represented. We had NVMe as well as SAS3 and SATA III drives. We also had both SSDs and HDDs installed, which shows the flexibility of the platform.

Inspur Systems NF8260M5 Management

Inspur’s primary management is via IPMI and Redfish APIs. That is what most hyperscale and CSP customers will utilize to manage their systems. Inspur also includes a robust, customized web management platform as part of its management solution.
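
For readers who have not driven a BMC this way, here is a minimal sketch of Redfish-style management from a script. The BMC address, credentials, and system path are hypothetical placeholders rather than Inspur defaults; the reset action shown is the standard DMTF ComputerSystem.Reset.

```python
# Minimal sketch: querying a system and power cycling it over Redfish.
# The BMC address, credentials, and member path below are hypothetical;
# exact paths vary by BMC firmware.
import requests

BMC = "https://10.0.0.100"          # hypothetical BMC address
AUTH = ("admin", "password")        # hypothetical credentials

# Discover the first system in the Systems collection
systems = requests.get(f"{BMC}/redfish/v1/Systems", auth=AUTH, verify=False).json()
system_path = systems["Members"][0]["@odata.id"]

# Read basic inventory and the current power state
system = requests.get(f"{BMC}{system_path}", auth=AUTH, verify=False).json()
print(system.get("Model"), system.get("PowerState"))

# Request a graceful restart via the standard ComputerSystem.Reset action
reset_uri = system["Actions"]["#ComputerSystem.Reset"]["target"]
requests.post(f"{BMC}{reset_uri}", json={"ResetType": "GracefulRestart"},
              auth=AUTH, verify=False)
```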

Inspur Web Management Interface Dashboard

There are key features we would expect from any modern server, including the ability to power cycle a system and remotely mount virtual media. Inspur also has an HTML5 iKVM solution that includes these features. As of this review being published, some other server vendors still do not have a fully featured HTML5 iKVM with virtual media support.

Inspur Management HTML5 IKVM With Remote Media Mounted

Another feature worth noting is the ability to set BIOS options via the web interface. That is a feature we see in solutions from top-tier vendors like Dell EMC, HPE, and Lenovo, but one that many vendors in the market do not have.

Inspur Management BIOS Settings

Another web management feature that differentiates Inspur from lower-tier OEMs is the ability to create virtual disks and manage storage directly from the web management interface. Some solutions allow administrators to do this via Redfish APIs, but not web management. This is another great inclusion here.
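
As a rough illustration of the Redfish path to the same result, here is a hedged sketch using the standard DMTF storage model. The controller path, drive URIs, and credentials are hypothetical; Inspur's actual resource names may differ.

```python
# Hedged sketch of creating a RAID1 virtual drive through the standard
# Redfish storage model (POST to a Volumes collection). The controller
# path, drive URIs, and credentials are hypothetical placeholders.
import requests

BMC = "https://10.0.0.100"
AUTH = ("admin", "password")
volumes_uri = f"{BMC}/redfish/v1/Systems/1/Storage/RAID.Slot.1/Volumes"  # hypothetical

payload = {
    "Name": "os-vd0",
    "RAIDType": "RAID1",
    "Links": {
        "Drives": [
            {"@odata.id": "/redfish/v1/Systems/1/Storage/RAID.Slot.1/Drives/0"},
            {"@odata.id": "/redfish/v1/Systems/1/Storage/RAID.Slot.1/Drives/1"},
        ]
    },
}
resp = requests.post(volumes_uri, json=payload, auth=AUTH, verify=False)
print(resp.status_code, resp.headers.get("Location"))
```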

Inspur Management Storage Virtual Drive Creation

Based on comments in our previous articles, many of our readers have not used an Inspur Systems server and therefore have not seen the management interface.

It is certainly not the most entertaining subject; however, if you are considering these systems, you may want to know what the web management interface looks like on each machine, and this tour can be helpful.

Next, we are going to take a look at the Inspur Systems NF8260M5 CPU performance.

Inspur NF8260M5 CPU Performance

At STH, we have an extensive set of performance data from every major server CPU release. Running through our standard test suite generated over 1,000 data points for each set of CPUs. We are cherry-picking a few here to give some sense of performance scaling across a range of CPUs.

Python Linux 4.4.2 Kernel Compile Benchmark

This is one of the most requested benchmarks for STH over the past few years. The task is simple: we use a standard configuration file and the Linux 4.4.2 kernel from kernel.org, then compile the standard auto-generated configuration utilizing every thread in the system. We express results as compiles per hour to make them easier to read.
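
For those who want to approximate the metric, here is a minimal sketch of the compiles-per-hour idea, assuming the Linux 4.4.2 source is already unpacked locally and using defconfig as the auto-generated configuration; it mirrors the concept rather than STH's exact harness.

```python
# Minimal sketch of a compiles-per-hour measurement, assuming the Linux
# 4.4.2 source tree is unpacked in ./linux-4.4.2. Not the exact STH harness.
import os
import subprocess
import time

src = "./linux-4.4.2"
threads = os.cpu_count()  # would be 224 threads on the 4x Platinum 8276L setup

# Generate a default configuration, then time a full parallel build
subprocess.run(["make", "defconfig"], cwd=src, check=True)

start = time.time()
subprocess.run(["make", f"-j{threads}"], cwd=src, check=True)
elapsed = time.time() - start

print(f"One compile took {elapsed:.1f}s -> {3600 / elapsed:.1f} compiles per hour")
```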

Inspur Systems NF8260M5 4P Linux Kernel Compile Benchmark

Here we saw fairly good performance from the quad Intel Xeon Platinum 8276L setup. We have been running this test for some time, so we can say that the quad Intel Xeon Gold 6242 performs better than the quad-socket Intel Xeon E7 V3 family, and only the Intel Xeon E7-8890 V4 4P configuration was able to best the Gold 6242 configuration here. Still, our focus here is showing the performance of different CPU options within the same machine.

c-ray 1.1 Performance

We have been using c-ray for our performance testing for years now. It is a ray tracing benchmark that is extremely popular for showing differences between processors under multi-threaded workloads. We are using our new Linux-Bench2 8K render to show those differences.

Inspur Systems NF8260M5 4P C Ray 8K Benchmark

We see the Intel Xeon Gold series as where most customers will buy unless they have specific deals or needs that push them to the Intel Xeon Platinum line. One can see that the additional cores help the quad Intel Xeon Platinum 8260 setup here.

7-zip Compression Performance

7-zip is a widely used, cross-platform compression/decompression program. We started using it during our early days of Windows testing, and it is now part of Linux-Bench.
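
A quick way to reproduce a similar number on your own hardware is 7-Zip's built-in benchmark. The sketch below is an assumption of a comparable run, not the exact Linux-Bench invocation; the binary may be named 7zz or 7zr depending on the package.

```python
# Hedged sketch of the built-in 7-Zip benchmark ("7z b"), which reports
# compression and decompression ratings in MIPS. The thread count is pinned
# to the 224 threads of the full 4-socket configuration.
import subprocess

result = subprocess.run(["7z", "b", "-mmt=224"],
                        capture_output=True, text=True, check=True)

# Print only the summary lines that carry the MIPS ratings
for line in result.stdout.splitlines():
    if "Avr:" in line or "Tot:" in line:
        print(line)
```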

Inspur Systems NF8260M5 4P 7zip Compression Benchmark

For compression workloads, which are again popular when moving data in AI and deep learning training servers, we get a solid step function in performance scaling.

OpenSSL Performance

OpenSSL is widely used to secure communications between servers and is an important component of many server stacks. We first look at our sign tests:

Inspur Systems NF8260M5 4P OpenSSL Sign Benchmark

Here are the verify results:

Inspur Systems NF8260M5 4P OpenSSL Verify Benchmark

As one can see, this is another test where performance scales well with both cores and clock speeds.
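
For context, sign and verify figures of this kind can be generated with the openssl speed utility. The sketch below is only an approximation of such a run; the rsa2048 key size and thread count are assumptions rather than the exact STH parameters.

```python
# Hedged sketch of an RSA sign/verify throughput run with "openssl speed".
# The -multi flag forks one worker per thread; rsa2048 is an assumption.
import subprocess

result = subprocess.run(
    ["openssl", "speed", "-multi", "224", "rsa2048"],
    capture_output=True, text=True, check=True,
)

# The tail of the report lists the aggregate sign/s and verify/s figures
for line in result.stdout.splitlines()[-4:]:
    print(line)
```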

Chess Benchmarking

Chess is an interesting use case since it has almost unlimited complexity. Over the years, we have received requests to bring back chess benchmarking. We have been profiling systems and are ready to start sharing results:

Inspur Systems NF8260M5 4P Chess Benchmark

On the chess side, we see great performance scaling up to 112 cores/ 224 threads with the Intel Xeon Platinum 8276L configuration.

STH STFB KVM Virtualization Testing with Optane DCPMM

One of the other workloads we wanted to share is from one of our DemoEval customers. We have permission to publish the results, but the application being tested is closed source. This is a KVM virtualization-based workload where our client tests how many VMs it can keep online at a given time while completing work under the target SLA. Each VM is a self-contained worker.

Inspur Systems NF8260M5 4P KVM Virtualization STH STFB 2 Benchmark

Here, one can see the impact of using Intel Optane DCPMM in the system: with more memory available, we get better performance. The deltas here are also driven by CPU performance on the virtualized workloads, which is why we see a separation between the three different SKU levels. If we had simply used all 32GB DIMMs, for a 1.5TB memory capacity, we would have been completely memory capacity limited. Here, at least, we can see a combined performance improvement from both core performance and memory expansion with Intel Optane DCPMM.

The company also has a CPU-light back-end workload that is mostly dependent on Redis performance and memory capacity, with less CPU stress.

Inspur Systems NF8260M5 4P KVM Virtualization STH STFB 3 Benchmark

Here we can see the impact of the 2nd Generation Intel Xeon Scalable SKU differentiation in the system. The quad Intel Xeon Platinum 8276L configuration leads here by a large margin.

Next, we are going to take a look at the Inspur Systems NF8260M5 storage and networking performance before moving on to power and the STH Server Spider.

Inspur NF8260M5 Storage Performance

We tested a few different NVMe storage configurations because this is one of the Inspur Systems NF8260M5’s key differentiation points. Previous-generation servers often utilized a single NVMe storage device, if any at all. There are eight SAS3/SATA bays available, but we are assuming those are being used for OS and bulk storage given the system’s design. Instead, we are testing the four NVMe drives that will likely be used for high-performance storage.
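
As a hedged example of the kind of workload used to exercise drives like these, here is a simple 4K random read fio run against one NVMe device; the device path, queue depth, and runtime are assumptions, not our exact test parameters.

```python
# Hedged sketch of a 4K random read fio run against one of the NVMe SSDs.
# The device node, queue depth, and runtime are assumptions for illustration.
import subprocess

subprocess.run([
    "fio",
    "--name=p4510-randread",
    "--filename=/dev/nvme0n1",   # hypothetical device node
    "--ioengine=libaio",
    "--direct=1",
    "--rw=randread",
    "--bs=4k",
    "--iodepth=32",
    "--numjobs=4",
    "--runtime=60",
    "--time_based",
    "--group_reporting",
], check=True)
```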

Inspur Systems NF8260M5 Storage Performance

Here one can see the impressive performance from the NVMe SSDs. When put into the context of arrays, the performance of NVMe SSDs clearly outpaces the SAS and SATA counterparts while offering excellent capacity.

With the Inspur NF8260M5’s flexible storage backplane and rear I/O riser configuration, one can configure the system to host a large number of NVMe SSDs. In the coming months, NVMe SSDs will continue to displace both SATA and SAS3 alternatives making this chassis well equipped for trends in the server storage market.

Also, as we have noted, we are testing this system with Optane DCPMM. Using App Direct mode with the DCPMM modules can yield well over 100GB/s of storage throughput. We focused our performance testing on Memory Mode; however, over the next 12 to 18 months we expect more applications will be able to use DCPMM in App Direct mode, greatly enhancing the storage story of the NF8260M5.
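
For reference, switching the DCPMM modules between these modes is done with the standard ipmctl and ndctl tools. This is a hedged sketch of the typical flow, not Inspur-specific tooling; a reboot is required after creating a provisioning goal.

```python
# Hedged sketch of provisioning Optane DCPMM with the standard ipmctl and
# ndctl tools. A reboot is required after creating a goal; namespace options
# shown here are illustrative assumptions.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Memory Mode: all DCPMM capacity becomes volatile memory, DRAM acts as a cache
run(["ipmctl", "create", "-goal", "MemoryMode=100"])

# ...or App Direct: expose persistent regions, then carve an fsdax namespace
# run(["ipmctl", "create", "-goal", "PersistentMemoryType=AppDirect"])
# run(["ndctl", "create-namespace", "--mode=fsdax"])
```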

Inspur NF8260M5 Networking Performance

We used the Inspur Systems NF8260M5 with a dual-port Mellanox ConnectX-4 Lx 25GbE OCP NIC. The server itself supports riser configurations for more networking; this is simply all we could put in our test system.
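
As a simple illustration of how a 25GbE link like this can be exercised, here is a hedged iperf3 sketch; the peer address and stream count are assumptions, not the exact methodology behind the chart below.

```python
# Hedged sketch of a 25GbE throughput check with iperf3 against a peer on
# the same fabric. The peer address and stream count are assumptions.
import subprocess

subprocess.run([
    "iperf3",
    "-c", "192.168.25.2",   # hypothetical iperf3 server on the 25GbE network
    "-P", "4",              # four parallel streams to fill the link
    "-t", "30",             # 30 second run
], check=True)
```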

Inspur Systems NF8260M5 Network Performance

Networking is an important aspect, as CSPs are commonly deploying 25GbE infrastructure, while many deep learning clusters use EDR InfiniBand or 100GbE as their fabric of choice for moving data from the network to GPUs. We would be tempted to move to 100GbE if we had GPUs connected to this system and more riser slots.

Next, we are going to take a look at the Inspur Systems NF8260M5 power consumption before looking at the STH Server Spider for the system and concluding with our final words.

Inspur NF8260M5 Power Consumption

Our Inspur NF8260M5 test server used a dual power supply configuration, and we wanted to measure how it performed using the Intel Xeon Platinum 8276L CPUs.

  • Idle: 0.77kW
  • STH 70% Load: 1.1kW
  • 100% Load: 1.3kW
  • Maximum Recorded: 1.5kW

Compared to dual socket systems this may seem low, but one must remember that this is a 112 core system with 6.75TB of memory in its DIMM channels and 24 storage devices. This is a very large system indeed.

Inspur claims that the 4-socket design saves around 87W over two dual-socket servers, yielding around $522 in three-year TCO savings. We were told at OCP Summit 2019 that the $6 per watt figure behind that math comes from one of Inspur’s CSP customers. For 2U4N testing, we have a methodology to validate this type of claim; see How We Test 2U 4-Node System Power Consumption. We do not have an equivalent for four-socket servers, but may add one in the future.

Inspur NF8260M5 4P Server Savings At OCP Summit 2019

Note these results were taken using a 208V Schneider Electric / APC PDU at 17.7C and 72% RH. Our testing window shown here had a +/- 0.3C and +/- 2% RH variance.

STH Server Spider: Inspur NF8260M5

In the second half of 2018, we introduced the STH Server Spider as a quick reference for where a server system’s aptitude lies. Our goal is to give a quick visual depiction of the types of parameters that a server is targeted at.

STH Server Spider Inspur Systems NF8260M5

As you can see, the Inspur Systems NF8260M5 is designed to provide a large node in a 2U form factor. While 2U4N designs offer more compute density in terms of cores per U, the 2U four-socket design is very dense for a four-socket server. It provides 48 DIMM slots for DDR4 or Optane DCPMM in a single 2U chassis. The Inspur NF8260M5 can also house GPUs internally, and externally via expansion chassis.

Final Words

There are two aspects to this review. The first is that the Inspur Systems NF8260M5 offers a strong 4-socket platform for the CSP market. After using the server for some time, we can see how Inspur and Intel designed the system for broader use. With industry-standard BMC and management functionality, OCP NICs instead of proprietary NICs, and standard components, we see this as very attractive to a class of customers who want to use open platforms to manage their infrastructure. For example, if you wanted an OpenStack private cloud or Kubernetes cluster with 4-socket nodes, this is the type of design you would want.

We found the performance to be excellent and the server to be highly serviceable. The second aspect is that Inspur has a vision for this platform well beyond the single node, one that includes external JBOFs and JBOGs for building massive x86 nodes. We think that the OCP community will find interesting uses for the Crane Mountain platform.
