1. What is edge computing?
Edge computing is an open platform, located at the edge of the network close to devices or data sources, that integrates core networking, computing, storage, and application capabilities. Working together with cloud computing, it drives digital transformation across industries by providing intelligent interconnection services close to where data is generated, meeting key requirements of digital transformation such as real-time operation, business intelligence, data aggregation and interoperability, and security and privacy protection. According to a study by ITU-T, the telecommunication standardization sector of the International Telecommunication Union, by 2020 each person will produce 1.7 MB of data per second and 237 million wearables will be shipped. IDC likewise predicts that 50% of IoT networks will face bandwidth constraints by 2018, and that 40% of data will need to be analyzed, processed, and stored at the network edge, a share that will exceed 50% by 2025.
2. What are the advantages of edge computing over cloud computing?
1) Advantage 1: real-time processing. Edge computing enables networked devices to process data where it is formed, at the "edge". Consider the recent wave of self-driving cars: each is effectively a high-performance computer that relies on an array of sensors to collect data. To run safely and reliably, it must react to its surroundings immediately; any delay in processing can be fatal. With cloud computing, data processing takes place primarily in the cloud, and transferring data back and forth between the vehicle and central servers can take several seconds, a time span far too long for safe driving. Edge computing comes in handy for such "real-time" computing: it lets the self-driving car process data on the vehicle side, without having to move data back and forth between the vehicle and the cloud.
2) Advantage 2: in-network intelligence. Many functions can be handled directly at the edge node. Like capable department heads who do not report every detail to the boss but simply deliver plans, ideas, and results, edge nodes can directly process work that, in a traditional architecture, would have to go back to a central server, and return the corresponding results on the spot. Examples include authentication, log filtering, data consolidation, image processing, and TLS (HTTPS) session setup.
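As a minimal sketch of the log-filtering example above, the snippet below shows an edge node keeping only the log entries the central server actually needs. All names and the log format are illustrative assumptions, not a real product API.

```python
# Hypothetical sketch: an edge node filters raw log lines locally and
# forwards only the relevant entries, instead of shipping every line
# to a central server. Log format "<LEVEL> <message>" is an assumption.

def edge_filter_logs(lines, keep_levels=("ERROR", "WARN")):
    """Return only the log lines worth sending upstream to the center."""
    kept = []
    for line in lines:
        level = line.split(" ", 1)[0]  # first token is the log level
        if level in keep_levels:
            kept.append(line)
    return kept

raw = [
    "INFO sensor heartbeat ok",
    "ERROR camera 3 offline",
    "INFO sensor heartbeat ok",
    "WARN disk 80% full",
]
to_center = edge_filter_logs(raw)
print(to_center)  # only the ERROR and WARN lines travel upstream
```

In this sketch, the routine INFO heartbeats never leave the edge node, which is exactly the bandwidth saving the section describes.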
3) Advantage 3: data aggregation. A physical device often produces a large amount of data; this data can be filtered at the edge and then aggregated before being sent to the center for processing, putting the edge's computing power to use. To continue the earlier analogy: the heads of the company's various departments collect the problems and difficulties their departments face, summarize them, and report upward, so what the boss sees is a concise, well-organized data set they have already put together. That is one of the advantages of edge computing.
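The filter-then-aggregate pattern above can be sketched as follows. This is an illustrative example under assumed names: instead of streaming every raw sensor reading to the cloud, the edge node reduces a batch to one compact summary record.

```python
# Hypothetical sketch: an edge node aggregates a batch of raw sensor
# readings (e.g. temperatures) into a small summary (count, min, max,
# mean) and uploads only that record to the central server.

from statistics import mean

def summarize_readings(readings):
    """Reduce a batch of raw readings to one compact record."""
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": round(mean(readings), 2),
    }

raw_readings = [21.0, 21.4, 20.9, 35.2, 21.1]  # raw data stays at the edge
summary = summarize_readings(raw_readings)
print(summary)  # one small record travels to the center
```

Sending one summary dict instead of every reading is the "straightforward data set" the department-head analogy describes.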
3. What is Inspur's edge computing server?
The new Inspur NE5260M5 is designed for edge computing applications in the 5G era. It is 2U high, 19 inches wide, and 430 mm deep, just over half the depth of a traditional standard server. Despite the small footprint, it does not sacrifice performance: it uses Intel's latest Xeon Scalable processors, supports 2 processors and 16 memory slots (2 of which support AEP persistent memory), integrates 2×10G SFP+ network ports on the motherboard, and provides six PCIe 3.0 slots. On the storage side, it supports six 2.5-inch HDDs and two M.2 SSD interfaces. Small size and strong performance are the two goals Inspur pursued in designing this edge computing server. The product combines server technology standards with telecom equipment standards, so it can be deployed directly alongside telecom equipment in telecom central-office racks. In designing the NE5260M5 for the extreme deployment environments and business applications of edge data centers, Inspur applied many optimization techniques at different levels.
1) Powerful performance:
- Each CPU supports six memory channels and eight DDR4 DIMM slots at speeds of up to 2933 MHz.
- First Inspur server to support Optane DCPMM, for improved performance, efficiency, and stability.
- Supports up to 6+2 NVMe SSDs, providing low latency, greater capacity, and better storage access performance.
- Fully NUMA-balanced design for higher processing performance.
2) Adaptive scalability:
- Supports 6 PCIe 3.0 slots: either 2 PCIe x16 + 4 PCIe x8, or 4 PCIe x16.
- Supports up to 2 double-width PCIe x16 GPUs at 300 W TDP, or 4 FHHL PCIe x16 GPU cards at 75 W TDP.
3) Adaptability for extreme environments:
- Compatible with wall-mount configurations: the server can be rack- or wall-mounted to suit the deployment environment.
- With a chassis depth of just 430 mm, this space-saving server is nearly one third shorter than other general-purpose servers.
- Operates at temperatures between -5℃ and 50℃ and humidity levels between 5% and 85%.
- Class A electromagnetic compatibility and a dust-proof, corrosion-resistant, shock-resistant design ensure compliance with telecommunications standards.

4) AI deployment for edge environments:
- The advanced 2nd Generation Intel® Xeon® Scalable processors support the latest x86 vector instruction set (AVX-512 VNNI), enabling the server to accelerate deep learning and convolutional-neural-network-based algorithms.
- Supports up to two double-width or four single-width AI accelerator cards (NVIDIA V100/T4 GPUs and FPGA cards with similar capabilities), as well as mixed deployment, for flexible AI application solutions.

5) Simplified operation and serviceability:
- Modular design for front-end operation and serviceability, enabling simplified and more efficient server operation and maintenance.
- Front I/O design with hot/cold air duct isolation for greater thermal efficiency in the data center.