Cascade Lake (CSL/CLX) is Intel’s successor to Skylake: a 14 nm microarchitecture for enthusiasts and servers. In Intel’s Process-Architecture-Optimization model, Cascade Lake is an optimization of Skylake. It is Intel’s first microarchitecture to support 3D XPoint-based memory modules.

It also features Deep Learning Boost instructions and hardware mitigations for Meltdown and Spectre while still delivering higher overall performance, along with Optane support, faster DRAM, new configurations, and better specialization. We’re especially excited to talk about Intel’s new Cascade Lake processors because our recently released Enterprise 2-Dodeca server is powered by this technology, taking performance to a whole new level.

For desktop enthusiasts, Cascade Lake is branded as Core i7 and Core i9 processors (under the Core X series). For scalable server-class processors, Xeon still uses the ‘Platinum / Gold / Silver / Bronze’ nomenclature, but this generation offers up to 56 cores.

Cascade Lake Release Date

Cascade Lake was released on April 2, 2019. Cascade Lake W for workstations was released on June 3, 2019. Intel officially launched new Xeon Scalable SKUs on February 24, 2020, amounting to 86 different processors for the Cascade Lake family.

Cascade Lake Servers

On the server end, Cascade Lake introduces initial in-hardware Spectre and Meltdown mitigations, including Variant 2, Variant 3, and L1TF. Chips are fabricated on an enhanced 14 nm process, which lets Intel extract additional power efficiency and clock these processors higher.

Key changes from Skylake

  • Up to 56 cores, 12 DDR4 channels
  • New AVX-512 VNNI logic on Port 0 and Port 1 as part of the FMA units
  • Higher frequency (100-300 MHz higher for both base and turbo)
  • Security
    • Hardware mitigations for CVE-2017-5715 (Spectre, Variant 2)
    • Hardware mitigations for CVE-2017-5754 (Meltdown, Variant 3)
    • Hardware mitigations for CVE-2018-3640 (Rogue System Register Read (RSRE), Variant 3a)
    • Hardware mitigations for CVE-2018-3620/CVE-2018-3646 (L1 Terminal Fault, Foreshadow)
    • Hardware mitigations for CVE-2018-12130/CVE-2018-12126/CVE-2018-12127/CVE-2019-11091 (MDS; MFBDS, RIDL, MSBDS, Fallout, MLPDS, MDSUM)
      • Note that while steppings 6 and 7 are fully mitigated, the earlier stepping 5 is not protected against MSBDS, MLPDS, or MDSUM
  • New CPUID level type field for “die” (see the sketch after this list)
  • Integrated Memory Controller
    • Added support for persistent memory
      • Support for DDR-T / Optane DIMMs
        • Apache Pass DIMMs
  • Memory
    • Higher data rate (2933 MT/s, up from 2666 MT/s)
    • Standard support for up to 1 TiB per socket (up from 768 GiB)
    • Extended memory support for up to 2 TiB per socket (up from 1.5 TiB)
    • Large memory support for up to 4.5 TiB per socket
  • I/O
    • 64 PCIe 3.0 lanes exposed to the platform (up from 48) (Xeon W only)
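The new “die” level type appears in CPUID leaf 0x1F, the V2 extended topology enumeration. Below is a minimal sketch (ours, not Intel’s) of how software might walk that leaf with GCC or Clang on x86; the level-type values (1 = SMT, 2 = core, 5 = die) are our reading of Intel’s documentation.

```c
/* Minimal sketch: enumerate CPUID leaf 0x1F sub-leaves and report whether
 * a "die" topology level is present. Requires GCC/Clang <cpuid.h>. */
#include <cpuid.h>
#include <stdio.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    for (unsigned int sub = 0; sub < 8; sub++) {
        if (!__get_cpuid_count(0x1F, sub, &eax, &ebx, &ecx, &edx))
            break;                          /* leaf 0x1F not supported */

        unsigned int level_type = (ecx >> 8) & 0xFF;  /* ECX[15:8] */
        if (level_type == 0)
            break;                          /* end of topology levels */

        printf("sub-leaf %u: level type %u, %u logical processors\n",
               sub, level_type, ebx & 0xFFFF);
        if (level_type == 5)
            printf("  -> die-level topology reported\n");
    }
    return 0;
}
```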

Overview

Cascade Lake is Intel’s direct successor to the Skylake server microarchitecture. It is designed to be compatible with the Skylake parts (LGA-3647) and utilize the Purley platform. To that end, Cascade Lake shares the same socket and pinout as well as the same core count, cache size, and I/O capabilities.

As noted above, Cascade Lake brings initial in-hardware Spectre and Meltdown mitigations, and its enhanced 14 nm process gives Intel additional power-efficiency headroom to clock these processors higher.

Intel noted that targeted performance improvements were applied to some of the critical paths to make this possible. Although the core architecture is largely identical to that of Skylake, Cascade Lake introduces support for AVX-512 VNNI, which is designed to improve the performance of artificial intelligence workloads by improving the throughput of tight inner convolutional loop operations.
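As a concrete (if simplified) illustration of what VNNI does, the sketch below uses the AVX-512 VNNI intrinsic _mm512_dpbusd_epi32 (VPDPBUSD), which fuses the multiply-and-accumulate sequence that previously required three instructions (VPMADDUBSW, VPMADDWD, VPADDD) in an int8 inner loop. The function and buffer names are illustrative only; compile with -march=cascadelake or -mavx512vnni.

```c
/* Minimal sketch of a VNNI int8 dot-product inner loop.
 * Accumulates a[0..n) (unsigned 8-bit) times w[0..n) (signed 8-bit)
 * into sixteen 32-bit lanes; n is assumed to be a multiple of 64. */
#include <immintrin.h>
#include <stddef.h>
#include <stdint.h>

static __m512i dot_u8s8(const uint8_t *a, const int8_t *w, size_t n)
{
    __m512i acc = _mm512_setzero_si512();
    for (size_t i = 0; i < n; i += 64) {
        __m512i va = _mm512_loadu_si512((const void *)(a + i)); /* activations */
        __m512i vw = _mm512_loadu_si512((const void *)(w + i)); /* weights */
        /* One instruction: multiply u8*s8 pairs, sum groups of four,
         * and add the result into the 32-bit accumulators. */
        acc = _mm512_dpbusd_epi32(acc, va, vw);
    }
    return acc;
}
```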

The chief modification in Cascade Lake is the overhaul of the integrated memory controller (IMC) to introduce support for persistent memory. The IMC on Cascade Lake is capable of interfacing with both DDR4 DIMMs and Intel’s Optane DC persistent memory DIMMs.

Memory channels can be shared between DDR4 and Optane DC modules; for example, a single channel can hold a regular DDR4 DIMM in one slot and an Optane DC DIMM in the other. All in all, Optane DC DIMMs allow for more than 3 TiB of system memory per socket.
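How applications actually touch that persistent memory depends on the mode; as a hedged sketch of the App Direct path (assuming a DAX-mounted filesystem at /mnt/pmem, a placeholder path, and PMDK’s libpmem library), software maps a file on the Optane media and flushes stores explicitly:

```c
/* Minimal sketch, assuming App Direct mode with a DAX filesystem at
 * /mnt/pmem (placeholder). Build with PMDK installed and link with -lpmem. */
#include <libpmem.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    size_t mapped_len;
    int is_pmem;

    /* Create (or open) a 64 MiB file on the persistent-memory filesystem
     * and map it directly into the address space. */
    char *buf = pmem_map_file("/mnt/pmem/example", (size_t)64 << 20,
                              PMEM_FILE_CREATE, 0666, &mapped_len, &is_pmem);
    if (buf == NULL) {
        perror("pmem_map_file");
        return 1;
    }

    /* Ordinary stores go to the Optane DIMMs; pmem_persist() flushes the
     * CPU caches so the data survives power loss. */
    strcpy(buf, "hello, persistent memory");
    if (is_pmem)
        pmem_persist(buf, strlen(buf) + 1);
    else
        pmem_msync(buf, strlen(buf) + 1);   /* fallback for non-pmem mappings */

    pmem_unmap(buf, mapped_len);
    return 0;
}
```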

Cascade Lake-based servers make use of Intel’s mesh interconnect architecture. In this configuration, the cores, caches, and memory controllers are organized in rows and columns, each with dedicated connections running through every row and column, allowing the shortest path between any two tiles, reducing latency and improving bandwidth. These processors are offered with 4 to 28 cores and 8 to 56 threads.

All models incorporate six channels of DDR4 supporting up to 12 DIMMs for a total of 1.5 TiB (extended models support 3 TiB). For I/O, all models incorporate 48 lanes (3×16) of PCIe 3.0.

An additional x4 PCIe 3.0 link is reserved exclusively for DMI to the Lewisburg (LBG) chipset. Selected models, specifically those with an F suffix, have an Omni-Path Host Fabric Interface (HFI) on-package.

Cascade Lake processors are designed for scalability, supporting 2-way, 4-way, and 8-way multiprocessing through Intel’s Ultra Path Interconnect (UPI) links, with two to three links offered per processor. High-end models add node controller support, allowing for even higher-way configurations (e.g., 32-way multiprocessing).

Top benefits

  • Faster time to value with Intel Select Solutions
  • Strong, capable platforms for the data-fueled enterprise
  • Next-generation platform for cloud-optimized, 5G-ready networks, and next-generation virtual networks
  • Breakthrough HPC and high-performance data analytics innovation

Foundational Enhancements

  • Higher Per-Core Performance: Up to 56 cores (9200 series) and up to 28 cores (8200 series), delivering high performance and scalability for compute-intensive workloads across compute, storage, and network usages
  • Greater Memory Bandwidth/Capacity: Support for Intel Optane DC persistent memory, enabling up to 36 TB of system-level memory capacity when combined with traditional DRAM; a 50% increase in memory bandwidth and capacity; and support for six memory channels and up to 4 TB of DDR4 memory per socket, with speeds up to 2933 MT/s (1 DPC)
  • Expanded I/O: 48 lanes of PCIe 3.0 bandwidth and throughput for demanding I/O-intensive workloads
  • Intel Ultra Path Interconnect (Intel UPI): Four UPI channels (9200 series) and up to three UPI channels (8200 series) increase platform scalability to as many as two sockets (9200 series) or up to eight sockets (8200 series), balancing improved throughput with energy efficiency
  • Intel Deep Learning Boost (Intel DL Boost) with VNNI: New Vector Neural Network Instructions (VNNI) bring enhanced artificial intelligence inference performance, with up to 30X improvement over the previous generation; 2nd Gen Intel Xeon Scalable processors help deliver AI readiness across the data center, to the edge and back
  • Intel Infrastructure Management Technologies (Intel IMT): A framework for resource management, Intel Infrastructure Management Technologies (Intel IMT), combines multiple Intel capabilities that support platform-level detection, reporting, and configuration. This hardware-enhanced monitoring, management, and control of resources can help enable greater data center resource efficiency and utilization
  • Intel Security Libraries for Data Center (Intel SecL-DC): A set of software libraries and components, Intel SecL-DC, enables Intel hardware-based security features. The open-source libraries are modular and have a consistent interface. They can be used by customers and software developers to more easily develop solutions that help secure platforms and help protect data using Intel hardware-enhanced security features at cloud scale
  • Intel Advanced Vector Extensions 512 (Intel AVX-512): With double the FLOPS per clock cycle compared to previous-generation Intel Advanced Vector Extensions 2 (Intel AVX2), Intel AVX-512 boosts performance and throughput for the most demanding computational tasks in applications such as modeling and simulation, data analytics and machine learning, data compression, visualization, and digital content creation (a short sketch of the vector-width difference follows this list)
  • Security without compromise: Limiting encryption overhead and performance impact on all secure data transactions
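As a rough illustration of the “double the FLOPS per clock” point above, the sketch below (ours, not Intel’s) contrasts a 512-bit FMA loop, which processes 16 single-precision values per instruction, with the 256-bit AVX2 equivalent that processes 8. Compile with -mavx512f and -mfma respectively; function names are illustrative.

```c
/* Minimal sketch: y = a*x + y with AVX-512 (16 floats/FMA) vs AVX2 (8 floats/FMA). */
#include <immintrin.h>

void saxpy_avx512(float *y, const float *x, float a, int n)
{
    __m512 va = _mm512_set1_ps(a);
    for (int i = 0; i + 16 <= n; i += 16) {            /* 16 floats per FMA */
        __m512 vx = _mm512_loadu_ps(x + i);
        __m512 vy = _mm512_loadu_ps(y + i);
        _mm512_storeu_ps(y + i, _mm512_fmadd_ps(va, vx, vy));
    }
}

void saxpy_avx2(float *y, const float *x, float a, int n)
{
    __m256 va = _mm256_set1_ps(a);
    for (int i = 0; i + 8 <= n; i += 8) {              /* 8 floats per FMA */
        __m256 vx = _mm256_loadu_ps(x + i);
        __m256 vy = _mm256_loadu_ps(y + i);
        _mm256_storeu_ps(y + i, _mm256_fmadd_ps(va, vx, vy));
    }
}
```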

Powerful CPUs, Powerful Servers

With companies like ServerPronto, you can have a dedicated server with up to 48 cores (Quad-Dodeca Monster) alongside other great benefits like fast provisioning, 24/7/365 award-winning support, and a fault-tolerant network backed by a 100% uptime SLA.

ServerPronto offers affordable and secure hosting service in all dedicated server packages.


Sources:

  1. Cascade Lake (microarchitecture)
  2. The Intel Second Generation Xeon Scalable: Cascade Lake, Now with Up To 56-Cores and Optane! 
  3. It’s a Cascade of 14nm CPUs
  4. Intel – Cascade Lake
  5. 2nd Gen Intel® Xeon® Scalable Processors Brief
  6. Xeon Scalable – Intel  
