Low-Latency Data Acceleration for Cloud-Scale Infrastructure
As real-time analytics and machine learning are integrated into virtually every mainstream enterprise application, there is a growing unmet need for cost-effective infrastructure that supports high-throughput, low-latency workloads at Cloud scale. Until recent innovations from Intel®, delivering high-value workloads with Cloud economics forced infrastructure architects into difficult but necessary tradeoffs. Vexata™ has developed a software solution that fully utilizes these processor and non-volatile memory innovations to directly address the challenges of delivering these premium, high-performance application tiers. These infrastructure challenges include, but are not limited to:
- Hosting high-throughput, low-latency applications at Cloud scale while reducing costs
- Protecting massive metadata volumes without expensive NVRAM or battery backup
- Delivering infrastructure that is fault tolerant without requiring specialized hardware
Cloud providers and enterprise infrastructure managers have traditionally taken very different architectural approaches to I/O-intensive workloads, each with a unique set of advantages and disadvantages. Enterprise infrastructure has historically been built upon costly, proprietary hardware-centric systems that scale up to optimize performance. Cloud providers, on the other hand, build infrastructure from standardized server hardware and specialized scale-out software, creating large clustered compute farms that are optimized for cost and massive scale; these systems, however, are challenged to maintain low-latency I/O at that scale.
Intel® Technology Innovations Enable New Cloud Platforms
With the availability of the latest generation of Intel Xeon® Scalable processors and Intel Optane™ DC persistent memory, Vexata systems can now aggregate petabytes of high-performance NVMe solid state media with unmatched performance and resilience. This aggregated capacity, coupled with standard PCIe FPGA co-processors, accelerates the performance of these compute systems, providing data access that is optimized for high-performance transactional and analytic applications. The recently announced Vexata VX-Cloud Data Acceleration Platform is built with a software architecture that fully utilizes these new technologies. Built from the same VX-OS software architecture as the VX-100 NVMe Array, VX-Cloud uses high-performance, Intel-powered compute platforms and NVMe SSDs instead of a hardware appliance to deliver cost-effective performance at scale.
Figure 1: Intel innovations in Xeon Scalable processors and Optane DC persistent memory
Utilizing the latest Intel Xeon Scalable processor and Optane DC persistent memory innovations, VX-Cloud can now scale capacity, throughput and IOPS independently, to orders of magnitude greater levels, and do so at cost points that are 30-50% lower than previously possible.
Vexata VX-Cloud, running on server infrastructure powered by Intel Xeon Scalable processors and Intel Optane DC persistent memory, achieves the following core objectives:
- Enables systems and server partners to deploy compute platforms equipped with densely packed NVMe solid state media, targeted at high-performance applications.
- Aligns closely with cloud service providers, enabling deployment of a software architecture that simultaneously accesses petabytes of data for computational HPC, analytics, AI and machine learning workloads.
- Guarantees resilience and fault tolerance, protecting critical metadata through a persistence layer (Intel Optane DC persistent memory) that scales to terabytes of Intel® 3D XPoint™ non-volatile capacity. This eliminates the need for costly NVRAM, battery backup or super caps to provide resilience.
Advantages of Advanced Software Architectures
To deliver on these critical performance parameters, the solution must be able to ingest large volumes of data without introducing latency that impacts application response times. In the Vexata VX-100 NVMe array, this is achieved through a tightly integrated hardware and software stack that keeps internal latencies below 10 µs. The VX-100 holds its metadata in volatile DRAM and relies on super capacitors for resilience. VX-Cloud, on the other hand, runs on standard servers and addresses metadata resilience through Intel Optane DC persistent memory, maintaining performance and scale while delivering significant cost savings, as sketched below.
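To make the metadata-persistence contrast concrete, the following is a minimal, hypothetical sketch of how an application-level metadata record could be persisted directly on Intel Optane DC persistent memory (App Direct mode) using PMDK's libpmem. This is not VX-OS source code; the file path, record layout and field names are assumptions chosen purely for illustration.

```c
/*
 * Hypothetical sketch: persisting a small metadata record on Intel Optane DC
 * persistent memory (App Direct mode) via PMDK's libpmem. Not VX-OS code;
 * the path and record layout are illustrative assumptions.
 *
 * Build (assuming PMDK is installed):  cc meta.c -lpmem -o meta
 */
#include <libpmem.h>
#include <stdio.h>

struct meta_record {            /* hypothetical metadata entry */
    unsigned long lba;          /* logical block address */
    unsigned long phys;         /* physical media location */
    unsigned long generation;   /* update counter */
};

int main(void)
{
    size_t mapped_len;
    int is_pmem;

    /* Map a file on a DAX-mounted persistent-memory filesystem. */
    struct meta_record *rec = pmem_map_file("/mnt/pmem0/metadata.bin",
                                            sizeof(*rec), PMEM_FILE_CREATE,
                                            0600, &mapped_len, &is_pmem);
    if (rec == NULL) {
        perror("pmem_map_file");
        return 1;
    }

    /* Update the record in place, then make it durable. */
    rec->lba = 42;
    rec->phys = 1024;
    rec->generation++;

    if (is_pmem)
        pmem_persist(rec, sizeof(*rec));   /* CPU cache flush + store fence */
    else
        pmem_msync(rec, sizeof(*rec));     /* fallback for non-pmem mappings */

    pmem_unmap(rec, mapped_len);
    return 0;
}
```

Because pmem_persist() only needs to flush CPU caches and issue a fence, the update becomes durable without NVRAM, battery backup or super capacitors, which is exactly the cost advantage described above.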
Figure 2: Vexata VX-Cloud architecture
As shown in Figure 2 above, VX-Cloud features a three-stage architecture built from the same DNA as the VX-100, resulting in a software architecture that:
1) Accelerates I/O through powerful FPGA co-processors
2) Distributes data with very low latency over a passive Ethernet backplane
3) Aggregates petabytes of large-capacity NVMe SSDs within a densely packed server (see the I/O sketch below)
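As a loose illustration of the third stage, here is a minimal, hypothetical sketch of issuing one asynchronous read against an NVMe block device with Linux io_uring (liburing). It does not represent VX-OS or VX-Cloud internals; the device path, block size and queue depth are assumptions for illustration only.

```c
/*
 * Hypothetical sketch: one asynchronous 4 KiB read from an NVMe block device
 * using Linux io_uring (liburing). Not VX-OS/VX-Cloud code; the device path
 * and parameters are illustrative assumptions.
 *
 * Build (assuming liburing is installed):  cc nvme_read.c -luring -o nvme_read
 */
#define _GNU_SOURCE             /* for O_DIRECT */
#include <liburing.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define BLOCK_SIZE 4096

int main(void)
{
    struct io_uring ring;
    struct io_uring_sqe *sqe;
    struct io_uring_cqe *cqe;
    void *buf;

    int fd = open("/dev/nvme0n1", O_RDONLY | O_DIRECT);
    if (fd < 0) { perror("open"); return 1; }

    /* O_DIRECT requires a block-aligned buffer. */
    if (posix_memalign(&buf, BLOCK_SIZE, BLOCK_SIZE) != 0) return 1;

    /* Create a submission/completion queue pair with 8 entries. */
    if (io_uring_queue_init(8, &ring, 0) < 0) {
        fprintf(stderr, "io_uring_queue_init failed\n");
        return 1;
    }

    /* Queue a single 4 KiB read at offset 0 and hand it to the kernel. */
    sqe = io_uring_get_sqe(&ring);
    io_uring_prep_read(sqe, fd, buf, BLOCK_SIZE, 0);
    io_uring_submit(&ring);

    /* Block until the completion arrives and report the result. */
    if (io_uring_wait_cqe(&ring, &cqe) == 0) {
        printf("read completed: %d bytes\n", cqe->res);
        io_uring_cqe_seen(&ring, cqe);
    }

    io_uring_queue_exit(&ring);
    free(buf);
    close(fd);
    return 0;
}
```

A production data plane would keep many such requests in flight per queue, but the submit-and-complete pattern shown here is the basic building block for low-latency access to densely packed NVMe media.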
Delivering Low Latency Performance at Cloud Scale
Developed in close collaboration with major server partners, Vexata VX-Cloud is the first solution to serve as the cornerstone for hyper-scale, low-latency data access. It maintains latencies under 200 µs, throughput exceeding 100 GB/s and more than 10 million IOPS, as documented here. VX-Cloud provides cost and performance advantages to Cloud infrastructure providers delivering services built upon database, analytics and machine learning applications.
Figure 3: VX-Cloud capabilities enabled by Intel Optane persistent memory
To learn more about Vexata VX-Cloud, powered by Intel Xeon Scalable processors and Optane DC persistent memory, please visit https://www.vexata.com/product/vx-cloud-software/