AMAX to Showcase GPU POD Reference Architecture at SC20

Simplified AI Infrastructure at Scale

AMAX

AMAX’s HPC and AI Solutions Group announced that it will showcase an all-new GPU POD Reference Architecture at SC20, Nov. 17–19. AMAX’s GPU POD Reference Architecture incorporates best-of-breed compute, networking, storage, power, and cooling to deliver the fastest application performance and meet the demands of evolving AI workloads at scale.

As the compute block of the AMAX GPU POD Reference Architecture, the AceleMax GPU platforms offer single- and dual-socket AMD EPYC™ 7002 CPU options and four or eight NVIDIA® A100 GPUs, delivering up to 10 PetaOPS of AI performance over direct-attach PCI-E 4.0 x16 CPU-to-GPU lanes for the lowest latency and highest bandwidth. These systems also support up to two additional high-performance PCI-E 4.0 expansion slots for cards such as SAS interface adapters and NVIDIA® Mellanox® 200 Gb/s InfiniBand or Ethernet adapters, meeting the demands of AI workloads with the highest bandwidth, lowest latency, and maximum concurrency for full GPU resource utilization.

AMAX’s StorMax all-flash storage solutions feature Excelero NVMesh, an intelligent storage management layer that abstracts the underlying hardware with CPU offload, paired with 200 Gb/s NVMe over Fabrics on InfiniBand via NVIDIA Mellanox ConnectX-6 adapters. The StorMax storage blocks in the GPU POD Reference Architecture deliver best-in-class performance, security, and scalability, maximizing the utilization of NVIDIA A100 GPUs and harnessing the low-latency, high-IOPS/bandwidth benefits of NVMe in a distributed, linearly scalable architecture.

“We’re thrilled AMAX selected Excelero for their GPU POD architecture,” said Sven Breuner, Field CTO at Excelero. “There is no easier, faster or more flexible way to deploy a turnkey GPU computing solution that solves the toughest storage problems in AI: small files, random and concurrent access, and near-zero latency requirements. Our joint solution makes it easy for customers of any size to quickly take advantage of the latest GPU, networking, and storage technologies.”

The AMAX GPU POD delivers a validated turnkey parallel compute solution and provides scalable high-performance shared file access that is ideal for all AI workloads. View AMAX’s GPU POD Reference Architecture to see how fully integrated, ready-to-deploy offerings can simplify and accelerate data center AI deployments. Learn more at our SC20 virtual booth and contact us at info@amax.com for technical consultation.

About AMAX

AMAX is an award-winning global leader in application-tailored cloud, data center, open-architecture platforms, HPC, Deep Learning, and OEM server manufacturing solutions designed for the highest efficiency and optimal performance. Whether you are a Fortune 1000 company seeking significant cost savings through better efficiency for your global data centers, or a software startup seeking an experienced manufacturing partner to design and launch your flagship product, AMAX is your trusted solutions provider, delivering the results you need to meet your specific metrics for success.

Source: AMAX

Categories: Artificial Intelligence and Expert Systems

Tags: AI, AI infrastructure, building blocks, datacenter, deep learning, GPU, HPC, SC20


AMAX
1565 Reliance Way
Fremont, CA 94539
United States