// Versatile compute acceleration for mainstream enterprise servers

NVIDIA A30

Bring accelerated performance to every enterprise workload with the NVIDIA A30 Tensor Core GPU. With NVIDIA Ampere architecture Tensor Cores and Multi-Instance GPU (MIG), the A30 delivers secure speedups across diverse workloads, including AI inference at scale and high-performance computing (HPC) applications.

By combining high memory bandwidth with low power consumption in a PCIe form factor that is optimal for mainstream servers, the A30 enables an elastic data center and delivers maximum value for enterprises.
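As a rough illustration of the MIG capability mentioned above (not part of the original datasheet), the following C sketch uses NVIDIA's NVML library to check whether MIG mode is enabled on GPU 0 and to list any MIG instances that an administrator has already created (for example with nvidia-smi mig). It is a minimal sketch, assuming a host with the NVIDIA driver and NVML headers installed; the calls shown come from the public NVML API.

    #include <nvml.h>
    #include <stdio.h>

    /* Minimal sketch: query MIG mode on GPU 0 and list its MIG instances. */
    int main(void) {
        if (nvmlInit_v2() != NVML_SUCCESS) return 1;

        nvmlDevice_t gpu;
        unsigned int current = 0, pending = 0;
        if (nvmlDeviceGetHandleByIndex_v2(0, &gpu) == NVML_SUCCESS &&
            nvmlDeviceGetMigMode(gpu, &current, &pending) == NVML_SUCCESS) {
            printf("MIG mode: current=%u, pending=%u\n", current, pending);

            unsigned int slots = 0;
            if (current == NVML_DEVICE_MIG_ENABLE &&
                nvmlDeviceGetMaxMigDeviceCount(gpu, &slots) == NVML_SUCCESS) {
                for (unsigned int i = 0; i < slots; ++i) {
                    nvmlDevice_t mig;
                    /* Unoccupied MIG slots return an error and are skipped. */
                    if (nvmlDeviceGetMigDeviceHandleByIndex(gpu, i, &mig) == NVML_SUCCESS) {
                        char name[NVML_DEVICE_NAME_BUFFER_SIZE];
                        if (nvmlDeviceGetName(mig, name, NVML_DEVICE_NAME_BUFFER_SIZE) == NVML_SUCCESS)
                            printf("  MIG instance %u: %s\n", i, name);
                    }
                }
            }
        }
        nvmlShutdown();
        return 0;
    }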

// The Data Center Solution for Modern IT

Powered by the NVIDIA Ampere Architecture

The NVIDIA Ampere architecture is part of the unified NVIDIA EGX platform, incorporating building blocks across hardware, networking, software, libraries, and optimized AI models and applications from the NVIDIA NGC catalog. Representing the most powerful end-to-end AI and HPC platform for data centers, it allows researchers to rapidly deliver real-world results and deploy solutions into production at scale.

NVIDIA A30 features FP64 NVIDIA Ampere architecture Tensor Cores that deliver the biggest leap in HPC performance since the introduction of GPUs. Combined with 24 gigabytes (GB) of GPU memory and 933 gigabytes per second (GB/s) of memory bandwidth, they let researchers rapidly solve double-precision calculations. HPC applications can also leverage TF32 to achieve higher throughput for single-precision, dense matrix-multiply operations.
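To illustrate the TF32 path described above, here is a minimal sketch (not part of the original datasheet) that opts a cuBLAS handle into TF32 Tensor Core math before a standard FP32 matrix multiply. The problem size and uninitialized matrices are placeholders, and the calls come from the public cuBLAS 11+ API; double-precision cublasDgemm calls can likewise be routed onto the FP64 Tensor Cores on Ampere-class GPUs.

    #include <cublas_v2.h>
    #include <cuda_runtime.h>

    /* Minimal sketch: run an FP32 GEMM with TF32 Tensor Core math enabled.
       Matrix contents are left uninitialized; a real application would copy
       data to the device with cudaMemcpy before the call. */
    int main(void) {
        const int n = 4096;                       /* placeholder problem size */
        float *dA, *dB, *dC;
        cudaMalloc((void **)&dA, (size_t)n * n * sizeof(float));
        cudaMalloc((void **)&dB, (size_t)n * n * sizeof(float));
        cudaMalloc((void **)&dC, (size_t)n * n * sizeof(float));

        cublasHandle_t handle;
        cublasCreate(&handle);

        /* Opt in to TF32 math for FP32 routines (cuBLAS 11+, Ampere GPUs). */
        cublasSetMathMode(handle, CUBLAS_TF32_TENSOR_OP_MATH);

        const float alpha = 1.0f, beta = 0.0f;
        /* C = alpha * A * B + beta * C, column-major, no transposes. */
        cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                    n, n, n, &alpha, dA, n, dB, n, &beta, dC, n);
        cudaDeviceSynchronize();

        cublasDestroy(handle);
        cudaFree(dA); cudaFree(dB); cudaFree(dC);
        return 0;
    }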

Accelerate Next-Generation Workloads

  • Deep Learning Training
  • Deep Learning Inference
  • High-Performance Computing
  • High-Performance Data Analytics
  • Enterprise-Ready Utilization
Key Specifications

  • GPU Memory: 24 GB HBM2
  • Single-Precision Performance: 10.3 TFLOPS
  • AI Tensor Performance (FP16): 330 TFLOPS

// Still not sure which GPU is best for you? We are ready to assist you.

NEED A CONSULTATION?