NVIDIA Tesla A100 Specs
Technical specifications
GPU Architecture: NVIDIA Ampere
Double-Precision Performance: FP64: 9.7 TFLOPS | FP64 Tensor Core: 19.5 TFLOPS
Single-Precision Performance: FP32: 19.5 TFLOPS | Tensor Float 32 (TF32): 156 TFLOPS | 312 TFLOPS*
Half-Precision Performance: FP16: 312 TFLOPS | 624 TFLOPS*
Bfloat16: 312 TFLOPS | 624 TFLOPS*
Integer Performance: INT8: 624 TOPS | 1,248 TOPS* | INT4: 1,248 TOPS | 2,496 TOPS*
GPU Memory: 40 GB HBM2
Memory Bandwidth: 1.6 TB/sec
Error-Correcting Code: Yes
Interconnect Interface: PCIe Gen4: 64 GB/sec | Third-generation NVIDIA® NVLink®: 600 GB/sec**
Form Factor: 4/8 SXM GPUs in NVIDIA HGX™ A100
Multi-Instance GPU (MIG): Up to 7 GPU instances
Max Power Consumption: 400W
Delivered Performance for Top Apps: 100%
Thermal Solution: Passive
Compute APIs: CUDA®, DirectCompute, OpenCL™, OpenACC®
* With structural sparsity enabled.
** SXM GPUs via NVIDIA HGX™ A100 server boards; PCIe GPUs via NVLink Bridge for up to 2 GPUs.
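The throughput and bandwidth figures above can be combined into a quick roofline-style estimate: a minimal sketch (using only the numbers listed in this table) of how many FLOPs per byte of HBM2 traffic a kernel needs before each listed peak, rather than the 1.6 TB/sec memory bandwidth, becomes the limit.

```python
# Roofline-style sanity check from the A100 40 GB spec figures above.
# Dense (non-sparse) peaks in TFLOPS, as listed in the table.
PEAKS_TFLOPS = {
    "FP64": 9.7,
    "FP64 Tensor Core": 19.5,
    "FP32": 19.5,
    "TF32 (dense)": 156.0,
    "FP16 (dense)": 312.0,
}
BANDWIDTH_TBS = 1.6  # HBM2 memory bandwidth from the spec table, TB/sec


def min_intensity(peak_tflops: float) -> float:
    """Arithmetic intensity (FLOPs per byte of HBM2 traffic) a kernel
    needs before it is compute-bound at the given peak rather than
    limited by memory bandwidth."""
    return peak_tflops / BANDWIDTH_TBS


for name, peak in PEAKS_TFLOPS.items():
    print(f"{name}: needs >= {min_intensity(peak):.1f} FLOP/byte")
```

The gap is striking: an FP64 kernel is compute-bound above roughly 6 FLOP/byte, while a dense FP16 Tensor Core workload needs almost 200 FLOP/byte of reuse to keep the units fed, which is why the Tensor Core peaks are realistic mainly for matrix-multiply-heavy workloads.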
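The "Up to 7 GPU instances" MIG entry above is configured through `nvidia-smi`. A hypothetical admin session sketch (requires root, an A100, and a MIG-capable driver; the `1g.5gb` profile name applies to the 40 GB card and may differ by driver version):

```shell
# Enable MIG mode on GPU 0 (a reset/reboot may be required to take effect)
nvidia-smi -i 0 -mig 1

# List the GPU instance profiles the driver offers on this card
nvidia-smi mig -lgip

# Create two 1g.5gb GPU instances and their default compute instances
nvidia-smi mig -cgi 1g.5gb,1g.5gb -C

# Confirm the resulting MIG devices are visible
nvidia-smi -L
```

Each MIG instance appears as a separate device with its own memory and compute slice, which is how a single 40 GB A100 is partitioned for up to seven isolated workloads.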