gptkbp:instance_of
|
gptkb:Graphics_Processing_Unit
|
gptkbp:ai
|
gptkb:Tensor_Flow
gptkb:Caffe
gptkb:MXNet
gptkb:Py_Torch
|
gptkbp:aiinference_performance
|
2x faster than previous generation
624 TOPS (INT8)
|
gptkbp:aitraining_performance
|
20x faster than previous generation
|
gptkbp:application
|
gptkb:machine_learning
gptkb:Deep_Learning
high-performance computing
data analytics
AI inference
AI training
|
gptkbp:architecture
|
gptkb:Ampere
|
gptkbp:connects
|
gptkb:NVIDIA_NVLink_3.0
gptkb:PCI_Express_4.0
gptkb:NVIDIA_NVLink
|
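A minimal sketch of checking the interconnects listed above from software, assuming a multi-GPU system with PyTorch built with CUDA support; `can_device_access_peer` only reports whether a direct peer path (NVLink or PCIe) exists between two devices, not which one is used.

```python
# Minimal sketch: check whether two GPUs in a multi-A100 system can reach
# each other over a direct peer path (NVLink or PCIe). Assumes PyTorch with
# CUDA support and at least two visible devices.
import torch

if torch.cuda.device_count() >= 2:
    p2p = torch.cuda.can_device_access_peer(0, 1)
    print(f"GPU 0 -> GPU 1 peer access: {p2p}")
else:
    print("Fewer than two GPUs visible; peer-access check skipped.")
```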
gptkbp:designed_by
|
gptkb:NVIDIA
|
gptkbp:die_size
|
826 mm²
|
gptkbp:features
|
gptkb:Third_Generation_Tensor_Cores
gptkb:NVIDIA_NVSwitch
gptkb:Multi-Instance_GPU_(MIG)
gptkb:NVIDIA_NVLink
|
gptkbp:form_factor
|
gptkb:A100_PCIe
gptkb:A100_SXM4
gptkb:PCIe
SXM4
|
gptkbp:gpuarchitecture
|
gptkb:Ampere
|
gptkbp:has_ability
|
8.0 (CUDA compute capability)
|
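The value 8.0 above is the A100's CUDA compute capability (sm_80). A minimal sketch of confirming it at runtime, assuming PyTorch with CUDA support and the A100 at device index 0:

```python
# Minimal sketch: confirm the CUDA compute capability reported for device 0.
import torch

major, minor = torch.cuda.get_device_capability(0)
print(torch.cuda.get_device_name(0))           # e.g. "NVIDIA A100-SXM4-40GB"
print(f"Compute capability: {major}.{minor}")  # expected 8.0 on an A100
```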
https://www.w3.org/2000/01/rdf-schema#label
|
NVIDIA A100 Tensor Core GPU
|
gptkbp:manufacturer
|
gptkb:NVIDIA
|
gptkbp:manufacturing_process
|
7 nm
|
gptkbp:memory_type
|
gptkb:HBM2
|
gptkbp:multi-instance_gpu
|
gptkb:Yes
|
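A minimal sketch of querying the Multi-Instance GPU (MIG) mode noted above, assuming the nvidia-ml-py (pynvml) bindings and a recent NVIDIA driver are installed; the exact return shape of `nvmlDeviceGetMigMode` may vary between binding versions.

```python
# Minimal sketch: read the current and pending MIG mode of GPU 0 via NVML.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)
current, pending = pynvml.nvmlDeviceGetMigMode(handle)  # 1 = enabled, 0 = disabled
print(f"MIG mode - current: {current}, pending: {pending}")
pynvml.nvmlShutdown()
```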
gptkbp:network
|
gptkb:Yes
|
gptkbp:number_of_cores
|
6912 (CUDA cores)
|
gptkbp:performance
|
9.7 TFLOPS (FP64)
19.5 TFLOPS (FP32)
156 TFLOPS (TF32 Tensor Core)
312 TFLOPS (FP16/BF16 Tensor Core)
20x higher than previous generation
|
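The "20x higher than previous generation" figure above is consistent with NVIDIA's comparison of A100 TF32 Tensor Core throughput with structured sparsity (312 TFLOPS) against V100 FP32 throughput (15.7 TFLOPS); assuming that is the intended comparison, a rough check is:

```latex
\frac{312\ \text{TFLOPS (A100, TF32 + sparsity)}}{15.7\ \text{TFLOPS (V100, FP32)}} \approx 19.9 \approx 20\times
```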
gptkbp:power_consumption
|
400 W
|
gptkbp:provides_support_for
|
gptkb:NVIDIA_Tensor_RT
gptkb:NVIDIA_cu_DNN
gptkb:NVIDIA_CUDA_Toolkit
gptkb:NVIDIA_NGC
|
gptkbp:ram
|
40 GB HBM2
1555 GB/s (memory bandwidth)
|
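A minimal sketch for sanity-checking the 1555 GB/s memory bandwidth figure above, assuming PyTorch with CUDA support and an A100 at device index 0; a device-to-device copy reads and writes each byte once, so the achieved number will land somewhat below the theoretical peak.

```python
# Minimal sketch: rough device-memory bandwidth estimate on GPU 0.
import torch

n_bytes = 2 * 1024**3                       # 2 GiB source tensor
src = torch.empty(n_bytes, dtype=torch.uint8, device="cuda")
dst = torch.empty_like(src)

start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)
dst.copy_(src)                              # warm-up copy
torch.cuda.synchronize()

start.record()
dst.copy_(src)
end.record()
torch.cuda.synchronize()

seconds = start.elapsed_time(end) / 1000.0  # elapsed_time returns milliseconds
gb_moved = 2 * n_bytes / 1e9                # one read plus one write
print(f"Approximate bandwidth: {gb_moved / seconds:.0f} GB/s")
```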
gptkbp:ray_tracing_support
|
No
|
gptkbp:release_date
|
May 2020
|
gptkbp:released_in
|
gptkb:2020
|
gptkbp:resolution
|
7680 x 4320
|
gptkbp:slisupport
|
gptkb:Yes
|
gptkbp:support
|
gptkb:NVIDIA_Tensor_RT
gptkb:NVIDIA_DGX_Systems
gptkb:NVIDIA_HGX_Systems
gptkb:NVIDIA_RAPIDS
gptkb:NVIDIA_Clara
|
gptkbp:supports
|
gptkb:Vulkan
gptkb:Tensor_Cores
gptkb:Direct_X
gptkb:CUDA
gptkb:Open_CL
|
gptkbp:target_market
|
gptkb:cloud_computing
Data centers
Research Institutions
Supercomputing
Enterprise AI
|
gptkbp:tensor_operations
|
gptkb:TF32
FP16
INT8
FP32
BF16
INT4
|
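A minimal sketch of exercising the TF32 tensor-operation path listed above, assuming PyTorch 1.7 or newer built with CUDA on an Ampere-class GPU; the FP16/BF16 paths are typically reached through `torch.autocast` instead.

```python
# Minimal sketch: run a matrix multiply with the TF32 Tensor Core path enabled.
import torch

torch.backends.cuda.matmul.allow_tf32 = True   # TF32 for matmuls
torch.backends.cudnn.allow_tf32 = True         # TF32 for cuDNN convolutions

a = torch.randn(4096, 4096, device="cuda")
b = torch.randn(4096, 4096, device="cuda")
c = a @ b                                      # dispatched as TF32 on Tensor Cores
torch.cuda.synchronize()
print(c.shape)
```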
gptkbp:thermal_design_power
|
400 W
|
gptkbp:transistor_count
|
54 billion
|
gptkbp:virtualization_support
|
gptkb:NVIDIA_v_GPU
gptkb:Yes
|
gptkbp:bfsParent
|
gptkb:NVIDIA_Corporation
gptkb:NVIDIA
|
gptkbp:bfsLayer
|
4
|