gptkbp:instanceOf | gptkb:graphics_card
gptkbp:architecture | gptkb:Ampere
gptkbp:coreCount | 6912
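Note: 6912 is the FP32 CUDA core count, i.e. 108 SMs × 64 FP32 cores per SM on the A100 configuration of the GA100 die; the 432 Tensor Cores listed under gptkbp:Tensor_Cores below follow the same layout at 4 per SM.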
gptkbp:formFactor | gptkb:SXM4_module, PCIe card
https://www.w3.org/2000/01/rdf-schema#label | Nvidia A100 GPU
gptkbp:interface | gptkb:NVLink, gptkb:PCIe_4.0
gptkbp:manufacturer | gptkb:Nvidia
gptkbp:memoryType | gptkb:HBM2 (HBM2e on the 80 GB model)
gptkbp:predecessor | gptkb:Nvidia_V100_GPU
gptkbp:processNode | 7 nm
gptkbp:RAM | 40 GB, 80 GB
gptkbp:releaseDate | May 2020
gptkbp:successor | gptkb:Nvidia_H100_GPU
gptkbp:supports | gptkb:CUDA, gptkb:TensorFlow, gptkb:PyTorch, gptkb:NVLink, gptkb:NVIDIA_NVLink_3.0, gptkb:NVSwitch, gptkb:Multi-Instance_GPU_(MIG), gptkb:PCI_Express_Gen_4, gptkb:ECC_memory, gptkb:NVIDIA_GPUDirect, FP16, FP32, FP64, TF32, INT8, deep learning, scientific computing, virtualization, AI inference, AI model training, Direct Memory Access, multi-GPU scaling
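Several of the entries above (gptkb:CUDA, gptkb:PyTorch, TF32, FP16) are capabilities a program opts into at runtime. A minimal sketch, assuming a host with PyTorch built with CUDA support and an A100 visible as device 0:

```python
# Minimal sketch: query the device and enable two Ampere-era
# precision modes listed above (TF32 and FP16 via autocast).
# Assumes PyTorch with CUDA support and an A100 as device 0.
import torch

props = torch.cuda.get_device_properties(0)
# On an A100 this reports compute capability 8.0 and 108 SMs.
print(props.name,
      f"sm_{props.major}{props.minor}",
      f"{props.total_memory / 2**30:.0f} GiB",
      f"{props.multi_processor_count} SMs")

# TF32 routes FP32 matmuls through Tensor Cores at reduced
# mantissa precision; it must be enabled explicitly in recent PyTorch.
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.allow_tf32 = True

# FP16 is typically exercised through mixed-precision autocast.
a = torch.randn(1024, 1024, device="cuda")
b = torch.randn(1024, 1024, device="cuda")
with torch.autocast(device_type="cuda", dtype=torch.float16):
    c = a @ b  # dispatched to FP16 Tensor Core kernels
```

MIG partitioning and NVLink topology, by contrast, are configured at the system level (e.g. via nvidia-smi) rather than from application code.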
gptkbp:targetMarket | gptkb:artificial_intelligence, gptkb:cloud_service, high-performance computing
gptkbp:TDP | 400 W (SXM4 module; PCIe cards are rated lower)
gptkbp:Tensor_Cores | 432
gptkbp:transistorCount | 54.2 billion
gptkbp:usedIn | gptkb:NVIDIA_DGX_A100, cloud computing, supercomputers
gptkbp:bfsParent | gptkb:Nvidia_NVLink
gptkbp:bfsLayer | 6