gptkbp:instanceOf
|
gptkb:graphics_card
|
gptkbp:architecture
|
gptkb:Ampere
|
gptkbp:formFactor
|
gptkb:SXM4
gptkb:PCIe
|
https://www.w3.org/2000/01/rdf-schema#label
|
NVIDIA A100 GPUs
|
gptkbp:manufacturer
|
gptkb:NVIDIA
|
gptkbp:memoryBandwidth
|
1555 GB/s (40 GB model)
|
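A quick sanity check on that figure (a back-of-envelope sketch, assuming the 40 GB part's published 5120-bit HBM2e interface and an effective per-pin data rate of about 2.43 Gbps, neither of which is stated in this entry):

\[
\frac{5120\ \text{bit}}{8\ \text{bit/B}} \times 2.43\ \text{GT/s} = 640\ \text{B} \times 2.43 \times 10^{9}\ \text{/s} \approx 1555\ \text{GB/s}
\]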
gptkbp:memoryType
|
gptkb:HBM2e
|
gptkbp:partOf
|
gptkb:NVIDIA_Data_Center_GPU_family
|
gptkbp:peakFP64Performance
|
9.7 TFLOPS (FP64)
19.5 TFLOPS (FP64 Tensor Core)
|
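The two FP64 figures correspond to the plain FP64 pipeline and the FP64 Tensor Core path. The lower one can be reproduced from the published Ampere configuration (108 SMs, 32 FP64 cores per SM, ~1.41 GHz boost clock; these are assumptions drawn from NVIDIA's whitepaper, not from this entry):

\[
108 \times 32 \times 2\ \text{FLOP (FMA)} \times 1.41\ \text{GHz} \approx 9.7\ \text{TFLOPS},
\]

with the FP64 Tensor Core path doubling this to 19.5 TFLOPS.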
gptkbp:peakTensorPerformance
|
312 TFLOPS (FP16/BF16 Tensor Core, dense)
|
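The 312 TFLOPS figure follows the same arithmetic, assuming 4 Tensor Cores per SM, each performing 256 FP16 FMAs per clock (again whitepaper assumptions, not data from this entry):

\[
108 \times 4 \times 256 \times 2\ \text{FLOP} \times 1.41\ \text{GHz} \approx 312\ \text{TFLOPS}.
\]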
gptkbp:predecessor
|
gptkb:NVIDIA_V100_GPUs
|
gptkbp:processNode
|
7 nm
|
gptkbp:RAM
|
40 GB
80 GB
|
gptkbp:releaseDate
|
2020
|
gptkbp:successor
|
gptkb:NVIDIA_H100_GPUs
|
gptkbp:supports
|
gptkb:CUDA
gptkb:NVLink
gptkb:PCIe_4.0
gptkb:NVIDIA_Triton_Inference_Server
gptkb:NVIDIA_RAPIDS
gptkb:NVIDIA_TensorRT
gptkb:NVIDIA_cuDNN
gptkb:NVSwitch
gptkb:SR-IOV
gptkb:Secure_Boot
gptkb:ECC_memory
gptkb:Tensor_Cores
gptkb:MIG_(Multi-Instance_GPU)
gptkb:NVIDIA_CUDA_Toolkit
gptkb:NVIDIA_GPU_Cloud_(NGC)
Direct Memory Access
BF16
INT8
FP32
FP64
TF32
Multi-GPU scaling
|
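Most of the features in this list can be verified from software. Below is a minimal CUDA runtime sketch using only standard cudaDeviceProp fields; nothing A100-specific is assumed beyond the fact that Ampere data-center parts report compute capability 8.0:

    // Minimal device query: prints the properties that correspond to
    // entries in the "supports" list above (ECC, bus width, SM count).
    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        int n = 0;
        if (cudaGetDeviceCount(&n) != cudaSuccess || n == 0) {
            std::printf("no CUDA device visible\n");
            return 1;
        }
        for (int d = 0; d < n; ++d) {
            cudaDeviceProp p;
            cudaGetDeviceProperties(&p, d);
            std::printf("device %d: %s\n", d, p.name);
            std::printf("  compute capability: %d.%d (A100 reports 8.0)\n",
                        p.major, p.minor);
            std::printf("  global memory:      %.0f GB\n",
                        p.totalGlobalMem / 1e9);
            std::printf("  ECC enabled:        %s\n",
                        p.ECCEnabled ? "yes" : "no");
            std::printf("  memory bus width:   %d-bit\n", p.memoryBusWidth);
            std::printf("  multiprocessors:    %d\n", p.multiProcessorCount);
        }
        return 0;
    }

Compiled with, e.g., nvcc -arch=sm_80 query.cu -o query, this should report compute capability 8.0 and a 5120-bit memory bus on an A100.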
gptkbp:TDP
|
400 W (SXM4)
250–300 W (PCIe)
|
gptkbp:usedFor
|
gptkb:artificial_intelligence
gptkb:machine_learning
data analytics
high-performance computing
|
gptkbp:usedIn
|
cloud computing
data centers
supercomputers
|
gptkbp:bfsParent
|
gptkb:Delta_supercomputer
|
gptkbp:bfsLayer
|
6
|