gptkbp:instanceOf
|
AI system
|
gptkbp:coreCount
|
128 (2 × 64-core AMD EPYC 7742)
|
gptkbp:CPUCount
|
2
|
gptkbp:formFactor
|
rackmount
|
gptkbp:GPU
|
gptkb:NVIDIA_A100
|
gptkbp:GPUArchitecture
|
gptkb:Ampere
|
gptkbp:GPUmemory
|
40 GB or 80 GB per GPU
|
https://www.w3.org/2000/01/rdf-schema#label
|
DGX A100
|
gptkbp:manufacturer
|
gptkb:NVIDIA
|
gptkbp:marketedAs
|
data centers
research
enterprise AI
|
gptkbp:maxFP16Performance
|
5 petaFLOPS
|
gptkbp:maxFP64Performance
|
156 teraFLOPS
|
gptkbp:maxINT8Performance
|
10 petaOPS
|
gptkbp:network
|
gptkb:8x_200Gb/s_Mellanox_ConnectX-6
|
gptkbp:operatingSystem
|
gptkb:Ubuntu_Linux
|
gptkbp:powerSource
|
6,500 W (maximum)
|
gptkbp:predecessor
|
gptkb:DGX-2
|
gptkbp:processor
|
gptkb:AMD_EPYC_7742
|
gptkbp:productType
|
gptkb:NVIDIA_DGX
|
gptkbp:purpose
|
data analytics
AI inference
AI training
|
gptkbp:rackUnits
|
6U
|
gptkbp:RAM
|
1 TB
|
gptkbp:releaseYear
|
2020
|
gptkbp:storage
|
15 TB NVMe SSD
|
gptkbp:successor
|
gptkb:DGX_H100
|
gptkbp:supportsMultiInstanceGPU
|
yes
|
gptkbp:supportsNVLink
|
yes
|
gptkbp:supportsPCIeVersion
|
PCIe 4.0
|
gptkbp:technology
|
gptkb:CUDA
gptkb:TensorFlow
gptkb:TensorRT
gptkb:cuDNN
gptkb:NVIDIA_GPU_Cloud
gptkb:PyTorch
|
gptkbp:totalGPUMemory
|
320 GB or 640 GB
|
gptkbp:usedBy
|
gptkb:Google
gptkb:Microsoft
gptkb:OpenAI
gptkb:NVIDIA
gptkb:Meta
|
gptkbp:usedFor
|
gptkb:simulation
deep learning
scientific computing
large language models
cloud AI infrastructure
|
gptkbp:bfsParent
|
gptkb:NVIDIA_DGX
|
gptkbp:bfsLayer
|
6
|
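The pipe-delimited property/value layout above can be loaded programmatically. Below is a minimal sketch (the `parse_record` helper and the record-format assumptions are illustrative, not part of any gptkb tooling): it assumes fields alternate between a property line and a value block, separated by `|` lines, and returns a dict mapping each property to its list of values.

```python
# Hypothetical parser for a pipe-delimited KB record like the DGX A100
# entry above. Assumes fields alternate: property, value block, property,
# value block, ... with "|" lines as separators.

def parse_record(text: str) -> dict[str, list[str]]:
    # Split on the "|" separators and trim surrounding whitespace.
    fields = [f.strip() for f in text.split("|")]
    fields = [f for f in fields if f]  # drop empty separator fragments
    record: dict[str, list[str]] = {}
    # Pair up property fields (even indices) with value blocks (odd indices);
    # a value block may span several lines (one value per line).
    for prop, block in zip(fields[0::2], fields[1::2]):
        record[prop] = [line.strip() for line in block.splitlines() if line.strip()]
    return record

sample = """gptkbp:GPU
|
gptkb:NVIDIA_A100
|
gptkbp:rackUnits
|
6U
"""
parsed = parse_record(sample)
print(parsed["gptkbp:rackUnits"])  # -> ['6U']
```

Multi-valued properties such as `gptkbp:technology` or `gptkbp:usedBy` come back as lists with one entry per line of the value block.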