gptkbp:instanceOf | AI supercomputer
gptkbp:countryOfOrigin | gptkb:United_States
gptkbp:CUDAComputeCapability | 8.0
gptkbp:formFactor | rack-mounted
gptkbp:GPU | Nvidia A100 40GB
https://www.w3.org/2000/01/rdf-schema#label | Nvidia DGX A100
gptkbp:manufacturer | gptkb:Nvidia
gptkbp:maxGPUtoGPUBandwidth | 600 GB/s
gptkbp:maximumSpeed | 5 petaFLOPS AI performance
gptkbp:maxSystemMemoryBandwidth | 340 GB/s
gptkbp:network | gptkb:8x_200Gb/s_Mellanox_ConnectX-6
gptkbp:numberOfGPUs | 8
gptkbp:operatingSystem | gptkb:Ubuntu_Linux
gptkbp:powerSource | 6.5 kW (maximum power consumption)
gptkbp:predecessor | gptkb:Nvidia_DGX-2
gptkbp:processor | gptkb:Nvidia_A100_Tensor_Core_GPU, 2x AMD EPYC 7742
gptkbp:RAM | 1 TB DDR4
gptkbp:releaseDate | 2020
gptkbp:storage | 15 TB NVMe SSD
gptkbp:successor | gptkb:Nvidia_DGX_H100
gptkbp:supportsNVLink | yes
gptkbp:supportsPCIeVersion | 4.0
gptkbp:technology | gptkb:Nvidia_GPU_Cloud, gptkb:Nvidia_CUDA, gptkb:Nvidia_TensorRT, gptkb:Nvidia_cuDNN
gptkbp:uses | gptkb:machine_learning, AI research, deep learning
gptkbp:bfsParent | gptkb:Nvidia_DGX_systems
gptkbp:bfsLayer | 7
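Two of the headline figures in the record can be cross-checked with simple arithmetic. A minimal sketch, assuming the per-GPU numbers from Nvidia's published A100 specs (not stated in the record above): 624 TFLOPS FP16 Tensor Core peak with structured sparsity, and 12 third-generation NVLink links at 50 GB/s each:

```python
# Cross-check of two spec figures for the DGX A100.
# Assumed per-GPU inputs (from Nvidia's A100 specs, not the record above):
# 624 TFLOPS FP16 Tensor Core peak with structured sparsity, and
# 12 NVLink 3.0 links at 50 GB/s of bidirectional bandwidth each.

PER_GPU_TFLOPS = 624   # A100 FP16 Tensor Core peak (with sparsity)
NUM_GPUS = 8           # gptkbp:numberOfGPUs
NVLINK_LINKS = 12      # NVLink 3.0 links per A100 (assumption)
GBPS_PER_LINK = 50     # bidirectional GB/s per link (assumption)

total_pflops = PER_GPU_TFLOPS * NUM_GPUS / 1000
nvlink_gbps = NVLINK_LINKS * GBPS_PER_LINK

print(total_pflops)  # 4.992 -> rounds to the "5 petaFLOPS" figure
print(nvlink_gbps)   # 600 -> matches gptkbp:maxGPUtoGPUBandwidth
```

Under these assumptions the aggregate comes out to 4.992 petaFLOPS, consistent with the rounded "5 petaFLOPS AI performance" value, and the per-GPU NVLink total matches the 600 GB/s bandwidth entry.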