gptkbp:instanceOf
|
AI supercomputer
|
gptkbp:connectivity
|
8x 100 Gb/s InfiniBand
|
gptkbp:connects
|
gptkb:NVSwitch
|
gptkbp:countryOfOrigin
|
gptkb:United_States
|
gptkbp:CPU_model
|
gptkb:Intel_Xeon_Platinum_8168
|
gptkbp:designedFor
|
deep learning
|
gptkbp:formFactor
|
rackmount
|
gptkbp:GPU
|
gptkb:Nvidia_Tesla_V100
|
gptkbp:GPU_memory_per_GPU
|
32 GB
|
gptkbp:height
|
10U
|
https://www.w3.org/2000/01/rdf-schema#label
|
Nvidia DGX-2
|
gptkbp:manufacturer
|
gptkb:Nvidia
|
gptkbp:maximum_performance_(FP16)
|
2 petaFLOPS
|
gptkbp:maximum_performance_(FP64)
|
125 teraFLOPS
|
gptkbp:number_of_CPUs
|
2
|
gptkbp:number_of_GPUs
|
16
|
gptkbp:operatingSystem
|
gptkb:Ubuntu_Linux
|
gptkbp:powerConsumption
|
10 kW
|
gptkbp:predecessor
|
gptkb:Nvidia_DGX-1
|
gptkbp:priceRange
|
$399,000
|
gptkbp:primaryUse
|
scientific computing
AI inference
AI model training
|
gptkbp:RAM
|
1.5 TB
|
gptkbp:releaseDate
|
2018
|
gptkbp:storage
|
30 TB NVMe SSD
|
gptkbp:successor
|
gptkb:Nvidia_DGX_A100
|
gptkbp:supports
|
gptkb:TensorFlow
gptkb:MXNet
gptkb:Caffe2
gptkb:Nvidia_CUDA
gptkb:Nvidia_cuDNN
gptkb:Microsoft_Cognitive_Toolkit
gptkb:PyTorch
|
gptkbp:targetMarket
|
enterprise AI research
|
gptkbp:total_GPU_memory
|
512 GB
|
gptkbp:weight
|
~350 kg
|
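The derived figures in the dump above (total GPU memory, aggregate FP64 throughput) can be cross-checked against the per-GPU numbers. A minimal sketch follows; the per-GPU FP64 rate (~7.8 TFLOPS for a Tesla V100) is an assumption taken from Nvidia's published V100 specifications, not a value present in this dump:

```python
# Hypothetical consistency check for the DGX-2 entries above.
dgx2 = {
    "num_gpus": 16,
    "gpu_memory_gb": 32,          # per GPU, from gptkbp:GPU_memory_per_GPU
    "total_gpu_memory_gb": 512,   # from gptkbp:total_GPU_memory
    "fp64_tflops_per_gpu": 7.8,   # assumed V100 double-precision peak
    "fp64_tflops_total": 125,     # from gptkbp:maximum_performance_(FP64)
}

# Total GPU memory is the per-GPU figure times the GPU count: 16 x 32 GB = 512 GB.
assert dgx2["num_gpus"] * dgx2["gpu_memory_gb"] == dgx2["total_gpu_memory_gb"]

# Aggregate FP64 throughput is roughly 16 x ~7.8 TFLOPS = 124.8 ~ 125 TFLOPS.
assert abs(dgx2["num_gpus"] * dgx2["fp64_tflops_per_gpu"]
           - dgx2["fp64_tflops_total"]) < 1.0
```

Both assertions pass, so the listed totals are internally consistent with the per-GPU values.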