Statements (30)
Predicate | Object |
---|---|
gptkbp:instanceOf | AI supercomputer |
gptkbp:announced | 2022 |
gptkbp:formFactor | Rackmount |
gptkbp:GPU_memory_per_GPU | 80 GB HBM3 |
https://www.w3.org/2000/01/rdf-schema#label | NVIDIA DGX H100 |
gptkbp:manufacturer | gptkb:NVIDIA |
gptkbp:marketedAs | data centers |
gptkbp:marketedAs | research institutions |
gptkbp:marketedAs | enterprise AI |
gptkbp:network | gptkb:8x_NVIDIA_ConnectX-7_400Gb/s_InfiniBand/Ethernet |
gptkbp:number_of_CPUs | 2 |
gptkbp:number_of_GPUs | 8 |
gptkbp:operatingSystem | gptkb:Ubuntu_Linux |
gptkbp:powerSource | 10 kW redundant |
gptkbp:processor | gptkb:Intel_Xeon_Platinum_8480C |
gptkbp:processor | gptkb:NVIDIA_H100_Tensor_Core_GPU |
gptkbp:RAM | 2 TB DDR5 |
gptkbp:software_stack | gptkb:NVIDIA_AI_Enterprise |
gptkbp:storage | 30 TB NVMe SSD |
gptkbp:predecessor | gptkb:NVIDIA_DGX_A100 |
gptkbp:supports | gptkb:NVIDIA_NVSwitch |
gptkbp:supports | gptkb:NVIDIA_NVLink |
gptkbp:total_GPU_memory | 640 GB |
gptkbp:uses | AI inference |
gptkbp:uses | AI training |
gptkbp:uses | High-performance computing |
gptkbp:bfsParent | gptkb:NVLink |
gptkbp:bfsParent | gptkb:NVIDIA_DGX_systems |
gptkbp:bfsParent | gptkb:DGX |
gptkbp:bfsLayer | 6 |