Statements (32)
| Predicate | Object |
|---|---|
| gptkbp:instanceOf | gptkb:AI_supercomputer |
| gptkbp:announced | 2022 |
| gptkbp:formFactor | Rackmount |
| gptkbp:GPU_memory_per_GPU | 80 GB HBM2e |
| gptkbp:manufacturer | gptkb:NVIDIA |
| gptkbp:marketedAs | data centers, research institutions, enterprise AI |
| gptkbp:network | gptkb:8x_NVIDIA_ConnectX-7_400Gb/s_InfiniBand/Ethernet |
| gptkbp:number_of_CPUs | 2 |
| gptkbp:number_of_GPUs | 8 |
| gptkbp:operatingSystem | gptkb:Ubuntu_Linux |
| gptkbp:powerSource | 10 kW redundant |
| gptkbp:processor | gptkb:Intel_Xeon_Platinum_8480C, gptkb:NVIDIA_H100_Tensor_Core_GPU |
| gptkbp:RAM | 2 TB DDR5 |
| gptkbp:software_stack | gptkb:NVIDIA_AI_Enterprise |
| gptkbp:storage | 30 TB NVMe SSD |
| gptkbp:predecessor | gptkb:NVIDIA_DGX_A100 |
| gptkbp:supports | gptkb:NVIDIA_NVSwitch, gptkb:NVIDIA_NVLink |
| gptkbp:total_GPU_memory | 640 GB |
| gptkbp:uses | AI inference, AI training, High-performance computing |
| gptkbp:bfsParent | gptkb:DGX_Systems, gptkb:NVLink, gptkb:NVLink_5.0, gptkb:NVIDIA_DGX_systems, gptkb:DGX |
| gptkbp:bfsLayer | 7 |
| https://www.w3.org/2000/01/rdf-schema#label | NVIDIA DGX H100 |
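The statement table above is a set of subject-predicate-object triples about a single entity. A minimal sketch of how those rows could be held in code, assuming the subject IRI `gptkb:NVIDIA_DGX_H100` (inferred from the `rdf-schema#label` row) and using a plain predicate-to-objects map rather than any particular RDF library; the `triples` helper is hypothetical:

```python
# Subject IRI assumed from the rdf-schema#label row in the table.
SUBJECT = "gptkb:NVIDIA_DGX_H100"

# A few rows from the table; multi-valued predicates (e.g. gptkbp:processor)
# map to lists, mirroring the multiple objects per predicate.
statements = {
    "gptkbp:instanceOf": ["gptkb:AI_supercomputer"],
    "gptkbp:announced": ["2022"],
    "gptkbp:number_of_GPUs": ["8"],
    "gptkbp:processor": [
        "gptkb:Intel_Xeon_Platinum_8480C",
        "gptkb:NVIDIA_H100_Tensor_Core_GPU",
    ],
    "https://www.w3.org/2000/01/rdf-schema#label": ["NVIDIA DGX H100"],
}

def triples(subject, predicate_map):
    """Flatten a predicate->objects map into (subject, predicate, object) triples."""
    return [
        (subject, p, o)
        for p, objs in predicate_map.items()
        for o in objs
    ]

for s, p, o in triples(SUBJECT, statements):
    print(f"{s} {p} {o}")
```

Counting every object across all predicates this way is also how the "Statements (32)" total at the top of the page is reached.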