gptkbp:instanceOf | AI supercomputer
gptkbp:countryOfOrigin | gptkb:United_States
gptkbp:designedFor | deep learning; data analytics; large language models
gptkbp:formFactor | rackmount
gptkbp:GPU | gptkb:NVIDIA_H100_Tensor_Core_GPU
gptkbp:GPU_memory | 640 GB HBM3
rdfs:label | NVIDIA DGX H100 system
gptkbp:introduced | 2022
gptkbp:manufacturer | gptkb:NVIDIA
gptkbp:marketedAs | research institutions; cloud service providers; enterprise AI
gptkbp:max_GPU-to-GPU_bandwidth | 900 GB/s
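The 900 GB/s value is the aggregate fourth-generation NVLink bandwidth per H100 GPU; it decomposes as 18 NVLink links at 50 GB/s of bidirectional bandwidth each:

\[
18 \text{ links} \times 50~\mathrm{GB/s} = 900~\mathrm{GB/s}
\]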
gptkbp:microarchitecture | gptkb:NVIDIA_Hopper
gptkbp:network | gptkb:8x_NVIDIA_ConnectX-7_400Gb/s_InfiniBand/Ethernet
gptkbp:notableFeature | integrated system management; liquid cooling option; high-density compute; multi-node scalability
gptkbp:number_of_GPUs | 8
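The 640 GB listed under gptkbp:GPU_memory is consistent with this GPU count: each H100 SXM GPU carries 80 GB of HBM3, so

\[
8 \times 80~\mathrm{GB} = 640~\mathrm{GB\ HBM3}
\]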
gptkbp:operatingSystem | gptkb:Ubuntu_Linux
gptkbp:powerSource | ~10.2 kW maximum; redundant power supplies
gptkbp:processor | gptkb:Intel_Xeon_Platinum_8480C
gptkbp:rack_units | 8U
gptkbp:RAM | 2 TB DDR5
gptkbp:software_stack | gptkb:NVIDIA_AI_Enterprise; gptkb:NVIDIA_Base_Command; NVIDIA NGC containers
gptkbp:storage | 30 TB NVMe SSD
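A plausible decomposition of the 30 TB figure, assuming the standard DGX H100 data-drive layout of eight 3.84 TB U.2 NVMe SSDs (OS drives not counted; the layout is an assumption here):

\[
8 \times 3.84~\mathrm{TB} = 30.72~\mathrm{TB} \approx 30~\mathrm{TB}
\]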
gptkbp:predecessor | gptkb:NVIDIA_DGX_A100
gptkbp:supports | gptkb:Ethernet; gptkb:InfiniBand; gptkb:NVIDIA_NVSwitch; gptkb:NVIDIA_NVLink; gptkb:PCIe_Gen5
gptkbp:targetMarket | government labs; universities; enterprises
gptkbp:uses | high-performance computing; AI inference; AI training
gptkbp:weight | ~130 kg
gptkbp:bfsParent | gptkb:NVLink_4.0
gptkbp:bfsLayer | 7
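A minimal sketch of how an entry like this could be queried programmatically, assuming the triples are serialized as Turtle in a hypothetical file dgx_h100.ttl and that the gptkb:/gptkbp: prefixes resolve to the IRIs shown below (both are assumptions, not confirmed by the source):

```python
# A minimal sketch: querying this entry with rdflib, assuming a Turtle
# serialization and the prefix IRIs below (assumptions, not confirmed
# by the source).
from rdflib import Graph, Namespace

GPTKB = Namespace("https://gptkb.org/entity/")     # assumed entity prefix
GPTKBP = Namespace("https://gptkb.org/property/")  # assumed property prefix

g = Graph()
g.parse("dgx_h100.ttl", format="turtle")  # hypothetical file for this entry

subject = GPTKB["NVIDIA_DGX_H100_system"]

# Multi-valued properties such as gptkbp:designedFor come back as one
# object per triple, matching the semicolon-joined value lists above.
for value in g.objects(subject, GPTKBP["designedFor"]):
    print(value)
```

Each table row above corresponds to one or more subject-predicate-object triples, which is why a single property lookup can yield several values.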