gptkbp:instanceOf
|
AI supercomputer
|
gptkbp:coreCount
|
112 (2 CPUs × 56 cores each)
|
gptkbp:dimensions
|
8U rackmount
|
gptkbp:formFactor
|
Rackmount
|
gptkbp:GPU
|
gptkb:Nvidia_H100_Tensor_Core_GPU
|
gptkbp:GPUmemory
|
640 GB HBM3
|
https://www.w3.org/2000/01/rdf-schema#label
|
Nvidia DGX H100
|
gptkbp:manufacturer
|
gptkb:Nvidia
|
gptkbp:maxFP16Performance
|
16 petaFLOPS (with sparsity)
|
gptkbp:maxFP8Performance
|
32 petaFLOPS (with sparsity)
|
gptkbp:network
|
8x Single-Port ConnectX-7 400Gb/s InfiniBand/Ethernet
|
gptkbp:numberOfGPUs
|
8
|
gptkbp:NVSwitchBandwidth
|
900 GB/s per GPU
|
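The 900 GB/s per-GPU NVSwitch figure is consistent with NVLink 4 on the H100: 18 links per GPU at 50 GB/s bidirectional each (link count and per-link rate taken from the H100 architecture, assumed here as a sanity check):

```python
# NVLink 4 on H100 (assumed): 18 links/GPU, 50 GB/s bidirectional per link.
links_per_gpu = 18
gb_per_s_per_link = 50
total = links_per_gpu * gb_per_s_per_link
print(f"{total} GB/s per GPU")  # 900 GB/s per GPU
```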
gptkbp:operatingSystem
|
gptkb:Ubuntu_Linux
|
gptkbp:powerSource
|
10 kW redundant power supply
|
gptkbp:processor
|
gptkb:Intel_Xeon_Platinum_8480C
|
gptkbp:releaseYear
|
2022
|
gptkbp:storage
|
30 TB NVMe SSD
|
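The 30 TB figure matches the DGX H100's data-drive configuration of eight 3.84 TB U.2 NVMe drives (drive count and per-drive capacity assumed from Nvidia's published system configuration):

```python
# Assumed data-drive configuration: 8 × 3.84 TB U.2 NVMe.
drives = 8
capacity_tb = 3.84
total_tb = drives * capacity_tb
print(f"{total_tb:.2f} TB")  # 30.72 TB, marketed as ~30 TB
```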
gptkbp:predecessor
|
gptkb:Nvidia_DGX_A100
|
gptkbp:supportsNVLink
|
Yes
|
gptkbp:systemMemory
|
2 TB DDR5
|
gptkbp:targetMarket
|
Research institutions
Cloud service providers
Enterprise AI
|
gptkbp:technology
|
gptkb:Nvidia_Base_Command
gptkb:Nvidia_AI_Enterprise
|
gptkbp:uses
|
AI inference
AI training
High-performance computing
|
gptkbp:weight
|
~130 kg
|
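The record above is a pipe-delimited predicate/value listing: each predicate line is followed by a `|` separator, one or more value lines, and a closing `|`. A minimal sketch of parsing such a record into a predicate-to-values map (the grouping convention is assumed from the layout shown here):

```python
def parse_record(text: str) -> dict[str, list[str]]:
    """Parse a pipe-delimited KB record into {predicate: [values]}."""
    # Collect runs of non-separator lines; runs then alternate
    # between predicate names and value lists.
    groups, current = [], []
    for line in text.splitlines():
        line = line.strip()
        if line == "|":
            if current:
                groups.append(current)
                current = []
        elif line:
            current.append(line)
    if current:
        groups.append(current)
    return {pred[0]: vals for pred, vals in zip(groups[0::2], groups[1::2])}

sample = """gptkbp:numberOfGPUs
|
8
|
gptkbp:uses
|
AI inference
AI training
|"""
print(parse_record(sample))
# {'gptkbp:numberOfGPUs': ['8'], 'gptkbp:uses': ['AI inference', 'AI training']}
```

Multi-valued predicates (like `gptkbp:uses` above) simply accumulate all lines between their two separators.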