Statements (36)
| Predicate | Object | 
|---|---|
| gptkbp:instanceOf | gptkb:graphics_card | 
| gptkbp:architecture | gptkb:Hopper | 
| gptkbp:coreCount | 16896 | 
| gptkbp:formFactor | gptkb:SXM | 
| gptkbp:interface | gptkb:SXM5 | 
| gptkbp:manufacturer | gptkb:NVIDIA | 
| gptkbp:memoryBusWidth | 5120-bit | 
| gptkbp:memoryType | gptkb:HBM3 | 
| gptkbp:processNode | gptkb:TSMC_4N | 
| gptkbp:productType | gptkb:NVIDIA_H100 | 
| gptkbp:RAM | 80 GB; 94 GB | 
| gptkbp:releaseYear | 2022 | 
| gptkbp:predecessor | NVIDIA A100 SXM | 
| gptkbp:supports | gptkb:NVLink; gptkb:Multi-Instance_GPU_(MIG); gptkb:TensorFloat-32; gptkb:PCIe_Gen5; BF16; FP16; FP32; FP64; Sparsity | 
| gptkbp:targetMarket | gptkb:cloud_service; AI Inference; AI Training | 
| gptkbp:TDP | 700 W | 
| gptkbp:Tensor_Cores | 528 | 
| gptkbp:transistorCount | 80 billion | 
| gptkbp:bfsParent | gptkb:NVIDIA_Hopper_GPU; gptkb:NVIDIA_Hopper_architecture; gptkb:H100_series; gptkb:Hopper_GPU; gptkb:NVIDIA_Hopper_GPUs | 
| gptkbp:bfsLayer | 8 | 
| https://www.w3.org/2000/01/rdf-schema#label | NVIDIA H100 SXM |
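
The rows above are plain predicate/object pairs; treated as RDF triples they can be built and queried with standard tooling. Below is a minimal Python sketch using rdflib. The namespace URIs for the gptkb:/gptkbp: prefixes and the subject IRI are assumptions for illustration, and only a handful of the 36 statements are included.

```python
# Minimal sketch: a few of the statements above as RDF triples.
# Assumptions: the gptkb/gptkbp namespace URIs and the subject IRI are hypothetical.
from rdflib import Graph, Literal, Namespace, RDFS

GPTKB = Namespace("https://gptkb.org/resource/")    # assumed entity namespace
GPTKBP = Namespace("https://gptkb.org/property/")   # assumed predicate namespace

g = Graph()
g.bind("gptkb", GPTKB)
g.bind("gptkbp", GPTKBP)

h100_sxm = GPTKB["NVIDIA_H100_SXM"]                 # assumed subject IRI

# A subset of the statements from the table.
g.add((h100_sxm, GPTKBP["instanceOf"], GPTKB["graphics_card"]))
g.add((h100_sxm, GPTKBP["architecture"], GPTKB["Hopper"]))
g.add((h100_sxm, GPTKBP["manufacturer"], GPTKB["NVIDIA"]))
g.add((h100_sxm, GPTKBP["coreCount"], Literal(16896)))
g.add((h100_sxm, GPTKBP["TDP"], Literal("700 W")))
g.add((h100_sxm, RDFS.label, Literal("NVIDIA H100 SXM")))

# Print the graph in Turtle syntax.
print(g.serialize(format="turtle"))
```

Running the sketch prints the triples in Turtle, which makes the prefix/local-name split used by the table (gptkb: for entities, gptkbp: for predicates) explicit.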