gptkbp:instanceOf
|
gptkb:graphics_card
|
gptkbp:architecture
|
gptkb:Hopper
|
gptkbp:formFactor
|
gptkb:PCIe
gptkb:SXM
|
gptkbp:hasFeature
|
gptkb:HBM3_Memory
gptkb:Confidential_Computing
gptkb:NVLink_4.0
gptkb:Transformer_Engine
gptkb:PCIe_Gen5
gptkb:DPX_Instructions
|
https://www.w3.org/2000/01/rdf-schema#label
|
NVIDIA H100 GPU
|
gptkbp:interface
|
gptkb:NVLink
gptkb:PCIe_5.0
|
gptkbp:manufacturer
|
gptkb:NVIDIA
|
gptkbp:market
|
gptkb:cloud_service
High Performance Computing
AI Inference
AI Training
|
gptkbp:memoryType
|
gptkb:HBM3
|
gptkbp:peakFP64Performance
|
34 TFLOPS (SXM, vector)
67 TFLOPS (SXM, Tensor Core)
|
gptkbp:peakTensorPerformance
|
~2000 TFLOPS (FP16 Tensor Core, with sparsity)
|
gptkbp:processNode
|
gptkb:TSMC_4N
|
gptkbp:productType
|
gptkb:NVIDIA_Hopper
|
gptkbp:RAM
|
80 GB
94 GB
|
gptkbp:releaseDate
|
2022
|
gptkbp:predecessor
|
gptkb:NVIDIA_A100_GPU
|
gptkbp:supports
|
gptkb:CUDA
gptkb:NVIDIA_AI_Enterprise
gptkb:Multi-Instance_GPU_(MIG)
gptkb:TensorFloat-32
gptkb:NVIDIA_TensorRT
gptkb:NVIDIA_cuDNN
gptkb:NVIDIA_NCCL
gptkb:NVLink_Switch_System
BF16
FP16
INT8
FP32
FP64
Sparsity
|
gptkbp:TDP
|
700 W
|
gptkbp:transistorCount
|
80 billion
|
gptkbp:usedIn
|
gptkb:NVIDIA_HGX_H100
gptkb:NVIDIA_DGX_H100
Cloud Computing
Supercomputers
AI Clusters
|
gptkbp:bfsParent
|
gptkb:NVLink
|
gptkbp:bfsLayer
|
6
|
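The property block above is a simple entity→property→values mapping. A minimal Python sketch (the dict layout and helper name are illustrative, not part of the gptkb schema; only a few properties are reproduced) shows one way to load and query it:

```python
# Minimal sketch: a slice of the gptkb property block above as a
# Python mapping. Multi-valued properties are stored as lists,
# mirroring the pipe-delimited groups in the listing.
h100 = {
    "gptkbp:instanceOf": ["gptkb:graphics_card"],
    "gptkbp:architecture": ["gptkb:Hopper"],
    "gptkbp:formFactor": ["gptkb:PCIe", "gptkb:SXM"],
    "gptkbp:memoryType": ["gptkb:HBM3"],
    "gptkbp:RAM": ["80 GB", "94 GB"],
    "gptkbp:TDP": ["700 W"],
    "gptkbp:transistorCount": ["80 billion"],
    "https://www.w3.org/2000/01/rdf-schema#label": ["NVIDIA H100 GPU"],
}

def values(entity, prop):
    """Return every value for a property, or an empty list if absent."""
    return entity.get(prop, [])

print(values(h100, "gptkbp:formFactor"))  # ['gptkb:PCIe', 'gptkb:SXM']
print(values(h100, "gptkbp:TDP"))         # ['700 W']
```

A real deployment would more likely keep such triples in an RDF store and query them with SPARQL; the dict form is just the smallest faithful model of the listing's structure.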