GPTKB
Nvidia H100
URI: https://gptkb.org/entity/Nvidia_H100
GPTKB entity
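The entity URI is an ordinary HTTPS IRI, so it can be dereferenced like any other web resource. Below is a minimal sketch of fetching it with the Python requests library; the Accept header and the assumption that the endpoint can return a machine-readable RDF serialization (rather than only the HTML page) are not confirmed by this page.

```python
# Minimal sketch: dereference the GPTKB entity URI over HTTP.
# Assumption (not confirmed by this page): the endpoint honours content
# negotiation and may return Turtle or JSON-LD; otherwise it returns HTML.
import requests

ENTITY_URI = "https://gptkb.org/entity/Nvidia_H100"

response = requests.get(
    ENTITY_URI,
    headers={"Accept": "text/turtle, application/ld+json;q=0.9, text/html;q=0.5"},
    timeout=30,
)
response.raise_for_status()

print(response.headers.get("Content-Type"))
print(response.text[:500])  # first few hundred characters of whatever was returned
```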
Statements (53)
(Each predicate below is followed by its object value or values, indented.)

gptkbp:instanceOf
    gptkb:graphics_card
gptkbp:architecture
    gptkb:Hopper
gptkbp:coreCount
    16896
gptkbp:formFactor
    gptkb:PCIe
    gptkb:SXM
https://www.w3.org/2000/01/rdf-schema#label
    Nvidia H100
gptkbp:manufacturer
    gptkb:Nvidia
gptkbp:memoryType
    gptkb:HBM3
gptkbp:NVLink
    Supported
gptkbp:PCIe_Version
    gptkb:PCIe_5.0
gptkbp:peakFP64Performance
    400 TFLOPS
    67 TFLOPS
gptkbp:peakINT8Performance
    1979 TOPS
gptkbp:peakTF32Performance
    989 TFLOPS
gptkbp:processNode
    gptkb:TSMC_4N
gptkbp:RAM
    80 GB
    94 GB
gptkbp:releaseDate
    2022
gptkbp:successor
    gptkb:Nvidia_A100
gptkbp:supports
    gptkb:NVIDIA_AI_Enterprise
    gptkb:Confidential_Computing
    gptkb:Multi-Instance_GPU_(MIG)
    gptkb:NVIDIA_Triton_Inference_Server
    gptkb:NVIDIA_RAPIDS
    gptkb:NVIDIA_TensorRT
    gptkb:NVIDIA_cuDNN
    gptkb:NVSwitch
    gptkb:PCIe_Gen5
    gptkb:NVIDIA_DeepStream
    gptkb:NVIDIA_Magnum_IO
    gptkb:NVIDIA_NCCL
    gptkb:NVLink_Switch_System
    gptkb:Secure_Boot
    gptkb:NVIDIA_CUDA
    Virtualization
    FP16
    INT8
    FP32
    FP64
    ECC Memory
    TF32
    Sparsity
gptkbp:targetMarket
    gptkb:cloud_service
    AI Inference
    AI Training
gptkbp:TDP
    350 W
    700 W
gptkbp:Tensor_Cores
    528
gptkbp:transistorCount
    80 billion
gptkbp:usedIn
    gptkb:Nvidia_DGX_H100
    gptkb:Nvidia_HGX_H100
gptkbp:bfsParent
    gptkb:Nvidia_Hopper
gptkbp:bfsLayer
    6
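The statements above are predicate-object pairs attached to the entity URI, i.e. RDF-style triples. As a minimal sketch, a few of them could be loaded and queried locally with rdflib; the gptkb:/gptkbp: namespace IRIs below are assumptions (the page only shows prefixed names), so they may not match the IRIs GPTKB actually uses.

```python
# Minimal sketch: model a few of the statements above as RDF triples with rdflib.
# Assumption: gptkb:/gptkbp: resolve to https://gptkb.org/entity/ and
# https://gptkb.org/prop/ respectively; the real namespace IRIs may differ.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDFS

GPTKB = Namespace("https://gptkb.org/entity/")   # assumed entity namespace
GPTKBP = Namespace("https://gptkb.org/prop/")    # assumed property namespace

g = Graph()
g.bind("gptkb", GPTKB)
g.bind("gptkbp", GPTKBP)

h100 = GPTKB["Nvidia_H100"]

# A handful of the statements listed on this page.
g.add((h100, RDFS.label, Literal("Nvidia H100")))
g.add((h100, GPTKBP.instanceOf, GPTKB.graphics_card))
g.add((h100, GPTKBP.architecture, GPTKB.Hopper))
g.add((h100, GPTKBP.manufacturer, GPTKB.Nvidia))
g.add((h100, GPTKBP.memoryType, GPTKB.HBM3))
g.add((h100, GPTKBP.transistorCount, Literal("80 billion")))
g.add((h100, GPTKBP.usedIn, GPTKB.Nvidia_DGX_H100))
g.add((h100, GPTKBP.usedIn, GPTKB.Nvidia_HGX_H100))

# Query the small graph: which systems is the H100 used in?
query = """
    PREFIX gptkb: <https://gptkb.org/entity/>
    PREFIX gptkbp: <https://gptkb.org/prop/>
    SELECT ?system WHERE { gptkb:Nvidia_H100 gptkbp:usedIn ?system . }
"""
for row in g.query(query):
    print(row.system)
```

Loading the full set of 53 statements works the same way; multi-valued predicates such as gptkbp:supports simply become several triples sharing the same subject and predicate.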