H100
URI: https://gptkb.org/entity/H100
GPTKB entity
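The entity can be retrieved directly from its URI. Below is a minimal sketch that fetches it over HTTP and asks for an RDF serialization via the Accept header; whether gptkb.org performs content negotiation for this URI, and which formats it offers, is an assumption, so the snippet simply prints whatever content type comes back.

    import urllib.request

    # Entity URI as listed above.
    ENTITY_URI = "https://gptkb.org/entity/H100"

    # Prefer Turtle if the server negotiates content; fall back to HTML otherwise.
    # (RDF content negotiation on gptkb.org is an assumption, not documented here.)
    request = urllib.request.Request(
        ENTITY_URI,
        headers={"Accept": "text/turtle, application/rdf+xml;q=0.8, text/html;q=0.5"},
    )

    with urllib.request.urlopen(request) as response:
        print("Content-Type:", response.headers.get("Content-Type"))
        print(response.read()[:500].decode("utf-8", errors="replace"))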
Statements (53)
Predicate                     Object
gptkbp:instanceOf             gptkb:graphics_card
gptkbp:architecture           gptkb:Hopper
gptkbp:CUDAComputeCapability  9.0
gptkbp:formFactor             PCIe card
                              SXM module
rdfs:label                    H100
gptkbp:interface              gptkb:NVLink
                              gptkb:PCIe_5.0
gptkbp:manufacturer           gptkb:NVIDIA
gptkbp:marketedAs             gptkb:NVIDIA_H100_Tensor_Core_GPU
gptkbp:memoryType             gptkb:HBM3
gptkbp:powerSource            350 W (PCIe)
                              700 W (SXM)
gptkbp:processNode            gptkb:TSMC_4N
gptkbp:RAM                    80 GB
                              94 GB (H100 NVL)
gptkbp:releaseYear            2022
gptkbp:predecessor            gptkb:A100
gptkbp:supports               gptkb:NVIDIA_AI_Enterprise
                              gptkb:NVIDIA_Omniverse
                              gptkb:Confidential_Computing
                              gptkb:Multi-Instance_GPU_(MIG)
                              gptkb:NVIDIA_Triton_Inference_Server
                              gptkb:NVLink_4.0
                              gptkb:TensorFloat-32
                              gptkb:NVIDIA_Base_Command
                              gptkb:NVIDIA_RAPIDS
                              gptkb:NVIDIA_TensorRT
                              gptkb:NVIDIA_cuDNN
                              gptkb:NVSwitch
                              gptkb:PCIe_Gen5
                              gptkb:NVIDIA_Magnum_IO
                              gptkb:NVIDIA_NCCL
                              gptkb:NVLink_Switch_System
                              gptkb:Secure_Boot
                              gptkb:NVIDIA_CUDA
                              Virtualization
                              BF16
                              FP16
                              INT8
                              FP32
                              FP64
                              ECC Memory
                              Sparsity
gptkbp:transistorCount        80 billion
gptkbp:usedIn                 gptkb:NVIDIA_HGX_H100
                              gptkb:NVIDIA_DGX_H100
gptkbp:uses                   AI inference
                              AI training
                              High Performance Computing
gptkbp:bfsParent              gptkb:NVIDIA_GPUs
                              gptkb:英伟达 (NVIDIA)
gptkbp:bfsLayer               6
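For reference, a few of the statements above can be written out as RDF triples. The sketch below uses rdflib to build and serialize them as Turtle; the namespace IRIs behind the gptkb: and gptkbp: prefixes are assumptions, since the page shows only the prefixed names.

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDFS

    # Assumed prefix expansions; the page itself does not declare them.
    GPTKB = Namespace("https://gptkb.org/entity/")
    GPTKBP = Namespace("https://gptkb.org/prop/")

    g = Graph()
    g.bind("gptkb", GPTKB)
    g.bind("gptkbp", GPTKBP)

    h100 = GPTKB["H100"]

    # A handful of the statements listed above, expressed as triples.
    g.add((h100, GPTKBP["instanceOf"], GPTKB["graphics_card"]))
    g.add((h100, GPTKBP["architecture"], GPTKB["Hopper"]))
    g.add((h100, GPTKBP["memoryType"], GPTKB["HBM3"]))
    g.add((h100, GPTKBP["releaseYear"], Literal(2022)))
    g.add((h100, RDFS.label, Literal("H100")))

    print(g.serialize(format="turtle"))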