gptkbp:instanceOf
|
gptkb:graphics_card
|
gptkbp:architecture
|
gptkb:Hopper
|
gptkbp:formFactor
|
gptkb:PCIe
gptkb:SXM
|
gptkbp:hasFeature
|
gptkb:Confidential_Computing
gptkb:NVLink_4.0
gptkb:Transformer_Engine
gptkb:PCIe_Gen5
gptkb:DPX_Instructions
Secure Multi-Tenancy
|
https://www.w3.org/2000/01/rdf-schema#label
|
Nvidia H100 GPU
|
gptkbp:interface
|
gptkb:NVLink
gptkb:PCIe_5.0
|
gptkbp:manufacturer
|
gptkb:Nvidia
|
gptkbp:marketedAs
|
cloud computing
data centers
supercomputers
|
gptkbp:memoryType
|
gptkb:HBM3
|
gptkbp:partOf
|
gptkb:Nvidia_Hopper_series
|
gptkbp:peakFP64Performance
|
34 TFLOPS
67 TFLOPS (Tensor Core)
|
gptkbp:peakTensorPerformance
|
3,958 TFLOPS (FP8, with sparsity)
|
gptkbp:processNode
|
gptkb:TSMC_4N
|
gptkbp:RAM
|
80 GB
94 GB
|
gptkbp:releaseDate
|
2022
|
gptkbp:predecessor
|
gptkb:Nvidia_A100_GPU
|
gptkbp:supports
|
gptkb:NVIDIA_AI_Enterprise
gptkb:Confidential_Computing
gptkb:Multi-Instance_GPU_(MIG)
gptkb:NVIDIA_HPC_SDK
gptkb:TensorFloat-32
gptkb:NVIDIA_TensorRT
gptkb:NVIDIA_cuDNN
gptkb:PCIe_Gen5
gptkb:CUDA_12.x
gptkb:NVLink_Switch_System
BF16
FP16
INT8
FP32
FP64
Sparsity
|
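The numeric formats in the `gptkbp:supports` list above (FP64 down to INT8) differ mainly in storage width. A minimal sketch relating each format to its bit width — the widths are standard IEEE/Nvidia conventions, not values taken from this listing:

```python
# Bit widths of the numeric formats listed under gptkbp:supports.
# (Standard conventions; this mapping is illustrative, not drawn
# from the listing itself.)
FORMAT_BITS = {
    "FP64": 64,  # IEEE double precision
    "FP32": 32,  # IEEE single precision
    "BF16": 16,  # bfloat16: FP32 exponent range, reduced mantissa
    "FP16": 16,  # IEEE half precision
    "INT8": 8,   # 8-bit integer
}

def bytes_per_element(fmt: str) -> float:
    """Return the storage size in bytes of one element of the given format."""
    return FORMAT_BITS[fmt] / 8

# e.g. how many FP16 elements fit in the 80 GB HBM3 configuration
elements = int(80e9 / bytes_per_element("FP16"))
# elements == 40_000_000_000
```

Halving the element width doubles how many values fit in the same 80 GB, which is why the lower-precision formats dominate AI training and inference workloads.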
gptkbp:TDP
|
700 W
|
gptkbp:transistorCount
|
80 billion
|
gptkbp:usedFor
|
AI inference
AI training
high performance computing
|
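The layout used throughout this entry is regular: a property name on one line, a lone `|` delimiter, one or more value lines, then a closing `|`. A minimal sketch of a parser for that layout (the function name and return shape are my own; the `gptkb:`/`gptkbp:` prefixes are kept as-is):

```python
# Parse the pipe-delimited property listing into {property: [values]}.
# Assumes the layout above: property line, "|", value lines, "|", repeated.

def parse_listing(text: str) -> dict[str, list[str]]:
    chunks, current = [], []
    for line in text.splitlines():
        line = line.strip()
        if line == "|":          # delimiter closes the current chunk
            if current:
                chunks.append(current)
                current = []
        elif line:
            current.append(line)
    if current:                  # tolerate a missing trailing "|"
        chunks.append(current)
    # Chunks alternate: [property], [value, ...], [property], [value, ...]
    return {chunks[i][0]: chunks[i + 1] for i in range(0, len(chunks) - 1, 2)}

sample = """gptkbp:manufacturer
|
gptkb:Nvidia
|
gptkbp:RAM
|
80 GB
94 GB
|"""
parsed = parse_listing(sample)
# parsed == {"gptkbp:manufacturer": ["gptkb:Nvidia"],
#            "gptkbp:RAM": ["80 GB", "94 GB"]}
```

Multi-valued properties (such as `gptkbp:RAM` with its 80 GB and 94 GB variants) come out as lists, which matches how the listing itself stacks several values under one predicate.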