gptkbp:instanceOf
|
gptkb:graphics_card
|
gptkbp:architecture
|
gptkb:Hopper
|
gptkbp:formFactor
|
gptkb:PCIe
gptkb:SXM
|
https://www.w3.org/2000/01/rdf-schema#label
|
NVIDIA Tesla H100
|
gptkbp:intendedUse
|
gptkb:cloud_service
High Performance Computing
AI Inference
AI Training
|
gptkbp:interface
|
gptkb:NVLink
gptkb:PCIe_5.0
|
gptkbp:manufacturer
|
gptkb:NVIDIA
|
gptkbp:marketedAs
|
Supercomputing
Cloud Computing
|
gptkbp:memoryBandwidth
|
~3.35 TB/s (SXM)
~2 TB/s (PCIe)
|
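As a sanity check on the bandwidth figures above, peak HBM3 bandwidth is just bus width times per-pin data rate. A minimal Python sketch, assuming the commonly cited SXM5 configuration of a 5120-bit bus at roughly 5.2 Gbit/s per pin; neither parameter appears in the entry itself:

```python
# Sanity check: peak memory bandwidth = bus width (bytes) * per-pin data rate.
# Assumed H100 SXM5 figures (not stated in the entry above):
BUS_WIDTH_BITS = 5120   # five 1024-bit HBM3 stacks (assumption)
DATA_RATE_GBPS = 5.2    # approximate per-pin rate, Gbit/s (assumption)

bandwidth_tbs = (BUS_WIDTH_BITS / 8) * DATA_RATE_GBPS * 1e9 / 1e12
print(f"Peak bandwidth: ~{bandwidth_tbs:.2f} TB/s")  # ~3.33 TB/s
```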
gptkbp:memoryType
|
gptkb:HBM3
|
gptkbp:partOfSeries
|
gptkb:NVIDIA_Tesla
|
gptkbp:peakFP64Performance
|
~34 TFLOPS (vector)
~67 TFLOPS (Tensor Core)
|
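The ~67 TFLOPS figure is the FP64 Tensor Core rate; the vector rate is roughly half of it. A back-of-envelope sketch of where both numbers come from, assuming the SXM5 part's SM count, per-SM FP64 unit count, and boost clock (none of which appear in the entry):

```python
# Back-of-envelope FP64 peak: SMs * FP64 FMA units/SM * 2 FLOPs per FMA * clock.
# Assumed H100 SXM5 figures (not stated in the entry above):
SM_COUNT = 132          # streaming multiprocessors (assumption)
FP64_FMA_PER_SM = 64    # FP64 FMA units per SM (assumption)
BOOST_CLOCK_GHZ = 1.98  # approximate boost clock (assumption)

vector_tflops = SM_COUNT * FP64_FMA_PER_SM * 2 * BOOST_CLOCK_GHZ / 1000
print(f"FP64 vector peak:      ~{vector_tflops:.0f} TFLOPS")      # ~33
print(f"FP64 Tensor Core peak: ~{vector_tflops * 2:.0f} TFLOPS")  # ~67
```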
gptkbp:peakTensorPerformance
|
~2000 TFLOPS (FP8, dense)
|
gptkbp:tdp
|
350W (PCIe)
700W (SXM)
|
gptkbp:processNode
|
gptkb:TSMC_4N
|
gptkbp:RAM
|
80 GB
|
gptkbp:releaseDate
|
2022
|
gptkbp:predecessor
|
gptkb:NVIDIA_A100
|
gptkbp:supports
|
gptkb:NVIDIA_AI_Enterprise
gptkb:Confidential_Computing
gptkb:Multi-Instance_GPU_(MIG)
gptkb:TensorFloat-32
gptkb:NVIDIA_TensorRT
gptkb:NVIDIA_NVLink
gptkb:PCIe_Gen5
gptkb:NVIDIA_Magnum_IO
gptkb:NVIDIA_NCCL
gptkb:NVLink_Switch_System
gptkb:SR-IOV
gptkb:Secure_Boot
gptkb:NVIDIA_CUDA
gptkb:NVIDIA_GPUDirect
Virtualization
BF16
FP16
INT8
FP32
FP64
ECC Memory
Sparsity
|
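Several of the precision formats in the supports list (TF32, BF16, FP16) are reached through ordinary framework switches rather than anything H100-specific. A minimal sketch, assuming a CUDA build of PyTorch is installed; none of these calls come from the entry itself:

```python
# Minimal sketch: exercising the TF32 and BF16 Tensor Core paths via PyTorch.
import torch

if torch.cuda.is_available():
    # TF32 (TensorFloat-32): faster FP32 matmuls on Ampere/Hopper Tensor Cores.
    torch.backends.cuda.matmul.allow_tf32 = True
    torch.backends.cudnn.allow_tf32 = True

    a = torch.randn(1024, 1024, device="cuda")
    b = torch.randn(1024, 1024, device="cuda")

    # BF16: autocast routes the matmul through the bfloat16 Tensor Core path.
    with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
        c = a @ b
    print(c.dtype)  # torch.bfloat16
else:
    print("No CUDA device available; skipping.")
```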
gptkbp:transistorCount
|
80 billion
|
gptkbp:bfsParent
|
gptkb:NVIDIA_NVLink
|
gptkbp:bfsLayer
|
8
|
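Since the entry is a flat set of subject-predicate-object triples, it can be materialized and queried programmatically. A self-contained rdflib sketch rebuilding a few of the triples above; the namespace URIs are illustrative assumptions, not GPTKB's actual ones:

```python
# Self-contained sketch: rebuild a few triples from this entry with rdflib
# and query them back. Namespace URIs are illustrative assumptions.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDFS

GPTKB = Namespace("https://example.org/gptkb/")    # assumed base URI
GPTKBP = Namespace("https://example.org/gptkbp/")  # assumed base URI

h100 = GPTKB["NVIDIA_Tesla_H100"]
g = Graph()
g.add((h100, RDFS.label, Literal("NVIDIA Tesla H100")))
g.add((h100, GPTKBP.architecture, GPTKB.Hopper))
g.add((h100, GPTKBP.memoryType, GPTKB.HBM3))
g.add((h100, GPTKBP.transistorCount, Literal("80 billion")))

# List everything recorded about the H100 entry.
for predicate, obj in g.predicate_objects(subject=h100):
    print(predicate, "->", obj)
```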