gptkbp:instanceOf
|
gptkb:graphics_card
|
gptkbp:application
|
AI inference
AI training
High Performance Computing
|
gptkbp:architecture
|
gptkb:Hopper
|
gptkbp:energyEfficiency
|
improved over A100
|
gptkbp:formFactor
|
gptkb:PCIe
gptkb:SXM
|
https://www.w3.org/2000/01/rdf-schema#label
|
NVIDIA H100 GPUs
|
gptkbp:manufacturer
|
gptkb:NVIDIA
|
gptkbp:market
|
data centers
cloud service providers
enterprise AI
|
gptkbp:memoryBandwidth
|
3.35 TB/s
|
gptkbp:memoryType
|
gptkb:HBM3
|
gptkbp:notableUser
|
gptkb:OpenAI
gptkb:Tesla
gptkb:Google_Cloud
gptkb:NVIDIA_Eos_supercomputer
gptkb:Amazon_Web_Services
gptkb:Meta
gptkb:Microsoft_Azure
|
gptkbp:peakFP64Performance
|
34 TFLOPS
67 TFLOPS (Tensor Core)
|
gptkbp:peakINT8Performance
|
1979 TOPS (3958 TOPS with sparsity)
|
gptkbp:peakTF32Performance
|
494.7 TFLOPS (989 TFLOPS with sparsity)
|
gptkbp:processNode
|
gptkb:TSMC_4N
|
gptkbp:RAM
|
80 GB
|
gptkbp:releaseYear
|
2022
|
gptkbp:predecessor
|
gptkb:NVIDIA_A100_GPUs
|
gptkbp:supports
|
gptkb:CUDA
gptkb:NVLink
gptkb:Confidential_Computing
gptkb:Multi-Instance_GPU_(MIG)
gptkb:Transformer_Engine
gptkb:NVSwitch
gptkb:PCIe_Gen5
gptkb:NVLink_Switch_System
gptkb:Tensor_Cores
FP16
INT8
FP32
FP64
TF32
|
gptkbp:transistorCount
|
80 billion
|
gptkbp:usedIn
|
gptkb:NVIDIA_DGX_H100
cloud computing
supercomputers
|
gptkbp:bfsParent
|
gptkb:JUPITER_supercomputer
gptkb:NVIDIA_NVLink_Switch_System
gptkb:NVIDIA_DGX_Cloud
gptkb:NVIDIA_A100_GPUs
|
gptkbp:bfsLayer
|
7
|
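The record above uses a simple pipe-delimited layout: a property name, an opening `|`, one or more value lines, and a closing `|`. A minimal Python sketch of how such a block could be parsed into a property-to-values map (the parsing rules here are an assumption inferred from the layout, not a documented serialization format):

```python
def parse_record(text: str) -> dict[str, list[str]]:
    """Parse pipe-delimited property blocks: name, '|', values..., '|'.

    Assumed layout (inferred from the record above, not a documented spec):
    each property name is followed by a lone '|', then its value lines,
    then a closing lone '|'.
    """
    props: dict[str, list[str]] = {}
    lines = [ln.strip() for ln in text.splitlines() if ln.strip()]
    i = 0
    while i < len(lines):
        name = lines[i]
        if i + 1 >= len(lines) or lines[i + 1] != "|":
            raise ValueError(f"expected opening '|' after property {name!r}")
        j = i + 2
        values: list[str] = []
        while j < len(lines) and lines[j] != "|":
            values.append(lines[j])
            j += 1
        props[name] = values
        i = j + 1  # skip past the closing '|'
    return props


sample = """gptkbp:manufacturer
|
gptkb:NVIDIA
|
gptkbp:releaseYear
|
2022
|"""

record = parse_record(sample)
print(record["gptkbp:manufacturer"])  # ['gptkb:NVIDIA']
```

Multi-valued properties (such as `gptkbp:supports` above) simply yield longer value lists under this scheme.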