gptkbp:instance_of
|
gptkb:Graphics_Processing_Unit
|
gptkbp:ai
|
gptkb:Generative_Adversarial_Networks
gptkb:neural_networks
gptkb:Recurrent_Neural_Networks
gptkb:Transformer_Models
Reinforcement Learning Models
Up to 9x AI training speedup over A100 (NVIDIA-published figure)
|
gptkbp:architecture
|
gptkb:Hopper
|
gptkbp:compatibility
|
gptkb:Kubernetes
gptkb:VMware
gptkb:Linux
gptkb:Docker
gptkb:Windows
|
gptkbp:form_factor
|
gptkb:PCIe
SXM
|
gptkbp:has_ability
|
9.0 (CUDA compute capability)
|
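The `has_ability` value above appears to be a CUDA compute capability; the Hopper-based H100 reports 9.0 (8.0 is Ampere/A100). A minimal sketch, assuming the value is indeed a compute capability; the lookup table below is illustrative, with values taken from NVIDIA's published architecture figures:

```python
# Illustrative lookup of CUDA compute capability by architecture.
# Values are NVIDIA's published figures; the dict itself is hypothetical.
COMPUTE_CAPABILITY = {
    "Ampere (A100)": 8.0,
    "Hopper (H100)": 9.0,
}

def compute_capability(arch: str) -> float:
    """Return the CUDA compute capability for a known architecture."""
    return COMPUTE_CAPABILITY[arch]

print(compute_capability("Hopper (H100)"))  # 9.0
```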
https://www.w3.org/2000/01/rdf-schema#label
|
NVIDIA H100 GPU
|
gptkbp:manufacturer
|
gptkb:NVIDIA
|
gptkbp:manufacturing_process
|
4 nm (TSMC 4N)
|
gptkbp:network
|
gptkb:Yes
|
gptkbp:number_of_cores
|
528 (4th-gen Tensor Cores, SXM5)
16896 (CUDA cores, SXM5)
|
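The two core counts above are consistent with Hopper's per-SM layout. A back-of-envelope check, assuming the published SXM5 configuration of 132 SMs with 128 FP32 CUDA cores and 4 Tensor Cores per SM:

```python
# Sanity-check of the H100 SXM5 core counts from the per-SM layout.
SM_COUNT_SXM5 = 132          # streaming multiprocessors (SXM5 variant)
FP32_CORES_PER_SM = 128      # CUDA cores per SM on Hopper
TENSOR_CORES_PER_SM = 4      # 4th-gen Tensor Cores per SM

cuda_cores = SM_COUNT_SXM5 * FP32_CORES_PER_SM      # 16896
tensor_cores = SM_COUNT_SXM5 * TENSOR_CORES_PER_SM  # 528
print(cuda_cores, tensor_cores)  # 16896 528
```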
gptkbp:nvswitch
|
gptkb:Yes
|
gptkbp:pciexpress_version
|
5.0
|
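For context on the PCIe 5.0 entry: a x16 Gen 5 link gives roughly 63 GB/s per direction. A sketch of that arithmetic, using the spec's 32 GT/s signaling rate and 128b/130b line encoding:

```python
# Rough per-direction bandwidth of a PCIe 5.0 x16 link.
GT_PER_S = 32          # giga-transfers per second per lane (PCIe 5.0)
LANES = 16
ENCODING = 128 / 130   # usable payload fraction after line encoding

gbytes_per_s = GT_PER_S * LANES * ENCODING / 8
print(round(gbytes_per_s, 1))  # ~63.0 GB/s per direction
```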
gptkbp:performance
|
34 TFLOPS (FP64, SXM5)
67 TFLOPS (FP32, SXM5)
|
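The FP32 figure follows from the core count and clock: peak FP32 = cores × 2 FLOPs (one fused multiply-add) per cycle × clock. A sketch, assuming the commonly cited ~1.98 GHz SXM5 boost clock (an approximation, not an official spec lookup):

```python
# Peak FP32 throughput sketch: cores x 2 FLOPs (FMA) x clock.
CUDA_CORES = 16896            # H100 SXM5
BOOST_CLOCK_HZ = 1.98e9       # assumed ~1.98 GHz boost clock
FLOPS_PER_CORE_PER_CYCLE = 2  # one fused multiply-add per cycle

tflops_fp32 = CUDA_CORES * FLOPS_PER_CORE_PER_CYCLE * BOOST_CLOCK_HZ / 1e12
print(round(tflops_fp32, 1))  # ~66.9 TFLOPS
```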
gptkbp:power_connector
|
16-pin PCIe Gen 5 (12VHPWR)
|
gptkbp:price
|
Varies by configuration
|
gptkbp:ram
|
80 GB HBM3 (SXM5; HBM2e on PCIe)
2 TB/s memory bandwidth (PCIe); 3.35 TB/s (SXM5)
|
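The bandwidth figures follow from the 5120-bit HBM bus (five active 1024-bit stacks) and the per-pin data rate. A sketch; the pin rates below are approximations chosen to reproduce the published numbers, not values read from a datasheet:

```python
# Memory bandwidth sketch: bus width (bits) x per-pin rate (Gb/s) / 8.
def mem_bandwidth_tb_s(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Approximate memory bandwidth in TB/s."""
    return bus_width_bits * pin_rate_gbps / 8 / 1000

print(round(mem_bandwidth_tb_s(5120, 5.2), 2))  # SXM5 HBM3: ~3.33 TB/s
print(round(mem_bandwidth_tb_s(5120, 3.2), 2))  # PCIe HBM2e: ~2.05 TB/s
```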
gptkbp:release_date
|
gptkb:2022
|
gptkbp:successor
|
gptkb:NVIDIA_H200_GPU
|
gptkbp:support
|
gptkb:NVIDIA_CUDA
gptkb:NVIDIA_Tensor_RT
gptkb:NVIDIA_cu_DNN
gptkb:NVIDIA_Triton_Inference_Server
gptkb:NVIDIA_AI_Enterprise
|
gptkbp:sxmversion
|
5 (SXM5)
|
gptkbp:target_market
|
Enterprises
Research Institutions
Data Centers
Cloud Providers
|
gptkbp:tdp
|
350 W (PCIe); up to 700 W (SXM5)
|
gptkbp:use_case
|
gptkb:Deep_Learning
gptkb:Data_Analytics
High-Performance Computing
AI Training
|
gptkbp:virtualization_support
|
gptkb:Yes
|
gptkbp:bfsParent
|
gptkb:Tensor_Cores
|
gptkbp:bfsLayer
|
5
|