gptkbp:instance_of
|
gptkb:Graphics_Processing_Unit
|
gptkbp:bfsLayer
|
4
|
gptkbp:bfsParent
|
gptkb:Tensor_Cores
|
gptkbp:architectural_style
|
gptkb:Hopper
|
gptkbp:connects
|
gptkb:NVLink
|
gptkbp:form_factor
|
SXM
PCIe
|
gptkbp:gpu
|
PCIe 5.0
|
gptkbp:has_ability
|
8.0
|
https://www.w3.org/2000/01/rdf-schema#label
|
NVIDIA H100 GPU
|
gptkbp:intelligence
|
gptkb:Generative_Adversarial_Networks
gptkb:Convolutional_Neural_Networks
gptkb:Recurrent_Neural_Networks
gptkb:Transformer_Models
Reinforcement Learning Models
Up to 1.5x over A100
|
gptkbp:is_compatible_with
|
gptkb:lake
gptkb:fortification
gptkb:operating_system
VMware
|
gptkbp:latest_version
|
4.0
|
gptkbp:manufacturer
|
gptkb:NVIDIA
|
gptkbp:network
|
gptkb:NVSwitch
|
gptkbp:number_of_cores
|
528 (Tensor Cores)
16896 (CUDA cores)
|
gptkbp:performance
|
30 TFLOPS (FP64)
60 TFLOPS (FP32)
|
gptkbp:power_output
|
8-pin
|
gptkbp:price
|
Varies by configuration
|
gptkbp:produced_by
|
TSMC 4N (4 nm process)
|
gptkbp:ram
|
80 GB HBM3
2 TB/s
|
gptkbp:release_date
|
gptkb:2022
|
gptkbp:successor
|
gptkb:NVIDIA_H200_GPU
|
gptkbp:supports
|
gptkb:NVIDIA_Tensor_RT
gptkb:NVIDIA_cu_DNN
gptkb:NVIDIA_Triton_Inference_Server
NVIDIA CUDA
NVIDIA AI Enterprise
|
gptkbp:target_market
|
Enterprises
Research Institutions
Data Centers
Cloud Providers
|
gptkbp:tdp
|
300-350 W (PCIe); up to 700 W (SXM)
|
gptkbp:use_case
|
gptkb:Company
gptkb:Deep_Learning
High-Performance Computing
AI Training
|
gptkbp:virtualization_support
|
NVIDIA vGPU
Multi-Instance GPU (MIG)
|
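The fields above are RDF-style triples, as the rdfs:label predicate suggests. The sketch below is illustrative only: it loads a handful of these statements into Python's rdflib and queries the label and successor back out. The gptkb/gptkbp namespace URIs are assumptions invented for the example, since the knowledge base's real URIs are not given here.

from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDFS

# Assumed placeholder namespaces; the real gptkb URIs may differ.
GPTKB = Namespace("https://gptkb.example.org/entity/")
GPTKBP = Namespace("https://gptkb.example.org/property/")

g = Graph()
h100 = GPTKB["NVIDIA_H100_GPU"]

# A few of the statements from this entry, expressed as (subject, predicate, object).
g.add((h100, RDFS.label, Literal("NVIDIA H100 GPU")))
g.add((h100, GPTKBP["architectural_style"], GPTKB["Hopper"]))
g.add((h100, GPTKBP["manufacturer"], GPTKB["NVIDIA"]))
g.add((h100, GPTKBP["release_date"], Literal("2022")))
g.add((h100, GPTKBP["successor"], GPTKB["NVIDIA_H200_GPU"]))

# Read the entry back with a SPARQL query.
query = """
    SELECT ?label ?successor WHERE {
        ?gpu rdfs:label ?label .
        ?gpu gptkbp:successor ?successor .
    }
"""
for row in g.query(query, initNs={"rdfs": RDFS, "gptkbp": GPTKBP}):
    print(row.label, row.successor)

Running this prints the label "NVIDIA H100 GPU" alongside the URI of the successor entity.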