gptkbp:instance_of
|
gptkb:Graphics_Processing_Unit
|
gptkbp:ai
|
gptkb:DALL-E
gptkb:GPT-3
gptkb:BERT
gptkb:Stable_Diffusion
gptkb:CLIP
|
gptkbp:application
|
computer vision
natural language processing
reinforcement learning
graph analytics
recommendation systems
|
gptkbp:architecture
|
gptkb:Hopper
|
gptkbp:compatibility
|
gptkb:NVIDIA_Triton_Inference_Server
gptkb:Tensor_RT
gptkb:cu_DNN
gptkb:CUDA
|
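The CUDA compatibility above has a toolkit-version floor: Hopper (sm_90) targets require CUDA 11.8 or newer, per NVIDIA's release notes. A minimal sketch of that check, assuming a version string like "11.8" or "12.2" (the helper name is hypothetical, not an NVIDIA API):

```python
# Minimal sketch: Hopper (sm_90) kernels require CUDA toolkit >= 11.8.
# supports_hopper is a hypothetical helper, not part of any NVIDIA library.
def supports_hopper(toolkit_version: str) -> bool:
    """Return True if a CUDA toolkit version string can target sm_90."""
    major, minor = (int(p) for p in toolkit_version.split(".")[:2])
    return (major, minor) >= (11, 8)

print(supports_hopper("11.8"))  # True
print(supports_hopper("11.4"))  # False
print(supports_hopper("12.2"))  # True
```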
gptkbp:connects
|
gptkb:NVLink
|
gptkbp:deep_learning_libraries
|
gptkb:NVIDIA_DALI
gptkb:NVIDIA_Jarvis
gptkb:NVIDIA_NGC
gptkb:NVIDIA_RAPIDS
gptkb:NVIDIA_Clara
|
gptkbp:ecc
|
gptkb:Yes
|
gptkbp:form_factor
|
gptkb:PCIe
SXM
|
gptkbp:has_ability
|
9.0 (CUDA compute capability)
|
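For context on the compute-capability value: 8.9 belongs to the Ada Lovelace generation, while the H100's Hopper architecture reports 9.0. A small reference table, assuming the flagship data-center part for each generation:

```python
# Hedged reference table: NVIDIA data-center architecture -> compute capability.
# Values are from NVIDIA's CUDA documentation; consumer parts can differ
# (e.g. consumer Ampere is 8.6 while the A100 is 8.0).
COMPUTE_CAPABILITY = {
    "Volta": "7.0",         # V100
    "Turing": "7.5",
    "Ampere": "8.0",        # A100
    "Ada Lovelace": "8.9",  # L40 / RTX 40-series
    "Hopper": "9.0",        # H100
}

print(COMPUTE_CAPABILITY["Hopper"])  # 9.0
```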
https://www.w3.org/2000/01/rdf-schema#label
|
NVIDIA H100 Tensor Core GPU
|
gptkbp:is_a_framework_for
|
gptkb:Tensor_Flow
gptkb:Caffe
gptkb:Chainer
gptkb:MXNet
gptkb:Py_Torch
|
gptkbp:manufacturer
|
gptkb:NVIDIA
|
gptkbp:manufacturing_process
|
4nm
|
gptkbp:multi-instance_gpu
|
gptkb:Yes
|
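The Multi-Instance GPU (MIG) support above lets one H100 be split into up to 7 isolated instances. A sketch of the partitioning arithmetic, assuming the 80 GB card and the 1g.10gb profile from NVIDIA's MIG documentation (memory is carved into 8 slices, 7 of which are usable):

```python
# MIG partitioning arithmetic for an 80 GB H100 (assumptions: the
# 7-instance limit and 8-way memory slicing are from NVIDIA's MIG docs).
TOTAL_MEMORY_GB = 80
MEMORY_SLICES = 8            # memory is divided into 8 equal slices
MAX_INSTANCES = 7            # at most 7 of those back usable GPU instances

mem_per_instance = TOTAL_MEMORY_GB // MEMORY_SLICES
print(mem_per_instance)      # 10 GB, matching the 1g.10gb profile name
```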
gptkbp:number_of_cores
|
18432
|
gptkbp:nvlink_version
|
4.0
|
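The NVLink 4.0 figure above implies the H100's headline interconnect bandwidth. A back-of-envelope check, assuming the 18 links at 50 GB/s (bidirectional) each quoted in NVIDIA's Hopper materials:

```python
# Back-of-envelope NVLink 4 bandwidth for H100 (assumed figures from
# public Hopper specs: 18 links, 50 GB/s bidirectional per link).
links = 18
gb_per_link = 50
total_gb_per_s = links * gb_per_link
print(total_gb_per_s)  # 900 GB/s total GPU-to-GPU bandwidth
```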
gptkbp:nvswitch
|
gptkb:Yes
|
gptkbp:pciexpress_version
|
5.0
|
gptkbp:performance
|
up to 30 teraflops (FP64)
up to 60 teraflops (FP32)
|
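The FP32 figure can be sanity-checked from the core count: the SXM5 H100 exposes 16896 CUDA cores (the full GH100 die has 18432), each retiring one fused multiply-add per cycle. A rough calculation, assuming a ~1.98 GHz boost clock from public specs:

```python
# Rough check of the FP32 throughput figure (assumptions: 16896 active
# CUDA cores on the SXM5 part, ~1.98 GHz boost clock, 1 FMA/cycle/core).
cores = 16896
flops_per_cycle = 2          # a fused multiply-add counts as 2 FLOPs
boost_clock_hz = 1.98e9
tflops = cores * flops_per_cycle * boost_clock_hz / 1e12
print(round(tflops, 1))      # ~67 TFLOPS FP32, in line with "up to 60+"
```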
gptkbp:power_consumption
|
350 watts (PCIe); up to 700 watts (SXM)
|
gptkbp:ram
|
80 GB HBM3
2 TB/s memory bandwidth
|
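The 2 TB/s bandwidth figure matches the PCIe card's HBM2e configuration; the SXM part's HBM3 is higher (about 3.35 TB/s). A sketch of where 2 TB/s comes from, assuming the 5120-bit memory bus and ~3.2 GT/s per-pin rate from public specs:

```python
# Where the "2 TB/s" figure comes from (assumptions: 5120-bit bus,
# 3.2 GT/s per pin, as on the HBM2e-equipped PCIe H100).
bus_width_bits = 5120
data_rate_gts = 3.2
gb_per_s = bus_width_bits / 8 * data_rate_gts
print(gb_per_s)  # 2048.0 GB/s, i.e. ~2 TB/s
```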
gptkbp:release_date
|
gptkb:2022
|
gptkbp:support
|
gptkb:NVIDIA_AI_Enterprise
|
gptkbp:target_market
|
data centers
|
gptkbp:use_case
|
deep learning
high-performance computing
AI training
|
gptkbp:virtualization_support
|
gptkb:Yes
|
gptkbp:bfsParent
|
gptkb:NVIDIA_Morpheus
gptkb:NVLink
|
gptkbp:bfsLayer
|
5
|