Statements (32)
Predicate | Object |
---|---|
gptkbp:instance_of | gptkb:Graphics_Processing_Unit |
gptkbp:ai | Transformer models |
gptkbp:architecture | gptkb:Hopper |
gptkbp:compatibility | gptkb:Tensor_RT, gptkb:cu_DNN, gptkb:NVIDIA_AI_Enterprise, gptkb:CUDA |
gptkbp:form_factor | gptkb:PCIe |
https://www.w3.org/2000/01/rdf-schema#label | H100 Tensor Core GPU |
gptkbp:interface | gptkb:PCIe_5.0 |
gptkbp:manufacturer | gptkb:NVIDIA |
gptkbp:network | gptkb:Yes |
gptkbp:number_of_cores | gptkb:Yes, 18432 |
gptkbp:nvswitch | gptkb:Yes |
gptkbp:performance | up to 120 TFLOPS, up to 240 TOPS, up to 30 TFLOPS, up to 60 TFLOPS |
gptkbp:power_consumption | 300 W |
gptkbp:ram | 80 GB HBM3 |
gptkbp:release_date | gptkb:2022 |
gptkbp:support | Multi-instance GPU (MIG) |
gptkbp:target_market | Data centers, Enterprises, Research institutions, Cloud providers |
gptkbp:use_case | deep learning, high-performance computing, AI training |
gptkbp:bfsParent | gptkb:NVIDIA |
gptkbp:bfsLayer | 4 |
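
The statements above are predicate-object pairs about a single subject (the H100 Tensor Core GPU), so they can be loaded as RDF triples and queried programmatically. Below is a minimal sketch using Python's rdflib; the base IRIs for the gptkb:/gptkbp: prefixes and the subject identifier are assumptions, since this page does not state them.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDFS

# Assumed base IRIs for the gptkb:/gptkbp: prefixes; the page does not state them.
GPTKB = Namespace("https://gptkb.org/entity/")
GPTKBP = Namespace("https://gptkb.org/property/")

g = Graph()
g.bind("gptkb", GPTKB)
g.bind("gptkbp", GPTKBP)

# Hypothetical subject IRI for this entry.
h100 = GPTKB["H100_Tensor_Core_GPU"]

# A few of the statements from the table, expressed as triples.
g.add((h100, GPTKBP.instance_of, GPTKB.Graphics_Processing_Unit))
g.add((h100, GPTKBP.architecture, GPTKB.Hopper))
g.add((h100, GPTKBP.manufacturer, GPTKB.NVIDIA))
g.add((h100, GPTKBP.ram, Literal("80 GB HBM3")))
g.add((h100, RDFS.label, Literal("H100 Tensor Core GPU")))

# Query the rdfs:label back out with SPARQL.
for row in g.query(
    "SELECT ?label WHERE { ?s rdfs:label ?label }",
    initNs={"rdfs": RDFS},
):
    print(row.label)  # -> H100 Tensor Core GPU
```

Calling `g.serialize(format="turtle")` on the same graph would render these statements in Turtle syntax, which mirrors the predicate/object layout of the table.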