Statements (56)
Predicate | Object |
---|---|
gptkbp:instance_of | gptkb:Graphics_Processing_Unit |
gptkbp:bfsLayer | 3 |
gptkbp:bfsParent | gptkb:DJ |
gptkbp:architectural_style | gptkb:Hopper |
gptkbp:events | GTC 2022 |
gptkbp:form_factor | SXM; PCIe |
gptkbp:gpu | 5.0 |
https://www.w3.org/2000/01/rdf-schema#label | NVIDIA H100 |
gptkbp:intelligence | 1,000 TFLOPS |
gptkbp:key | Low Latency; Enhanced Security; High Throughput; Scalable Architecture; Dynamic Memory Management; Support for Kubernetes; Support for ONNX; Support for Vulkan; Support for Virtualization; Support for CUDA; Support for Cloud Computing; Support for AI Workloads; Support for HPC Workloads; Support for Containerization; Support for DirectX; Support for OpenCL; Multi-instance GPU; Support for OpenGL; Support for TensorFlow; Support for AI Frameworks; Support for Caffe; Support for Large Language Models; Support for MXNet; Support for PyTorch; Support for TensorRT; Support for Transformer Models; Support for cuDNN |
gptkbp:manufacturer | gptkb:DJ |
gptkbp:network | gptkb:battle |
gptkbp:number_of_cores | 528; 16864 |
gptkbp:performance | 30 TFLOPS; 60 TFLOPS |
gptkbp:power_consumption | 300 W |
gptkbp:predecessor | gptkb:NVIDIA_A100 |
gptkbp:produced_by | 4 nm |
gptkbp:ram | 80 GB; 2 TB/s HBM3 |
gptkbp:release_date | gptkb:2022 |
gptkbp:successor | gptkb:NVIDIA_H200 |
gptkbp:target_market | Data Centers |
gptkbp:use_case | gptkb:software_framework; gptkb:Deep_Learning; High-Performance Computing; AI Training |
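
The statements above are predicate/object pairs about the entity NVIDIA H100, in the style of RDF triples. The following is a minimal sketch, assuming rdflib and placeholder namespace IRIs for the `gptkb:`/`gptkbp:` prefixes (the real IRIs are not given in this section), of how a few of these statements could be represented and queried as triples:

```python
# Sketch only: the namespace IRIs below are assumptions, not the knowledge base's
# actual prefix definitions. Substitute the real IRIs for gptkb: and gptkbp:.
from rdflib import Graph, Literal, Namespace, RDFS

GPTKB = Namespace("https://example.org/gptkb/")    # assumed IRI for gptkb:
GPTKBP = Namespace("https://example.org/gptkbp/")  # assumed IRI for gptkbp:

g = Graph()
g.bind("gptkb", GPTKB)
g.bind("gptkbp", GPTKBP)

h100 = GPTKB["NVIDIA_H100"]

# A few statements from the table, expressed as (subject, predicate, object) triples.
g.add((h100, GPTKBP.instance_of, GPTKB.Graphics_Processing_Unit))
g.add((h100, RDFS.label, Literal("NVIDIA H100")))       # rdf-schema#label row
g.add((h100, GPTKBP.architectural_style, GPTKB.Hopper))
g.add((h100, GPTKBP.predecessor, GPTKB.NVIDIA_A100))
g.add((h100, GPTKBP.successor, GPTKB.NVIDIA_H200))
g.add((h100, GPTKBP.ram, Literal("80 GB")))
g.add((h100, GPTKBP.ram, Literal("2 TB/s HBM3")))

# Multi-valued predicates (e.g. gptkbp:ram, gptkbp:form_factor, gptkbp:key)
# simply become multiple triples with the same subject and predicate.
for value in g.objects(h100, GPTKBP.ram):
    print(value)
```

Modelled this way, each cell value in the Object column counts as one statement, which is how a single predicate such as gptkbp:key can contribute many of the 56 statements.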