Statements (57)
| Predicate | Object |
|---|---|
| gptkbp:instance_of | gptkb:Graphics_Processing_Unit |
| gptkbp:ai | 1,000 TFLOPS |
| gptkbp:architecture | gptkb:Hopper |
| gptkbp:form_factor | gptkb:PCIe, SXM |
| https://www.w3.org/2000/01/rdf-schema#label | NVIDIA H100 |
| gptkbp:key_feature | Low Latency, Enhanced Security, High Throughput, Scalable Architecture, Dynamic Memory Management, Support for Kubernetes, Support for ONNX, Support for Vulkan, Support for Virtualization, Support for CUDA, Support for Cloud Computing, Support for AI Workloads, Support for HPC Workloads, Support for Containerization, Support for DirectX, Support for OpenCL, Multi-Instance GPU, Support for OpenGL, Support for TensorFlow, Support for AI Frameworks, Support for Caffe, Support for Large Language Models, Support for PyTorch, Support for TensorRT, Support for Transformer Models, Support for cuDNN, Support for MXNet |
| gptkbp:launch_event | GTC 2022 |
| gptkbp:manufacturer | gptkb:NVIDIA |
| gptkbp:manufacturing_process | 4 nm |
| gptkbp:memory_type | gptkb:HBM3 |
| gptkbp:network | gptkb:Yes |
| gptkbp:number_of_cores | 528, 16864 |
| gptkbp:pciexpress_version | 5.0 |
| gptkbp:performance | 30 TFLOPS, 60 TFLOPS |
| gptkbp:power_consumption | 300 W |
| gptkbp:predecessor | gptkb:NVIDIA_A100 |
| gptkbp:ram | 80 GB, 2 TB/s |
| gptkbp:release_date | gptkb:2022 |
| gptkbp:successor | gptkb:NVIDIA_H200 |
| gptkbp:target_market | Data Centers |
| gptkbp:use_case | gptkb:machine_learning, gptkb:Deep_Learning, High-Performance Computing, AI Training |
| gptkbp:bfsParent | gptkb:NVIDIA_Corporation, gptkb:NVIDIA |
| gptkbp:bfsLayer | 4 |
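The statements above are predicate/object pairs where some predicates carry multiple objects (e.g. `gptkbp:form_factor`, `gptkbp:use_case`). A minimal Python sketch of how such a dump can be held and queried in memory — `H100_STATEMENTS` and `objects_for` are illustrative names, not part of any gptkb API, and only a few rows from the table are included:

```python
# A minimal sketch: a subset of the statements above as a predicate -> objects
# mapping, so multi-valued predicates keep all their objects.
H100_STATEMENTS = {
    "gptkbp:instance_of": ["gptkb:Graphics_Processing_Unit"],
    "gptkbp:architecture": ["gptkb:Hopper"],
    "gptkbp:form_factor": ["gptkb:PCIe", "SXM"],
    "gptkbp:predecessor": ["gptkb:NVIDIA_A100"],
    "gptkbp:successor": ["gptkb:NVIDIA_H200"],
    "gptkbp:ram": ["80 GB", "2 TB/s"],
}

def objects_for(predicate: str) -> list[str]:
    """Return all objects recorded for a predicate (empty list if absent)."""
    return H100_STATEMENTS.get(predicate, [])

print(objects_for("gptkbp:successor"))   # ['gptkb:NVIDIA_H200']
print(objects_for("gptkbp:form_factor")) # ['gptkb:PCIe', 'SXM']
```

Keeping objects in a list rather than a single value preserves rows like `gptkbp:ram`, where the dump records both capacity and bandwidth under one predicate.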