Statements (50)
Predicate | Object |
---|---|
gptkbp:instance_of | gptkb:Graphics_Processing_Unit |
gptkbp:bfsLayer | 7 |
gptkbp:bfsParent | gptkb:Tesla_A100 |
gptkbp:architectural_style | gptkb:Hopper |
gptkbp:availability | Global |
gptkbp:cooling_system | Active Cooling |
gptkbp:form_factor | PCIe |
gptkbp:gpu | 5.0 |
https://www.w3.org/2000/01/rdf-schema#label | Tesla H100 |
gptkbp:intelligence | 1000 TOPS |
gptkbp:is_compatible_with | NVIDIA CUDA, NVIDIA AI Enterprise |
gptkbp:key | Energy Efficiency, Scalability, Low Latency, Enhanced Security, High Throughput, Dynamic Memory Allocation, AI Model Optimization, Support for Graph Neural Networks, Support for Reinforcement Learning, Support for Federated Learning, Multi-instance GPU, Support for Large Language Models, Support for Transformer Models, Support for Quantum Computing, Support for Edge AI |
gptkbp:manufacturer | gptkb:DJ |
gptkbp:market | $30,000 |
gptkbp:network | gptkb:battle |
gptkbp:number_of_cores | 256, 8192 |
gptkbp:performance | 30 TFLOPS, MLPerf 60 TFLOPS |
gptkbp:power_consumption | 350 W |
gptkbp:predecessor | gptkb:Tesla_A100 |
gptkbp:produced_by | 4nm |
gptkbp:ram | 80 GB HBM3, 2 TB/s |
gptkbp:release_year | gptkb:2022 |
gptkbp:released | GTC 2022 |
gptkbp:successor | Tesla H200 |
gptkbp:target_market | Data Centers |
gptkbp:targets | gptkb:Company, gptkb:Deep_Learning, Scientific Computing |
gptkbp:use_case | gptkb:software_framework, High-Performance Computing, AI Training |
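The rows above are predicate–object statements about the Tesla H100 entity. As a minimal sketch of how a few of them could be loaded and inspected as RDF triples with Python's rdflib, see below; the base IRIs for the `gptkb:` and `gptkbp:` prefixes are assumptions, since the table only shows the prefixed names.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDFS

# Assumed base IRIs for the gptkb:/gptkbp: prefixes (not given in the table above).
GPTKB = Namespace("https://gptkb.example.org/entity/")
GPTKBP = Namespace("https://gptkb.example.org/property/")

g = Graph()
g.bind("gptkb", GPTKB)
g.bind("gptkbp", GPTKBP)

h100 = GPTKB["Tesla_H100"]

# A handful of the statements from the table, expressed as RDF triples.
g.add((h100, GPTKBP["instance_of"], GPTKB["Graphics_Processing_Unit"]))
g.add((h100, GPTKBP["architectural_style"], GPTKB["Hopper"]))
g.add((h100, GPTKBP["predecessor"], GPTKB["Tesla_A100"]))
g.add((h100, GPTKBP["power_consumption"], Literal("350 W")))
g.add((h100, RDFS.label, Literal("Tesla H100")))

# Serialize to Turtle to inspect the graph.
print(g.serialize(format="turtle"))
```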