Statements (31)
| Predicate | Object |
|---|---|
| gptkbp:instanceOf | gptkb:graphics_card |
| gptkbp:architecture | gptkb:Hopper |
| gptkbp:coreCount | 16896 |
| gptkbp:formFactor | gptkb:SXM |
| gptkbp:interface | gptkb:SXM5 |
| gptkbp:manufacturer | gptkb:Nvidia |
| gptkbp:memoryBandwidth | 3 TB/s |
| gptkbp:memoryType | gptkb:HBM3 |
| gptkbp:processNode | gptkb:TSMC_4N |
| gptkbp:productType | gptkb:Nvidia_H100 |
| gptkbp:RAM | 80 GB, 94 GB |
| gptkbp:releaseYear | 2022 |
| gptkbp:supports | gptkb:NVLink_4.0, gptkb:PCIe_Gen5, BF16, FP16, INT8, FP32, FP64, TF32, Sparsity |
| gptkbp:TDP | 700 W |
| gptkbp:Tensor_Cores | 528 |
| gptkbp:transistorCount | 80 billion |
| gptkbp:uses | AI inference, AI training, High Performance Computing |
| gptkbp:bfsParent | gptkb:Nvidia_Hopper |
| gptkbp:bfsLayer | 6 |
| https://www.w3.org/2000/01/rdf-schema#label | Nvidia H100 SXM |
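The statements above follow a plain subject-predicate-object pattern. As a minimal sketch of how a few of them could be loaded and queried programmatically with Python's rdflib (the namespace IRIs below are placeholders, not taken from this page, and rdflib is only one possible representation):

```python
# Minimal sketch: a handful of the statements above as RDF triples.
# The GPTKB/GPTKBP namespace IRIs are hypothetical placeholders.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDFS

GPTKB = Namespace("https://example.org/gptkb/")    # assumed entity namespace
GPTKBP = Namespace("https://example.org/gptkbp/")  # assumed property namespace

g = Graph()
h100 = GPTKB["Nvidia_H100_SXM"]

# A few statements copied from the table above
g.add((h100, GPTKBP["instanceOf"], GPTKB["graphics_card"]))
g.add((h100, GPTKBP["architecture"], GPTKB["Hopper"]))
g.add((h100, GPTKBP["coreCount"], Literal(16896)))
g.add((h100, GPTKBP["memoryType"], GPTKB["HBM3"]))
g.add((h100, GPTKBP["TDP"], Literal("700 W")))
g.add((h100, GPTKBP["supports"], GPTKB["NVLink_4.0"]))
g.add((h100, GPTKBP["supports"], GPTKB["PCIe_Gen5"]))
g.add((h100, RDFS.label, Literal("Nvidia H100 SXM")))

# List every predicate/object pair recorded for the entity
for predicate, obj in g.predicate_objects(subject=h100):
    print(predicate, obj)
```

The same graph can be serialized for exchange, for example with `g.serialize(format="turtle")`.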