Statements (29)
| Predicate | Object |
|---|---|
| gptkbp:instanceOf | gptkb:graphics_card |
| gptkbp:architecture | gptkb:Hopper |
| gptkbp:formFactor | gptkb:PCIe, gptkb:SXM |
| gptkbp:manufacturer | gptkb:NVIDIA |
| gptkbp:memoryType | gptkb:HBM3 |
| gptkbp:peakFP64Performance | up to 67 TFLOPS (Tensor Core) |
| gptkbp:peakFP8Performance | up to 4,000 TFLOPS (Tensor Core, with sparsity) |
| gptkbp:processNode | gptkb:TSMC_4N |
| gptkbp:productType | gptkb:NVIDIA_Hopper |
| gptkbp:RAM | up to 80 GB |
| gptkbp:releaseYear | 2022 |
| gptkbp:predecessor | gptkb:A100_GPUs |
| gptkbp:supports | gptkb:NVLink, gptkb:NVSwitch, gptkb:PCIe_Gen5, gptkb:Tensor_Cores, FP16 precision, FP8 precision |
| gptkbp:targetMarket | AI inference, AI training, high-performance computing |
| gptkbp:transistorCount | 80 billion |
| gptkbp:usedIn | gptkb:NVIDIA_DGX_H100, cloud computing, supercomputers |
| gptkbp:bfsParent | gptkb:Lambda_GPU_Cloud |
| gptkbp:bfsLayer | 7 |
| https://www.w3.org/2000/01/rdf-schema#label | H100 GPUs |
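The table above is a set of predicate–object statements about a single subject, with some predicates (e.g. `gptkbp:formFactor`, `gptkbp:supports`) carrying multiple values. A minimal sketch of how such statements can be held as pairs and indexed per predicate — the prefixed names are copied from the table, and `by_predicate` is an illustrative helper, not part of any gptkb API:

```python
from collections import defaultdict

# A few of the statements from the table, as (predicate, object) pairs.
STATEMENTS = [
    ("gptkbp:instanceOf", "gptkb:graphics_card"),
    ("gptkbp:architecture", "gptkb:Hopper"),
    ("gptkbp:formFactor", "gptkb:PCIe"),
    ("gptkbp:formFactor", "gptkb:SXM"),
    ("gptkbp:supports", "gptkb:NVLink"),
    ("gptkbp:supports", "gptkb:NVSwitch"),
    ("https://www.w3.org/2000/01/rdf-schema#label", "H100 GPUs"),
]

def by_predicate(statements):
    """Group object values under their predicate, preserving order."""
    index = defaultdict(list)
    for predicate, obj in statements:
        index[predicate].append(obj)
    return dict(index)

index = by_predicate(STATEMENTS)
print(index["gptkbp:formFactor"])  # -> ['gptkb:PCIe', 'gptkb:SXM']
```

Multi-valued predicates come back as lists, which matches how the table shows several objects per predicate row.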