Statements (29)
| Predicate | Object | 
|---|---|
| gptkbp:instanceOf | gptkb:graphics_card | 
| gptkbp:architecture | gptkb:Hopper | 
| gptkbp:coreCount | 16,896 (CUDA cores per GPU) | 
| gptkbp:formFactor | gptkb:PCIe | 
| gptkbp:FP16Performance | 1979 TFLOPS (per GPU, with sparsity) | 
| gptkbp:FP8Performance | 3958 TFLOPS (per GPU, with sparsity) | 
| gptkbp:intendedUse | AI inference, AI training | 
| gptkbp:manufacturer | gptkb:Nvidia | 
| gptkbp:market | gptkb:cloud_service | 
| gptkbp:memoryBandwidth | 3.9 TB/s per GPU (7.8 TB/s combined) | 
| gptkbp:memoryType | gptkb:HBM3 | 
| gptkbp:numberOfGPUsPerBoard | 2 (sold as a pair of PCIe cards linked by NVLink bridges) | 
| gptkbp:NVLinkBandwidth | 600 GB/s | 
| gptkbp:processNode | gptkb:TSMC_4N | 
| gptkbp:productType | gptkb:Nvidia_H100 | 
| gptkbp:RAM | 188 GB HBM3 (94 GB per GPU) | 
| gptkbp:releaseYear | 2023 | 
| gptkbp:successor | gptkb:Nvidia_H200_NVL | 
| gptkbp:supportsFP16 | true | 
| gptkbp:supportsFP8 | true | 
| gptkbp:supportsMultiGPU | true | 
| gptkbp:supportsNVLink | true | 
| gptkbp:supportsPCIeGen5 | true | 
| gptkbp:supportsTransformerEngine | true | 
| gptkbp:TDP | 700 W | 
| gptkbp:bfsParent | gptkb:Nvidia_Hopper | 
| gptkbp:bfsLayer | 6 | 
| https://www.w3.org/2000/01/rdf-schema#label | Nvidia H100 NVL |
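
These statements are plain subject–predicate–object triples, so they can be loaded into any RDF toolkit. The sketch below uses Python's `rdflib` to rebuild a handful of them and query the graph; the `gptkb`/`gptkbp` namespace IRIs shown are placeholders assumed for illustration, not the knowledge base's actual URIs.

```python
from rdflib import Graph, Literal, Namespace, RDFS

# Placeholder namespaces -- the real gptkb/gptkbp IRIs may differ.
GPTKB = Namespace("https://gptkb.example/entity/")
GPTKBP = Namespace("https://gptkb.example/property/")

g = Graph()
card = GPTKB["Nvidia_H100_NVL"]

# Re-create a few of the statements from the table above as triples.
g.add((card, RDFS.label, Literal("Nvidia H100 NVL")))
g.add((card, GPTKBP.manufacturer, GPTKB["Nvidia"]))
g.add((card, GPTKBP.architecture, GPTKB["Hopper"]))
g.add((card, GPTKBP.numberOfGPUsPerBoard, Literal(2)))
g.add((card, GPTKBP.memoryType, GPTKB["HBM3"]))
g.add((card, GPTKBP.RAM, Literal("188 GB")))
g.add((card, GPTKBP.TDP, Literal("700 W")))

# Look up a single value for the entity ...
print(g.value(card, GPTKBP.TDP))  # 700 W

# ... or iterate over every statement recorded about it.
for predicate, obj in g.predicate_objects(card):
    print(predicate, obj)
```

The same graph can also be queried with SPARQL via `g.query(...)`, which is the more typical route when filtering statements across many entities rather than a single one.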