Statements (33)
Predicate | Object |
---|---|
gptkbp:instanceOf | gptkb:graphics_card |
gptkbp:architecture | gptkb:Ampere |
gptkbp:formFactor | gptkb:SXM4, gptkb:PCIe |
gptkbp:hasModel | gptkb:A100 |
https://www.w3.org/2000/01/rdf-schema#label | A100 GPUs |
gptkbp:intendedUse | gptkb:cloud_service, AI inference, AI training, high performance computing |
gptkbp:manufacturer | gptkb:NVIDIA |
gptkbp:marketedAs | gptkb:business, research |
gptkbp:memoryType | gptkb:HBM2 |
gptkbp:peakFP64Performance | 9.7 TFLOPS (19.5 TFLOPS with FP64 Tensor Cores) |
gptkbp:peakTensorPerformance | 312 TFLOPS |
gptkbp:predecessor | V100 GPUs |
gptkbp:processNode | 7 nm |
gptkbp:RAM | 40 GB, 80 GB |
gptkbp:releaseYear | 2020 |
gptkbp:successor | gptkb:H100_GPUs |
gptkbp:supports | gptkb:CUDA, gptkb:NVLink, gptkb:PCIe_4.0, gptkb:Multi-Instance_GPU_(MIG), gptkb:NVSwitch, gptkb:Tensor_Cores |
gptkbp:usedIn | gptkb:NVIDIA_DGX_systems, cloud computing services |
gptkbp:bfsParent | gptkb:Lambda_GPU_Cloud |
gptkbp:bfsLayer | 7 |
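The predicate/object pairs above are plain RDF-style triples, so they can be loaded and queried programmatically. Below is a minimal sketch using Python's rdflib; the `gptkb`/`gptkbp` namespace URIs are placeholders (assumptions), not the knowledge base's published namespaces, and only a few of the 33 statements are reproduced.

```python
# Minimal sketch: loading a handful of the statements above as RDF triples.
# The namespace URIs are hypothetical stand-ins, not the real gptkb namespaces.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDFS

GPTKB = Namespace("http://example.org/gptkb/")    # hypothetical entity namespace
GPTKBP = Namespace("http://example.org/gptkbp/")  # hypothetical predicate namespace

g = Graph()
g.bind("gptkb", GPTKB)
g.bind("gptkbp", GPTKBP)

a100 = GPTKB["A100_GPUs"]
g.add((a100, RDFS.label, Literal("A100 GPUs")))
g.add((a100, GPTKBP.manufacturer, GPTKB["NVIDIA"]))
g.add((a100, GPTKBP.architecture, GPTKB["Ampere"]))
g.add((a100, GPTKBP.releaseYear, Literal(2020)))

# Multi-valued predicates such as gptkbp:supports simply become repeated triples.
for feature in ("CUDA", "NVLink", "PCIe_4.0", "Multi-Instance_GPU_(MIG)"):
    g.add((a100, GPTKBP.supports, GPTKB[feature]))

# Query back everything the entity is said to support.
for _, _, obj in g.triples((a100, GPTKBP.supports, None)):
    print(obj)
```

A SPARQL query over the same graph would work equally well; the point is only that rows with several objects (formFactor, intendedUse, supports, usedIn) map onto repeated triples rather than a single multi-valued cell.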