Statements (43)
Predicate | Object |
---|---|
gptkbp:instanceOf | gptkb:graphics_card |
gptkbp:architecture | gptkb:Ampere |
gptkbp:coreCount | 432, 6912 |
gptkbp:formFactor | gptkb:SXM4, gptkb:PCIe |
https://www.w3.org/2000/01/rdf-schema#label | Nvidia A100 Tensor Core GPU |
gptkbp:interface | gptkb:NVLink, gptkb:PCIe_4.0 |
gptkbp:manufacturer | gptkb:Nvidia |
gptkbp:market | cloud computing, data centers, supercomputing |
gptkbp:memoryBusWidth | 1555 GB/s |
gptkbp:memoryType | gptkb:HBM2 |
gptkbp:notebookCheckScore | high |
gptkbp:predecessor | gptkb:Nvidia_V100 |
gptkbp:processNode | 7 nm |
gptkbp:productType | gptkb:Nvidia_Data_Center_GPUs |
gptkbp:RAM | 40 GB, 80 GB |
gptkbp:releaseDate | May 2020 |
gptkbp:successor | gptkb:Nvidia_H100 |
gptkbp:supports | gptkb:CUDA, gptkb:NVIDIA_GPU_Direct, gptkb:Multi-Instance_GPU_(MIG), gptkb:NVLink_3.0, gptkb:PCIe_Gen4, gptkb:ECC_memory, gptkb:Tensor_Cores, FP16, INT8, FP32, FP64, TF32 |
gptkbp:TDP | 400 W |
gptkbp:transistorCount | 54.2 billion |
gptkbp:uses | gptkb:HPC, data analytics, AI inference, AI training |
gptkbp:bfsParent | gptkb:Nvidia_HGX |
gptkbp:bfsLayer | 7 |
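
These statements are plain subject–predicate–object triples, so they can be loaded into an RDF toolkit and queried programmatically. Below is a minimal sketch using Python's rdflib that rebuilds a few of the rows above as a graph; the namespace URIs and the subject IRI are assumptions for illustration and are not given in the table itself.

```python
# Minimal sketch: a few of the statements above rebuilt as RDF triples with rdflib.
# The namespace URIs and the subject IRI are assumed for illustration only;
# the table does not specify them.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDFS

GPTKB = Namespace("https://gptkb.org/entity/")      # assumed entity namespace
GPTKBP = Namespace("https://gptkb.org/property/")   # assumed property namespace

g = Graph()
g.bind("gptkb", GPTKB)
g.bind("gptkbp", GPTKBP)

subject = GPTKB["Nvidia_A100_Tensor_Core_GPU"]      # assumed subject IRI

# A sample of the statements from the table above.
g.add((subject, RDFS.label, Literal("Nvidia A100 Tensor Core GPU")))
g.add((subject, GPTKBP.architecture, GPTKB.Ampere))
g.add((subject, GPTKBP.manufacturer, GPTKB.Nvidia))
g.add((subject, GPTKBP.predecessor, GPTKB.Nvidia_V100))
g.add((subject, GPTKBP.successor, GPTKB.Nvidia_H100))
g.add((subject, GPTKBP.RAM, Literal("40 GB")))
g.add((subject, GPTKBP.RAM, Literal("80 GB")))

# Serialize the sample graph as Turtle.
print(g.serialize(format="turtle"))
```

Once loaded this way, the remaining rows can be added the same way, and the graph can be queried with rdflib's SPARQL support (`g.query(...)`) or serialized to other RDF formats.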