gptkbp:instanceOf
|
gptkb:graphics_card
|
gptkbp:architecture
|
gptkb:Ampere
|
gptkbp:coreCount
|
6912
|
gptkbp:formFactor
|
gptkb:SXM4_module
PCIe card
|
https://www.w3.org/2000/01/rdf-schema#label
|
NVIDIA A100 GPU
|
gptkbp:interface
|
gptkb:NVLink
gptkb:PCIe_4.0
|
gptkbp:manufacturer
|
gptkb:NVIDIA
|
gptkbp:market
|
gptkb:cloud_service
cloud computing
|
gptkbp:memoryBandwidth
|
1555 GB/s
|
gptkbp:memoryType
|
gptkb:HBM2e
|
gptkbp:predecessor
|
gptkb:NVIDIA_V100_GPU
|
gptkbp:processNode
|
7 nm
|
gptkbp:RAM
|
40 GB
80 GB
|
gptkbp:releaseDate
|
2020
|
gptkbp:successor
|
gptkb:NVIDIA_H100_GPU
|
gptkbp:supports
|
gptkb:NVIDIA_GPU_Direct
gptkb:OpenGL
gptkb:Multi-Instance_GPU_(MIG)
gptkb:NVLink_3.0
gptkb:TensorFloat-32
gptkb:NVIDIA_TensorRT
gptkb:NVIDIA_cuDNN
gptkb:PCIe_Gen4
gptkb:NVIDIA_NCCL
gptkb:OpenCL
gptkb:SR-IOV
gptkb:Vulkan
gptkb:DirectX_12
gptkb:ECC_memory
gptkb:NVIDIA_CUDA
gptkb:Tensor_Cores
virtualization
secure boot
FP16
FP32
FP64
hardware root of trust
|
gptkbp:TDP
|
400 W
|
gptkbp:transistorCount
|
54.2 billion
|
gptkbp:usedFor
|
high-performance computing
data analytics
AI inference
AI training
|
gptkbp:usedIn
|
AI research
supercomputers
cloud services
|
gptkbp:bfsParent
|
gptkb:NVIDIA_Virtual_Server
gptkb:NVLink
|
gptkbp:bfsLayer
|
6
|