gptkbp:instance_of
|
gptkb:Graphics_Processing_Unit
|
gptkbp:architecture
|
gptkb:Ampere
|
gptkbp:compatibility
|
gptkb:NVIDIA_DGX_Systems
gptkb:NVIDIA_HGX_Systems
|
gptkbp:cooling_system
|
Active Cooling
|
gptkbp:form_factor
|
gptkb:PCIe
SXM4
|
https://www.w3.org/2000/01/rdf-schema#label
|
NVIDIA A100
|
gptkbp:is_a_framework_for
|
gptkb:Tensor_Flow
gptkb:Caffe
gptkb:MXNet
gptkb:Py_Torch
|
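The frameworks listed above run on the A100 through its CUDA stack; a minimal PyTorch sketch (assuming a CUDA-enabled PyTorch build with an A100 visible as device 0) that checks the device and enables the Ampere TF32 Tensor Core path:

    # Minimal sketch: verify the A100 is visible and run a matmul with TF32 enabled.
    import torch

    assert torch.cuda.is_available()
    print(torch.cuda.get_device_name(0))         # e.g. "NVIDIA A100-SXM4-40GB"

    # On Ampere, matmuls may use TF32 Tensor Cores when these PyTorch flags are set.
    torch.backends.cuda.matmul.allow_tf32 = True
    torch.backends.cudnn.allow_tf32 = True

    a = torch.randn(4096, 4096, device="cuda")
    b = torch.randn(4096, 4096, device="cuda")
    c = a @ b                                    # executed on the GPU, eligible for TF32 Tensor Cores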
gptkbp:key_feature
|
Scalability
High Throughput
High Memory Bandwidth
Multi-Instance GPU (MIG)
Versatile Workloads
|
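Multi-Instance GPU (MIG), listed above, partitions a single A100 into up to seven isolated GPU instances; a minimal sketch (assuming the NVIDIA driver's nvidia-smi tool is on PATH) that lists the physical GPU and any MIG devices:

    # Minimal sketch: list GPUs and, when MIG is enabled, the MIG devices under them.
    import subprocess

    out = subprocess.run(["nvidia-smi", "-L"], capture_output=True, text=True, check=True)
    print(out.stdout)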
gptkbp:manufacturer
|
gptkb:NVIDIA
|
gptkbp:market_position
|
High-End GPU
|
gptkbp:memory_type
|
gptkb:HBM2
|
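The "High Memory Bandwidth" key feature above comes from the HBM2 stacks; a rough sketch of the arithmetic (assuming the 40 GB model's published 5120-bit interface and roughly 2.43 Gb/s per-pin data rate, both assumptions here):

    # Rough memory-bandwidth arithmetic for the 40 GB HBM2 configuration (assumed figures).
    bus_width_bits = 5120          # five HBM2 stacks x 1024 bits
    data_rate_gbps = 2.43          # approximate per-pin data rate (assumption)
    bandwidth_gbs = bus_width_bits * data_rate_gbps / 8
    print(round(bandwidth_gbs))    # ~1555 GB/s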
gptkbp:network
|
NVLink (3rd generation, 600 GB/s)
|
gptkbp:number_of_cores
|
432 (Tensor Cores)
6912 (CUDA Cores)
|
gptkbp:pcie_gen
|
4.0
|
gptkbp:performance
|
gptkb:MLPerf
156 TFLOPS (TF32 Tensor Core)
19.5 TFLOPS (FP32)
624 TOPS (INT8 Tensor Core)
SPEC ACCEL
|
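The FP32 and TF32 figures above are consistent with the core counts listed earlier; a back-of-the-envelope sketch (assuming the A100's published 1410 MHz boost clock, one FMA i.e. 2 FLOPs per CUDA core per clock, and ~256 TF32 FLOPs per Tensor Core per clock, the last being an assumption inferred from the arithmetic):

    # Back-of-the-envelope check of the peak-throughput figures (assumed 1.41 GHz boost clock).
    cuda_cores = 6912
    tensor_cores = 432
    boost_clock_hz = 1.41e9

    fp32_tflops = cuda_cores * 2 * boost_clock_hz / 1e12       # 2 FLOPs per FMA
    tf32_tflops = tensor_cores * 256 * boost_clock_hz / 1e12   # ~256 TF32 FLOPs/Tensor Core/clock (assumption)
    print(round(fp32_tflops, 1), round(tf32_tflops, 1))        # ~19.5 and ~156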
gptkbp:power_consumption
|
400 W (SXM4)
250 W (PCIe)
|
gptkbp:predecessor
|
gptkb:NVIDIA_V100
|
gptkbp:provides_support_for
|
gptkb:Tensor_RT
gptkb:cu_DNN
gptkb:NVIDIA_NGC
gptkb:CUDA
|
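The CUDA stack above also exposes device telemetry through NVML; a minimal sketch (assuming the pynvml / nvidia-ml-py bindings are installed) that reads the name, memory size, and power limit the driver reports for the card:

    # Minimal sketch: query the A100 through NVML (assumes the pynvml / nvidia-ml-py package).
    import pynvml

    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)
    name = pynvml.nvmlDeviceGetName(handle)                     # e.g. "NVIDIA A100-SXM4-40GB"
    mem = pynvml.nvmlDeviceGetMemoryInfo(handle).total          # total HBM2 in bytes
    power = pynvml.nvmlDeviceGetPowerManagementLimit(handle)    # board power limit in milliwatts
    print(name, mem // 2**30, "GiB", power / 1000, "W")
    pynvml.nvmlShutdown()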
gptkbp:ram
|
40 GB
|
gptkbp:release_date
|
gptkb:2020
|
gptkbp:successor
|
gptkb:NVIDIA_H100
|
gptkbp:target_audience
|
gptkb:researchers
Enterprises
Data Scientists
AI Developers
Cloud Providers
|
gptkbp:target_market
|
Data Centers
|
gptkbp:use_case
|
High-Performance Computing
AI Inference
AI Training
|
gptkbp:bfsParent
|
gptkb:NVIDIA_Corporation
gptkb:CUDA
gptkb:NVIDIA
|
gptkbp:bfsLayer
|
4
|