gptkbp:instance_of
|
AI server system
|
gptkbp:architecture
|
gptkb:Ampere
|
gptkbp:availability
|
global
|
gptkbp:compatibility
|
gptkb:NVIDIA_CUDA
gptkb:NVIDIA_Tensor_RT
gptkb:NVIDIA_cu_DNN
|
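A minimal sketch of how the CUDA/cuDNN stack listed under gptkbp:compatibility might be verified on a DGX A100 node, assuming PyTorch is installed (the framework choice and the helper function are illustrative assumptions, not part of this entry):

```python
# Sketch: confirm the CUDA / cuDNN stack is visible on a DGX A100 node.
# Assumes PyTorch is installed; any CUDA-enabled framework would do.
import torch

def check_stack() -> None:
    assert torch.cuda.is_available(), "no CUDA-capable device detected"
    print("CUDA version (build):", torch.version.cuda)        # CUDA toolkit PyTorch was built against
    print("cuDNN version:", torch.backends.cudnn.version())   # e.g. an integer such as 8902
    for i in range(torch.cuda.device_count()):                # 8x A100 on a DGX A100
        print(f"GPU {i}:", torch.cuda.get_device_name(i))

if __name__ == "__main__":
    check_stack()
```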
gptkbp:cooling_system
|
active cooling
|
gptkbp:dimensions
|
10.4 x 19.0 x 35.3 inches (H x W x D)
|
gptkbp:feature
|
gptkb:Open_Stack
cloud integration
collaborative tools
data visualization tools
fault tolerance
high availability
low latency
energy efficient
containerized applications
high throughput
remote management
scalable architecture
secure boot
API support
multi-GPU support
high-speed interconnects
AI model optimization
data center ready
|
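The "multi-GPU support" feature in the list above refers to the eight A100s inside a single system; a minimal sketch of single-node data parallelism, assuming PyTorch (the linear layer and batch size are placeholders):

```python
# Sketch: single-node data parallelism across the GPUs in one DGX A100.
# PyTorch is an assumption; the model and batch size are placeholders.
import torch
import torch.nn as nn

def main() -> None:
    model = nn.Linear(1024, 1024)              # placeholder model
    model = nn.DataParallel(model).cuda()      # replicates the model on every visible GPU
    x = torch.randn(256, 1024, device="cuda")  # the batch is split across the replicas
    y = model(x)
    print("output:", tuple(y.shape), "| GPUs used:", torch.cuda.device_count())

if __name__ == "__main__":
    main()
```

For the multi-node configurations noted under gptkbp:is_scalable, torch.nn.parallel.DistributedDataParallel would be the usual choice; this sketch only covers the single-system case.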
gptkbp:form_factor
|
6U rackmount
|
gptkbp:gpu
|
gptkb:NVIDIA_A100
|
gptkbp:has_programs
|
gptkb:NVIDIA_NGC
|
https://www.w3.org/2000/01/rdf-schema#label
|
NVIDIA DGX A100
|
gptkbp:is_scalable
|
multi-node configurations
|
gptkbp:manufacturer
|
gptkb:NVIDIA
|
gptkbp:network
|
gptkb:NVIDIA_Mellanox_Connect_X-6
|
gptkbp:operating_system
|
gptkb:Ubuntu
|
gptkbp:performance
|
gptkb:SPEC_CPU
gptkb:HPCG
gptkb:MLPerf
5 petaFLOPS
|
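The 5 petaFLOPS figure is the commonly quoted aggregate peak AI throughput; a back-of-the-envelope check, assuming the A100's published per-GPU peak of 624 TFLOPS (FP16 Tensor Core with structured sparsity):

```python
# Back-of-the-envelope check of the 5 petaFLOPS aggregate AI figure.
# 624 TFLOPS is the A100's published peak FP16 Tensor Core rate with structured sparsity.
gpus = 8
per_gpu_tflops = 624
total_pflops = gpus * per_gpu_tflops / 1000
print(f"{total_pflops:.2f} PFLOPS")  # ~4.99, i.e. the quoted "5 petaFLOPS"
```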
gptkbp:power_consumption
|
6.5 kW (maximum system power)
|
gptkbp:price
|
starting at $199,000
|
gptkbp:ram
|
320 GB HBM2 (total GPU memory)
1 TB system RAM
|
gptkbp:release_date
|
May 2020
|
gptkbp:storage
|
15 TB NVMe SSD
|
gptkbp:support
|
gptkb:NVIDIA_AI_Enterprise
NVIDIA support services
|
gptkbp:target_market
|
cloud service providers
enterprise
research institutions
|
gptkbp:training
|
gptkb:AI_technology
machine learning models
data science workflows
|
gptkbp:use_case
|
data analytics
deep learning
AI training
|
gptkbp:weight
|
approximately 123 kg (271 lbs)
|
gptkbp:bfsParent
|
gptkb:NVIDIA
|
gptkbp:bfsLayer
|
4
|