gptkbp:instance_of
|
AI workstation
|
gptkbp:ai
|
gptkb:Tensor_Flow
gptkb:Caffe
gptkb:MXNet
gptkb:Py_Torch
|
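The frameworks listed above (TensorFlow, Caffe, MXNet, PyTorch) all run GPU-accelerated on the system's A100 GPUs. A minimal sketch, assuming a CUDA-enabled PyTorch build (e.g. from NVIDIA NGC or PyPI), that verifies the GPUs are visible to one of the listed frameworks:

```python
# Minimal sketch: check that PyTorch (one of the listed frameworks) can see
# the DGX Station A100's GPUs. Assumes a CUDA-enabled PyTorch installation.
import torch

if torch.cuda.is_available():
    print(f"CUDA devices visible: {torch.cuda.device_count()}")  # expected: 4 on a DGX Station A100
    for i in range(torch.cuda.device_count()):
        print(f"  GPU {i}: {torch.cuda.get_device_name(i)}")     # e.g. an NVIDIA A100 variant
else:
    print("No CUDA device visible; check the driver and the CUDA-enabled framework build.")
```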
gptkbp:aiinference_performance
|
up to 10x faster than CPU
|
gptkbp:aitraining_performance
|
up to 20x faster than CPU
|
gptkbp:architecture
|
gptkb:Ampere
|
gptkbp:cloud_integration
|
gptkb:Yes
|
gptkbp:compatibility
|
gptkb:NVIDIA_NGC
NVIDIA AI software stack
|
gptkbp:cooling_system
|
liquid cooling
|
gptkbp:dimensions
|
24.5 x 17.5 x 8.5 inches
|
gptkbp:expansion_slots
|
PCIe 4.0
|
gptkbp:form_factor
|
gptkb:computer
desktop
|
gptkbp:gpu
|
gptkb:NVIDIA_A100
4
|
gptkbp:gpuarchitecture
|
gptkb:Ampere
|
gptkbp:has_ability
|
CUDA compute capability 8.0
|
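The 8.0 value above is the CUDA compute capability of the Ampere-based A100. A small sketch (again assuming a CUDA-enabled PyTorch build, consistent with the framework list earlier) that queries it at runtime:

```python
# Minimal sketch: query the CUDA compute capability of GPU 0.
# On an A100 (Ampere) this is expected to be (8, 0), i.e. compute capability 8.0.
import torch

major, minor = torch.cuda.get_device_capability(0)
print(f"Compute capability: {major}.{minor}")
```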
gptkbp:has_programs
|
gptkb:NVIDIA_DGX_OS
|
https://www.w3.org/2000/01/rdf-schema#label
|
DGX Station A100
|
gptkbp:manufacturer
|
gptkb:NVIDIA
|
gptkbp:network
|
gptkb:NVIDIA_Mellanox_Connect_X-6
gptkb:Yes
|
gptkbp:number_of_cores
|
6912 CUDA cores per A100 GPU
|
gptkbp:operating_system
|
gptkb:Ubuntu
|
gptkbp:performance
|
2.5 petaFLOPS (AI)
|
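The headline figure above is the usual "AI performance" number: peak FP16/BF16 Tensor Core throughput with structured sparsity, summed over the four GPUs. A back-of-the-envelope check, taking the per-GPU peak from the A100 datasheet as an assumption:

```python
# Back-of-the-envelope check of the aggregate "AI performance" figure.
# Assumption: peak FP16/BF16 Tensor Core throughput of one A100 with
# structured sparsity is ~624 TFLOPS (A100 datasheet value).
gpus = 4
tflops_per_gpu = 624                         # TFLOPS per A100, sparse FP16/BF16
total_pflops = gpus * tflops_per_gpu / 1000
print(f"~{total_pflops:.1f} petaFLOPS")      # ~2.5 petaFLOPS
```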
gptkbp:power_consumption
|
1500 W (maximum)
|
gptkbp:power_output
|
AC 100-240 V
|
gptkbp:predecessor
|
gptkb:DGX_Station_V100
|
gptkbp:price
|
varies by configuration
|
gptkbp:ram
|
320 GB (total GPU memory)
1555 GB/s (GPU memory bandwidth)
|
gptkbp:release_date
|
gptkb:2020
|
gptkbp:released
|
SC20 (November 2020)
|
gptkbp:remote_control
|
gptkb:Yes
|
gptkbp:security_features
|
gptkb:TPM_2.0
|
gptkbp:services_provided
|
NVIDIA Enterprise Support
|
gptkbp:slisupport
|
gptkb:Yes
|
gptkbp:storage
|
15 TB NVMe SSD
|
gptkbp:successor
|
none at launch (a Grace Blackwell-based DGX Station was announced in 2025)
|
gptkbp:support
|
NVIDIA support services
|
gptkbp:target_market
|
AI research and data science teams (office and workgroup environments)
|
gptkbp:target_use_case
|
AI training
|
gptkbp:training_time_reduction
|
up to 90%
|
gptkbp:use_case
|
gptkb:machine_learning
data analytics
deep learning
high-performance computing
|
gptkbp:virtualization_support
|
gptkb:NVIDIA_v_GPU
|
gptkbp:warranty
|
3 years
|
gptkbp:weight
|
approximately 100 kg
|
gptkbp:bfsParent
|
gptkb:NVIDIA's_DGX_Station
gptkb:NVIDIA_DGX_portfolio
|
gptkbp:bfsLayer
|
6
|