gptkbp:instance_of
|
gptkb:computer
|
gptkbp:ai
|
gptkb:Keras
gptkb:Caffe
gptkb:Chainer
gptkb:MXNet
gptkb:Oni
|
gptkbp:architecture
|
NVIDIA GPU architecture
|
gptkbp:cooling_system
|
liquid cooling
|
gptkbp:designed_for
|
deep learning
|
gptkbp:expansion_slots
|
PCIe slots
|
gptkbp:features
|
high-performance computing
|
gptkbp:form_factor
|
rack-mounted
|
gptkbp:energy_efficiency
|
high
|
gptkbp:gpucount
|
8 GPUs
|
gptkbp:has_programs
|
gptkb:NVIDIA_NGC
|
https://www.w3.org/2000/01/rdf-schema#label
|
DGX systems
|
gptkbp:includes
|
gptkb:DGX_A100
gptkb:DGX_Station
|
gptkbp:is_a_type_of
|
gptkb:NVIDIA_A100
|
gptkbp:is_scalable
|
multi-node
|
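The multi-node scalability noted above is normally exercised through a distributed training launcher. The following is a minimal sketch, assuming PyTorch with the NCCL backend and a torchrun-style launcher that sets RANK, LOCAL_RANK and WORLD_SIZE; the model, batch shape and learning rate are hypothetical placeholders.

```python
# Minimal data-parallel training step across the GPUs of one or more nodes.
# Assumes launch via torchrun (e.g. `torchrun --nproc_per_node=8 train.py`).
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # One process per GPU; NCCL is the usual backend on NVIDIA hardware.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Hypothetical tiny model standing in for a real network.
    model = torch.nn.Linear(1024, 1024).to(f"cuda:{local_rank}")
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

    # One synthetic step; gradients are all-reduced across all GPUs and nodes.
    x = torch.randn(32, 1024, device=f"cuda:{local_rank}")
    loss = model(x).sum()
    loss.backward()
    optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

For multi-node runs the same script is started on each node with torchrun's `--nnodes` and `--node_rank` options, with NCCL carrying the gradient traffic over the cluster interconnect.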
gptkbp:manufacturer
|
gptkb:NVIDIA
|
gptkbp:market_position
|
leading in AI computing
|
gptkbp:network
|
gptkb:NVIDIA_Mellanox
|
gptkbp:operating_system
|
gptkb:Linux
|
gptkbp:partnerships
|
various cloud providers
|
gptkbp:performance
|
FP32 performance
FP16 performance
INT8 performance
|
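The FP32, FP16 and INT8 entries above refer to the numeric precisions at which throughput is quoted. As an illustration only, a small mixed-precision (FP16/FP32) training loop using PyTorch automatic mixed precision might look like the following; the model, data and hyperparameters are hypothetical.

```python
# Mixed-precision (FP16/FP32) training sketch using PyTorch AMP.
import torch

device = "cuda"
model = torch.nn.Linear(512, 10).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler()  # rescales FP16 gradients to avoid underflow

x = torch.randn(64, 512, device=device)
target = torch.randint(0, 10, (64,), device=device)

for _ in range(10):
    optimizer.zero_grad()
    # Eligible ops in the forward pass run in FP16 on the Tensor Cores.
    with torch.cuda.amp.autocast():
        loss = torch.nn.functional.cross_entropy(model(x), target)
    scaler.scale(loss).backward()  # backward pass on the scaled loss
    scaler.step(optimizer)
    scaler.update()
```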
gptkbp:power_supply
|
3000 W
|
gptkbp:provides
|
AI training capabilities
|
gptkbp:ram
|
up to 1 TB
|
gptkbp:released
|
gptkb:2016
|
gptkbp:security_features
|
gptkb:TPM
secure boot
|
gptkbp:services_provided
|
NVIDIA Enterprise Support
|
gptkbp:storage
|
up to 15 TB
|
gptkbp:supports
|
gptkb:TensorFlow
gptkb:PyTorch
|
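A quick way to confirm that the supported frameworks see every GPU in the system is to enumerate the devices. The sketch below uses TensorFlow; PyTorch exposes the equivalent information through `torch.cuda.device_count()` and `torch.cuda.get_device_name()`.

```python
# List the GPUs visible to TensorFlow (illustrative check only).
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
print(f"Visible GPUs: {len(gpus)}")  # an 8-GPU DGX node should report 8
for gpu in gpus:
    # Optional: allocate GPU memory on demand rather than all at once.
    tf.config.experimental.set_memory_growth(gpu, True)
    print(gpu.name)
```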
gptkbp:target_market
|
data centers
research institutions
enterprise AI
|
gptkbp:training_time_reduction
|
up to 10x
|
gptkbp:use_case
|
gptkb:vehicles
gptkb:financial_services
gptkb:robotics
healthcare
data analysis
computer vision
image recognition
natural language processing
predictive analytics
smart cities
speech recognition
fraud detection
recommendation systems
|
gptkbp:uses
|
gptkb:CUDA
|
gptkbp:warranty
|
3 years
|
gptkbp:bfsParent
|
gptkb:NVIDIA
|
gptkbp:bfsLayer
|
4
|