gptkbp:instance_of
|
gptkb:microprocessor
|
gptkbp:architecture
|
NVIDIA GPU architecture
|
gptkbp:available_in
|
various configurations
|
gptkbp:competes_with
|
gptkb:IBM_Watson
gptkb:Microsoft_Azure_AI
gptkb:Google_Cloud_AI
gptkb:Sage
|
gptkbp:designed_for
|
gptkb:machine_learning
gptkb:AI_technology
deep learning
data science
|
gptkbp:features
|
data analytics
scalable architecture
model training
inference
AI frameworks
high-speed interconnects
|
gptkbp:form_factor
|
rack-mounted
2U
|
gptkbp:has_programs
|
gptkb:NVIDIA_GPU_Cloud
|
https://www.w3.org/2000/01/rdf-schema#label
|
NVIDIA DGX systems
|
gptkbp:includes
|
gptkb:NVIDIA_GPUs
|
gptkbp:manufacturer
|
gptkb:NVIDIA
|
gptkbp:model
|
gptkb:DGX-1
|
gptkbp:network
|
gptkb:NVIDIA_Mellanox
|
gptkbp:operating_system
|
gptkb:Linux
|
gptkbp:part_of
|
gptkb:NVIDIA_Cloud_Services
gptkb:NVIDIA_Data_Center
gptkb:NVIDIA_Deep_Learning
gptkb:NVIDIA_AI_Enterprise
gptkb:NVIDIA_Research
NVIDIA Ecosystem
NVIDIA Developer Program
NVIDIA Accelerated Computing
NVIDIA Partner Network
|
gptkbp:power_consumption
|
up to 3 kW
|
gptkbp:ram
|
up to 1 TB
|
gptkbp:released
|
gptkb:2016
|
gptkbp:storage
|
up to 30 TB
|
gptkbp:successor
|
gptkb:DGX_A100
|
gptkbp:supports
|
multi-GPU configurations
AI workloads
|
gptkbp:target_market
|
enterprise
research institutions
|
gptkbp:uses
|
gptkb:Tensor_Flow
gptkb:Caffe
gptkb:MXNet
gptkb:NVIDIA_NVLink
gptkb:NVIDIA_RAPIDS
gptkb:Py_Torch
|
gptkbp:bfsParent
|
gptkb:NVIDIA
|
gptkbp:bfsLayer
|
4
|