gptkbp:instance_of | gptkb:microprocessor
gptkbp:can_be_used_in | data centers
gptkbp:designed_for | deep learning
gptkbp:features | gptkb:NVIDIA_Tesla_V100_GPUs
https://www.w3.org/2000/01/rdf-schema#label | DGX Station
gptkbp:is_a_framework_for | gptkb:Tensor_Flow, gptkb:Caffe, gptkb:Py_Torch
gptkbp:is_available_in | multiple regions, various configurations
gptkbp:is_compatible_with | gptkb:NVIDIA_NGC, machine learning frameworks, various AI tools
gptkbp:is_considered_as | a powerful computing solution, a high-performance system, a platform for AI research, a server, a workstation, an AI development platform
gptkbp:is_known_for | gptkb:performance, scalability, ease of use
gptkbp:is_optimized_for | AI inference, AI training
gptkbp:is_part_of | gptkb:DGX_family, NVIDIA's product line, AI infrastructure, NVIDIA's AI strategy
gptkbp:is_supported_by | NVIDIA software stack, NVIDIA's customer service
gptkbp:is_targeted_at | research institutions
gptkbp:is_used_by | data scientists
gptkbp:is_used_for | gptkb:simulation, data analysis, predictive analytics, model training
gptkbp:is_used_in | healthcare, finance, automotive, retail
gptkbp:marketed_as | gptkb:educational_institutions, government agencies, enterprise customers, startups
gptkbp:network | 10 GbE
gptkbp:operating_system | gptkb:Ubuntu
gptkbp:power_source | 3200 W
gptkbp:produced_by | gptkb:NVIDIA
gptkbp:provides | high-performance computing
gptkbp:ram | 512 GB
gptkbp:released_in | gptkb:2017
gptkbp:runs_through | Docker containers
gptkbp:storage | 8 TB SSD
gptkbp:supports | AI workloads
gptkbp:bfsParent | gptkb:NVIDIA_DGX_Systems, gptkb:DGX_systems
gptkbp:bfsLayer | 5
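
The properties above follow an RDF-style scheme (note the rdf-schema#label entry). A minimal sketch of how a few of these property/value pairs could be loaded as triples with rdflib is shown below; the namespace URIs are illustrative assumptions, not the knowledge base's actual identifiers.

```python
# Sketch: representing a few of the DGX Station property/value pairs as RDF
# triples. The base URIs below are hypothetical placeholders for illustration.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDFS

GPTKB = Namespace("http://example.org/gptkb/")    # assumed base URI
GPTKBP = Namespace("http://example.org/gptkbp/")  # assumed base URI

g = Graph()
station = GPTKB["DGX_Station"]
g.add((station, RDFS.label, Literal("DGX Station")))
g.add((station, GPTKBP["produced_by"], GPTKB["NVIDIA"]))
g.add((station, GPTKBP["operating_system"], GPTKB["Ubuntu"]))

# Print the triples back out.
for s, p, o in g:
    print(s, p, o)
```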
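
The entry lists NVIDIA Tesla V100 GPUs, deep learning frameworks such as PyTorch, and Docker containers as the way workloads run. As a rough illustration, a quick check that a framework can see the system's GPUs (for example from inside an NGC framework container) might look like the generic PyTorch snippet below; it is not specific to this entry.

```python
# Sketch: confirming that PyTorch can see the machine's CUDA GPUs.
# Assumes PyTorch is installed with CUDA support; device names and counts
# depend on the actual system.
import torch

if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        print(f"GPU {i}: {torch.cuda.get_device_name(i)}")
else:
    print("No CUDA-capable GPU visible to PyTorch")
```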