gptkbp:instance_of
|
gptkb:microprocessor
|
gptkbp:ai
|
gptkb:NVIDIA_Deep_Learning_SDK
|
gptkbp:ai_inference
|
optimized
|
gptkbp:ai_training
|
optimized
|
gptkbp:application
|
gptkb:medical_imaging
gptkb:autonomous_vehicles
gptkb:robotics
gptkb:AI_technology
computer vision
natural language processing
|
gptkbp:architecture
|
gptkb:NVIDIA_Pascal
|
gptkbp:cloud_integration
|
yes
|
gptkbp:collaboration
|
gptkb:NVIDIA_NGC
|
gptkbp:community
|
active
|
gptkbp:compatibility
|
NVIDIA DGX software stack
|
gptkbp:cooling_system
|
air-cooled
|
gptkbp:country
|
gptkb:United_States
|
gptkbp:developed_by
|
gptkb:NVIDIA
|
gptkbp:dimensions
|
3.5 x 17.5 x 30 inches
|
gptkbp:features
|
8 NVIDIA Tesla P100 GPUs
|
gptkbp:form_factor
|
rack-mounted
2U rackmount
|
gptkbp:energy_efficiency
|
high
|
gptkbp:gpu_architecture
|
Pascal
|
gptkbp:has_programs
|
gptkb:NVIDIA_GPU_Cloud
|
gptkbp:has_research_center
|
yes
|
https://www.w3.org/2000/01/rdf-schema#label
|
DGX-1
|
gptkbp:supports_frameworks
|
gptkb:TensorFlow
gptkb:Caffe
gptkb:MXNet
gptkb:PyTorch
|
gptkbp:is_scalable
|
high
|
gptkbp:market_segment
|
high-performance computing
|
gptkbp:network
|
10 GbE
|
gptkbp:operating_system
|
gptkb:Ubuntu_Linux
|
gptkbp:part_of
|
gptkb:NVIDIA_DGX_family
|
gptkbp:performance
|
high performance in AI tasks
up to 170 teraflops
|
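The 170-teraflop figure is consistent with the eight Tesla P100 GPUs listed above; a quick arithmetic check (21.2 TFLOPS is NVIDIA's published FP16 peak for one P100 with NVLink):

```python
# Peak FP16 throughput of one Tesla P100 (NVLink variant), in teraflops.
P100_FP16_TFLOPS = 21.2
NUM_GPUS = 8  # the DGX-1 carries eight P100s

total = P100_FP16_TFLOPS * NUM_GPUS
print(f"{total:.1f} TFLOPS")  # 169.6, i.e. "up to 170 teraflops"
```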
gptkbp:platforms
|
gptkb:Kubernetes
gptkb:Docker
|
gptkbp:power_consumption
|
3.2 kW
|
gptkbp:power_input
|
100-240 V AC
|
gptkbp:price
|
approximately $129,000 (2016 launch price)
|
gptkbp:ram
|
512 GB
|
gptkbp:released_in
|
gptkb:2016
|
gptkbp:security_features
|
hardware-based security
|
gptkbp:storage
|
4x 1.92 TB SSD (RAID 0)
|
gptkbp:successor
|
gptkb:DGX-2
|
gptkbp:supports
|
gptkb:NVIDIA_CUDA
|
gptkbp:target_audience
|
research institutions
|
gptkbp:target_market
|
enterprise AI
|
gptkbp:user_interface
|
remote management
|
gptkbp:uses
|
deep learning
|
gptkbp:virtualization_support
|
yes
|
gptkbp:warranty
|
3 years
|
gptkbp:weight
|
approximately 60 kg
|
gptkbp:bfsParent
|
gptkb:NVIDIA_DGX_systems
|
gptkbp:bfsLayer
|
5
|
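The entry above uses a simple pipe-delimited layout: a predicate line (`gptkbp:…` or a full IRI), a `|` separator, one value per line, and a closing `|`. A minimal parser sketch for this layout (the function name and layout assumptions are mine, not part of the KB):

```python
def parse_entry(text: str) -> dict[str, list[str]]:
    """Parse a pipe-delimited predicate/value listing into a dict.

    Assumes the layout used above: a predicate line, '|', one or more
    value lines, then '|' before the next predicate.
    """
    props: dict[str, list[str]] = {}
    current = None
    for line in text.splitlines():
        line = line.strip()
        if not line or line == "|":
            continue  # skip blanks and separators
        if line.startswith("gptkbp:") or line.startswith("https://"):
            current = line          # new predicate
            props[current] = []
        elif current is not None:
            props[current].append(line)  # value for current predicate
    return props

sample = """gptkbp:developed_by
|
gptkb:NVIDIA
|
gptkbp:ram
|
512 GB
|"""
print(parse_entry(sample))
# {'gptkbp:developed_by': ['gptkb:NVIDIA'], 'gptkbp:ram': ['512 GB']}
```

Note that value identifiers use the `gptkb:` prefix while predicates use `gptkbp:`, which is what lets the sketch distinguish the two without lookahead.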