gptkbp:instance_of
|
gptkb:Graphics_Processing_Unit
|
gptkbp:ai
|
Up to 125 TFLOPS (Tensor operations)
|
gptkbp:architecture
|
Volta
|
gptkbp:clock_speed
|
1380 MHz (boost clock, PCIe variant)
|
gptkbp:compatibility
|
gptkb:Linux
gptkb:Windows
|
gptkbp:deep_learning_performance
|
Up to 100x faster than CPU
|
gptkbp:form_factor
|
gptkb:PCIe
Dual-slot
SXM2
|
gptkbp:gpu
|
gptkb:Tesla
Compute
|
gptkbp:has_ability
|
7.0 (CUDA compute capability)
|
gptkbp:has_units
|
80 (streaming multiprocessors)
|
https://www.w3.org/2000/01/rdf-schema#label
|
NVIDIA Tesla V100 GPU
|
gptkbp:is_a_framework_for
|
gptkb:Tensor_Flow
gptkb:Caffe
gptkb:Chainer
gptkb:MXNet
gptkb:Py_Torch
|
gptkbp:manufacturer
|
gptkb:NVIDIA
|
gptkbp:manufacturing_process
|
12 nm
|
gptkbp:market_segment
|
gptkb:enterprise_solutions
|
gptkbp:memory_type
|
gptkb:HBM2
4096-bit bus width
|
gptkbp:network
|
Yes (NVLink interconnect)
|
gptkbp:number_of_cores
|
640 (Tensor Cores)
5120 (CUDA cores)
|
gptkbp:pciexpress_version
|
gptkb:3.0
|
gptkbp:performance
|
125 TFLOPS (Tensor)
15.7 TFLOPS (FP32)
7.8 TFLOPS (FP64)
|
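The three peak-throughput figures listed under gptkbp:performance can be cross-checked against the core counts above. A minimal sketch, assuming an SXM2 boost clock of 1530 MHz, 2560 FP64 cores (a 2:1 FP32:FP64 ratio on GV100), and 64 FMA operations per Tensor Core per clock (each FMA counted as 2 FLOPs) — none of these assumptions appear in the entry itself:

```python
# Back-of-envelope check of the V100 throughput figures listed above.
# Assumptions (not in the source entry): 1530 MHz SXM2 boost clock,
# FP64 cores = half the CUDA core count, 64 FMAs/Tensor Core/clock.

BOOST_CLOCK_GHZ = 1.53        # assumed SXM2 boost clock
CUDA_CORES = 5120             # from the entry above
TENSOR_CORES = 640            # from the entry above
FP64_CORES = CUDA_CORES // 2  # assumed 2:1 FP32:FP64 ratio on GV100

def tflops(units, flops_per_unit_per_clock, clock_ghz=BOOST_CLOCK_GHZ):
    """Peak TFLOPS = units x FLOPs-per-clock x clock (GHz) / 1000."""
    return units * flops_per_unit_per_clock * clock_ghz / 1000

fp32   = tflops(CUDA_CORES, 2)         # one FMA = 2 FLOPs per clock
fp64   = tflops(FP64_CORES, 2)
tensor = tflops(TENSOR_CORES, 64 * 2)  # 64 FMAs per Tensor Core per clock

print(f"FP32:   {fp32:.1f} TFLOPS")    # ~15.7
print(f"FP64:   {fp64:.1f} TFLOPS")    # ~7.8
print(f"Tensor: {tensor:.0f} TFLOPS")  # ~125
```

Under these assumptions the arithmetic reproduces all three listed values (15.7, 7.8, and 125 TFLOPS).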
gptkbp:power_connector
|
8-pin
|
gptkbp:powers
|
300 W
|
gptkbp:ram
|
16 GB or 32 GB HBM2
900 GB/s memory bandwidth
|
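The ~900 GB/s bandwidth listed under gptkbp:ram follows from the 4096-bit HBM2 bus listed under gptkbp:memory_type. A minimal sketch, assuming an effective per-pin data rate of about 1.75 Gbps (not stated in the entry):

```python
# Cross-check of the ~900 GB/s figure from the 4096-bit HBM2 bus.
# Assumption (not in the source entry): ~1.75 Gbps effective rate per pin.

BUS_WIDTH_BITS = 4096   # from the memory_type entry above
DATA_RATE_GBPS = 1.75   # assumed HBM2 per-pin data rate

bandwidth_gb_s = BUS_WIDTH_BITS * DATA_RATE_GBPS / 8  # bits -> bytes
print(f"{bandwidth_gb_s:.0f} GB/s")  # 896 GB/s, consistent with ~900 GB/s
```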
gptkbp:release_date
|
gptkb:2017
|
gptkbp:release_year
|
gptkb:2017
|
gptkbp:released
|
Discontinued
|
gptkbp:resolution
|
4096 x 2160
|
gptkbp:slisupport
|
No (uses NVLink rather than SLI)
|
gptkbp:successor
|
gptkb:NVIDIA_A100
|
gptkbp:support_for_direct_x
|
No
|
gptkbp:support_for_open_cl
|
gptkb:Yes
|
gptkbp:support_for_vulkan
|
No
|
gptkbp:target_market
|
Data centers
|
gptkbp:tdp
|
300 W
|
gptkbp:transistor_count
|
21.1 billion
|
gptkbp:use_case
|
gptkb:machine_learning
gptkb:Deep_Learning
High-Performance Computing
|
gptkbp:virtualization_support
|
gptkb:Yes
|
gptkbp:bfsParent
|
gptkb:Summit_supercomputer
|
gptkbp:bfsLayer
|
5
|