gptkbp:instance_of
|
gptkb:Graphics_Processing_Unit
|
gptkbp:ai
|
30 TFLOPS
|
gptkbp:architecture
|
Volta
|
gptkbp:clock_speed
|
1380 MHz
|
gptkbp:compatibility
|
CUDA 9.0 and above
|
gptkbp:die_size
|
815 mm²
|
gptkbp:form_factor
|
gptkb:PCIe
Dual-slot
SXM2
SXM3
|
gptkbp:gpu
|
gptkb:Tesla
gptkb:HBM2
Server GPU
|
gptkbp:has_ability
|
Compute capability 7.0
|
gptkbp:has_units
|
80 streaming multiprocessors (SMs)
|
https://www.w3.org/2000/01/rdf-schema#label
|
NVIDIA Tesla V100
|
gptkbp:is_a_framework_for
|
gptkb:Tensor_Flow
gptkb:Caffe
gptkb:Py_Torch
|
gptkbp:launch_event
|
gptkb:NVIDIA_GPU_Technology_Conference_2017
|
gptkbp:manufacturer
|
gptkb:NVIDIA
|
gptkbp:market_launch
|
gptkb:2017
|
gptkbp:market_segment
|
AI and Deep Learning
|
gptkbp:memory_type
|
HBM2 (4096-bit bus)
|
gptkbp:network
|
gptkb:Yes
|
gptkbp:number_of_cores
|
5120 CUDA cores
640 Tensor cores
|
gptkbp:performance
|
gptkb:Data_Analytics
Scientific Computing
AI Inference
AI Training
15.7 FP32 TFLOPS
7.8 FP64 TFLOPS
31.4 FP16 TFLOPS
|
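The throughput figures above are consistent with the listed core count and clock: each CUDA core retires one fused multiply-add (2 FLOPs) per cycle, so peak FP32 throughput is cores x 2 x clock. A minimal sketch, assuming the 1530 MHz SXM2 boost clock (the 1380 MHz clock listed here is the PCIe variant's boost):

```python
# Back-of-the-envelope peak FP32 throughput for the V100.
# Each CUDA core performs one fused multiply-add (2 FLOPs) per clock cycle.

def peak_fp32_tflops(cuda_cores: int, boost_mhz: float) -> float:
    """Peak FP32 TFLOPS = cores x 2 FLOPs/cycle x boost clock."""
    return cuda_cores * 2 * boost_mhz * 1e6 / 1e12

# PCIe card at the 1380 MHz boost clock listed above:
print(round(peak_fp32_tflops(5120, 1380), 1))  # 14.1
# SXM2 variant boosts to 1530 MHz, yielding the 15.7 TFLOPS figure:
print(round(peak_fp32_tflops(5120, 1530), 1))  # 15.7
```

FP16 peak is twice the FP32 figure, and FP64 is half, matching the ratios in the listing.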
gptkbp:power_connector
|
8-pin PCIe
|
gptkbp:powers
|
300 W
|
gptkbp:provides_support_for
|
gptkb:NVIDIA_CUDA_Toolkit
|
gptkbp:ram
|
16 GB HBM2 (900 GB/s memory bandwidth)
|
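The 900 GB/s bandwidth figure follows from the 4096-bit HBM2 bus: bandwidth is the bus width in bytes times the per-pin data rate. A quick sketch, assuming a per-pin rate of roughly 1.75 Gbit/s (the approximate rate of the V100's HBM2 stacks; NVIDIA rounds the result to 900 GB/s):

```python
# Memory bandwidth from bus width and per-pin data rate.

def hbm2_bandwidth_gbs(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Approximate memory bandwidth in GB/s = bytes per transfer x rate."""
    return bus_width_bits / 8 * pin_rate_gbps

# 4096-bit bus at ~1.75 Gbit/s per pin (assumed figure):
print(round(hbm2_bandwidth_gbs(4096, 1.75)))  # 896, quoted as ~900 GB/s
```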
gptkbp:release_date
|
gptkb:2017
|
gptkbp:release_notes
|
gptkb:NVIDIA_Volta_Architecture
|
gptkbp:slisupport
|
gptkb:Yes
|
gptkbp:successor
|
gptkb:NVIDIA_A100
|
gptkbp:target_market
|
Data centers
|
gptkbp:tdp
|
300 W
|
gptkbp:transistor_count
|
21.1 billion
|
gptkbp:use_case
|
gptkb:machine_learning
gptkb:Deep_Learning
High-Performance Computing
|
gptkbp:virtualization_support
|
gptkb:Yes
|
gptkbp:bfsParent
|
gptkb:NVIDIA
|
gptkbp:bfsLayer
|
4
|