gptkbp:instance_of
|
gptkb:Graphics_Processing_Unit
|
gptkbp:ai
|
gptkb:Yes
|
gptkbp:aiinference_support
|
gptkb:Yes
|
gptkbp:aitraining_support
|
gptkb:Yes
|
gptkbp:application
|
gptkb:Computer_Vision
gptkb:Natural_Language_Processing
gptkb:machine_learning
gptkb:Data_Analytics
Scientific Computing
High Performance Computing
|
gptkbp:architecture
|
Volta
|
gptkbp:compatibility
|
CUDA, TensorRT, cuDNN
|
gptkbp:die_size
|
815 mm²
|
gptkbp:form_factor
|
gptkb:PCI_Express
gptkb:PCIe
Dual Slot
SXM2
|
gptkbp:gpu
|
gptkb:Yes
|
gptkbp:gpuarchitecture
|
Volta
|
gptkbp:has_ability
|
CUDA Compute Capability 7.0
|
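The value above is the CUDA Compute Capability exposed by Volta-class devices. A minimal sketch using the CUDA runtime API (assuming a system with the CUDA toolkit and at least one GPU installed) that confirms a V100 reports capability 7.0:

```cpp
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        // A Volta-class V100 reports compute capability 7.0
        printf("Device %d: %s, compute capability %d.%d, %d SMs, %.1f GiB\n",
               dev, prop.name, prop.major, prop.minor,
               prop.multiProcessorCount,
               prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
    }
    return 0;
}
```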
https://www.w3.org/2000/01/rdf-schema#label
|
NVIDIA V100 GPU
|
gptkbp:manufacturer
|
gptkb:NVIDIA
|
gptkbp:manufacturing_process
|
12 nm
|
gptkbp:market_segment
|
gptkb:enterprise_solutions
|
gptkbp:memory_type
|
HBM2 (4096-bit memory interface)
|
gptkbp:network
|
gptkb:Yes
|
gptkbp:number_of_cores
|
640 Tensor Cores
5120 CUDA cores
|
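The two core counts decompose over the 80 streaming multiprocessors enabled on the GV100 die, with 64 FP32 CUDA cores and 8 Tensor Cores per SM (per the Volta whitepaper). A short sketch of that arithmetic:

```cpp
#include <cstdio>

int main() {
    const int sms           = 80;  // SMs enabled on V100 (84 on the full GV100 die)
    const int fp32_per_sm   = 64;  // FP32 CUDA cores per SM
    const int tensor_per_sm = 8;   // Tensor Cores per SM
    printf("CUDA cores:   %d\n", sms * fp32_per_sm);    // 5120
    printf("Tensor cores: %d\n", sms * tensor_per_sm);  // 640
    return 0;
}
```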
gptkbp:nvswitch_support
|
gptkb:Yes
|
gptkbp:performance
|
125 TFLOPS (Tensor Core, mixed precision)
15.7 TFLOPS (FP32)
7.8 TFLOPS (FP64)
|
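These peak figures follow from the core counts above, assuming the 1530 MHz boost clock of the SXM2 variant (the PCIe card clocks slightly lower); each Tensor Core performs a 4x4x4 matrix FMA, i.e. 128 FLOPs per clock. A back-of-envelope check:

```cpp
#include <cstdio>

int main() {
    const double ghz          = 1.530;  // assumed SXM2 boost clock, GHz
    const double cuda_cores   = 5120;   // FP32 cores
    const double fp64_units   = 2560;   // FP64 units (32 per SM x 80 SMs)
    const double tensor_cores = 640;
    const double flops_per_tc = 128;    // 4x4x4 FMA = 64 MACs = 128 FLOPs/clock

    printf("FP32:   %.1f TFLOPS\n", cuda_cores * 2 * ghz / 1e3);             // ~15.7
    printf("FP64:   %.1f TFLOPS\n", fp64_units * 2 * ghz / 1e3);             // ~7.8
    printf("Tensor: %.1f TFLOPS\n", tensor_cores * flops_per_tc * ghz / 1e3); // ~125
    return 0;
}
```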
gptkbp:power_connector
|
8-pin
|
gptkbp:powers
|
300 W
|
gptkbp:predecessor
|
gptkb:NVIDIA_P100_GPU
|
gptkbp:ram
|
gptkb:HBM2
16 GB HBM2
900 GB/s memory bandwidth
|
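The 900 GB/s figure is consistent with the 4096-bit HBM2 interface listed under memory_type, assuming a per-pin data rate of roughly 1.75 Gb/s (the exact HBM2 clock differs slightly between V100 variants). A quick sketch:

```cpp
#include <cstdio>

int main() {
    const double bus_width_bits = 4096;
    const double gbps_per_pin   = 1.75;  // assumed HBM2 per-pin data rate
    printf("Bandwidth: %.0f GB/s\n", bus_width_bits * gbps_per_pin / 8);  // ~896
    return 0;
}
```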
gptkbp:release_date
|
gptkb:2017
|
gptkbp:release_notes
|
Volta Architecture Whitepaper
|
gptkbp:resolution
|
7680 × 4320
|
gptkbp:slisupport
|
gptkb:Yes
|
gptkbp:successor
|
gptkb:NVIDIA_A100_GPU
|
gptkbp:target_market
|
Data Centers
|
gptkbp:tdp
|
300 W
|
gptkbp:virtualization_support
|
gptkb:Yes
|
gptkbp:bfsParent
|
gptkb:NVIDIA
|
gptkbp:bfsLayer
|
4
|