gptkbp:instance_of
|
gptkb:Graphics_Processing_Unit
|
gptkbp:ai
|
gptkb:Yes
|
gptkbp:application
|
gptkb:Autonomous_Vehicles
gptkb:Computer_Vision
gptkb:Natural_Language_Processing
gptkb:simulation
gptkb:Data_Analytics
gptkb:robotics
Scientific Computing
Financial Modeling
|
gptkbp:architecture
|
Volta
|
gptkbp:cloud_integration
|
gptkb:Yes
|
gptkbp:cuda_support
|
gptkb:Yes
|
gptkbp:die_size
|
815 mm²
|
gptkbp:form_factor
|
gptkb:PCIe
Full-height, Full-length
|
gptkbp:gpu
|
gptkb:Yes
gptkb:HBM2
|
gptkbp:gpu_virtualization
|
gptkb:NVIDIA_v_GPU
|
gptkbp:has_ability
|
7.0 (CUDA compute capability)
|
https://www.w3.org/2000/01/rdf-schema#label
|
Tesla V100
|
gptkbp:is_a_framework_for
|
gptkb:Tensor_Flow
gptkb:Caffe
gptkb:Chainer
gptkb:MXNet
gptkb:Py_Torch
|
gptkbp:is_supported_by
|
gptkb:Yes
|
gptkbp:manufacturer
|
gptkb:NVIDIA
|
gptkbp:manufacturing_process
|
12 nm
|
gptkbp:network
|
gptkb:Yes
|
gptkbp:number_of_cores
|
640 (Tensor Cores)
5120 (CUDA cores)
|
gptkbp:open_glsupport
|
gptkb:Yes
|
gptkbp:pciexpress_version
|
gptkb:3.0
|
gptkbp:performance
|
15.7 TFLOPS (FP32)
7.8 TFLOPS (FP64)
|
gptkbp:power_connector
|
8-pin
|
gptkbp:powers
|
300 W
|
gptkbp:ram
|
16 GB HBM2
900 GB/s (memory bandwidth)
|
gptkbp:release_date
|
gptkb:2017
|
gptkbp:resolution
|
4096 x 2160
|
gptkbp:slisupport
|
gptkb:No
|
gptkbp:successor
|
gptkb:NVIDIA_A100
|
gptkbp:target_market
|
Data centers
|
gptkbp:tdp
|
300 W
|
gptkbp:transistor_count
|
21 billion
|
gptkbp:use_case
|
gptkb:machine_learning
gptkb:Deep_Learning
High-Performance Computing
|
gptkbp:virtualization_support
|
gptkb:Yes
|
gptkbp:vram_type
|
gptkb:HBM2
|
gptkbp:bfsParent
|
gptkb:Volta_architecture
gptkb:NVIDIA_GPUs
|
gptkbp:bfsLayer
|
5
|
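The record above follows a simple pipe-delimited layout: a predicate line, a `|` separator, one or more value lines, then a closing `|` before the next predicate. A minimal sketch of a parser for that layout, assuming exactly this alternating predicate/value-group structure (the `parse_kb` name is hypothetical, not part of any gptkb tooling):

```python
def parse_kb(text: str) -> dict:
    """Parse a pipe-delimited predicate/value dump into {predicate: [values]}."""
    lines = [ln.strip() for ln in text.splitlines() if ln.strip()]
    record, pred, values = {}, None, []
    state = "pred"  # cycle: pred -> sep -> values -> (closing |) -> pred ...
    for ln in lines:
        if ln == "|":
            if state == "sep":
                state = "values"          # the separator right after a predicate
            else:
                record[pred] = values     # the separator closing a value group
                pred, values, state = None, [], "pred"
        elif state == "pred":
            pred, state = ln, "sep"
        else:
            values.append(ln)
    return record


sample = """gptkbp:architecture
|
Volta
|
gptkbp:tdp
|
300 W
|"""

entry = parse_kb(sample)
print(entry["gptkbp:architecture"])   # ['Volta']
```

Multi-valued predicates (such as `gptkbp:application` above) come back as lists in document order, so single-valued entries still need a `[0]` index when read back.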