gptkbp:instance_of
|
gptkb:Graphics_Processing_Unit
|
gptkbp:ai
|
gptkb:Tensor_Flow
gptkb:Keras
gptkb:Caffe
gptkb:DL4_J
gptkb:CNTK
gptkb:Chainer
gptkb:MXNet
gptkb:Paddle_Paddle
gptkb:Py_Torch
gptkb:Theano
|
gptkbp:architecture
|
Volta
|
gptkbp:availability
|
Widely Available
|
gptkbp:compatibility
|
gptkb:Linux
gptkb:Windows
|
gptkbp:cooling_system
|
Active Cooling
|
gptkbp:die_size
|
815 mm²
|
gptkbp:ecc
|
gptkb:Yes
|
gptkbp:form_factor
|
gptkb:PCIe
SXM2
|
gptkbp:gpu
|
gptkb:Yes
|
gptkbp:gpuarchitecture
|
Volta
|
gptkbp:has_ability
|
7.0 (CUDA compute capability)
|
gptkbp:has_units
|
80 (streaming multiprocessors; arithmetic check below)
|
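The unit count above is consistent with the core counts listed under gptkbp:number_of_cores. A quick arithmetic check, assuming the standard Volta SM layout of 64 FP32 CUDA cores and 8 Tensor Cores per SM:

\[
80 \times 64 = 5120 \ \text{CUDA cores}, \qquad 80 \times 8 = 640 \ \text{Tensor Cores}.
\]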
https://www.w3.org/2000/01/rdf-schema#label
|
NVIDIA V100 GPUs
|
gptkbp:manufacturer
|
gptkb:NVIDIA
|
gptkbp:manufacturing_process
|
12 nm
|
gptkbp:market_segment
|
gptkb:enterprise_solutions
|
gptkbp:memory_type
|
gptkb:HBM2
|
gptkbp:network
|
gptkb:Yes
|
gptkbp:number_of_cores
|
5120 (CUDA cores)
640 (Tensor Cores)
|
gptkbp:pciexpress_version
|
gptkb:3.0
|
gptkbp:performance
|
125 TFLOPS (Tensor, mixed precision; derivation sketched below)
15.7 TFLOPS (FP32)
|
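A sketch of where these two figures come from, assuming the SXM2 boost clock of roughly 1.53 GHz (the clock value is an assumption, not stated in this entry): each CUDA core retires one FMA (2 FLOPs) per cycle, and each Tensor Core retires 64 FMAs (128 FLOPs) per cycle.

\[
5120 \times 2 \times 1.53\,\text{GHz} \approx 15.7\ \text{TFLOPS (FP32)},
\]
\[
640 \times 128 \times 1.53\,\text{GHz} \approx 125\ \text{TFLOPS (mixed-precision Tensor)}.
\]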
gptkbp:power_connector
|
8-pin
|
gptkbp:price
|
gptkb:High
|
gptkbp:ram
|
gptkb:HBM2
16 GB HBM2
900 GB/s (memory bandwidth; see the note below)
|
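The 900 GB/s figure follows from the HBM2 interface width. As a rough check, assuming the V100's 4096-bit memory bus and a per-pin data rate of about 1.76 Gb/s (both values are assumptions for this sketch, not taken from this entry):

\[
\frac{4096\ \text{bit} \times 1.76\ \text{Gb/s}}{8\ \text{bit/byte}} \approx 900\ \text{GB/s}.
\]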
gptkbp:release_date
|
gptkb:2017
|
gptkbp:successor
|
gptkb:NVIDIA_A100
|
gptkbp:support
|
gptkb:NVIDIA_CUDA
gptkb:NVIDIA_Tensor_RT
gptkb:NVIDIA_cu_DNN
gptkb:NVIDIA_NCCL
|
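The CUDA support listed above can be sanity-checked at runtime with the CUDA runtime API. The sketch below is a minimal, hypothetical check_v100.cu (the file name and the printed thresholds are assumptions, not from this entry) that queries device 0 and compares it against the figures in this entry (compute capability 7.0, 80 SMs, 16 GB HBM2); compile with nvcc -arch=sm_70.

// check_v100.cu -- minimal sketch: report device properties and flag
// anything that does not match a Volta-class (sm_70) part.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        std::printf("No CUDA device visible.\n");
        return 1;
    }

    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);  // inspect device 0

    double mem_gib = prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0);
    std::printf("Device            : %s\n", prop.name);
    std::printf("Compute capability: %d.%d\n", prop.major, prop.minor);
    std::printf("Multiprocessors   : %d\n", prop.multiProcessorCount);
    std::printf("Global memory     : %.1f GiB\n", mem_gib);

    // V100 is compute capability 7.0 with 80 SMs; warn otherwise.
    if (prop.major != 7 || prop.minor != 0) {
        std::printf("Note: not a compute-capability 7.0 (Volta) device.\n");
    }
    return 0;
}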
gptkbp:target_market
|
Data Centers
|
gptkbp:tdp
|
300 W
|
gptkbp:transistor_count
|
21 billion
|
gptkbp:use_case
|
gptkb:machine_learning
gptkb:Deep_Learning
High Performance Computing
|
gptkbp:virtualization_support
|
gptkb:Yes
|
gptkbp:bfsParent
|
gptkb:A100_Tensor_Core_GPU
|
gptkbp:bfsLayer
|
5
|