gptkbp:instanceOf
|
gptkb:graphics_card
|
gptkbp:architecture
|
gptkb:Volta
|
gptkbp:boostClock
|
1380 MHz
|
gptkbp:coreCount
|
5120 CUDA cores
640 Tensor Cores
|
gptkbp:formFactor
|
PCIe card
SXM2 module
|
https://www.w3.org/2000/01/rdf-schema#label
|
NVIDIA V100 GPU
|
gptkbp:interface
|
gptkb:NVLink
gptkb:PCIe_3.0
|
gptkbp:manufacturer
|
gptkb:NVIDIA
|
gptkbp:memoryBandwidth
|
900 GB/s
|
gptkbp:memoryBusWidth
|
4096 bit
|
gptkbp:memoryType
|
gptkb:HBM2
|
gptkbp:NVLinkBandwidth
|
300 GB/s
|
gptkbp:peakFP32Performance
|
14 TFLOPS
|
gptkbp:peakFP64Performance
|
7 TFLOPS
|
gptkbp:peakTensorPerformance
|
112 TFLOPS
|
gptkbp:predecessor
|
gptkb:NVIDIA_P100_GPU
|
gptkbp:processNode
|
12 nm
|
gptkbp:productType
|
gptkb:Tesla
|
gptkbp:RAM
|
16 GB
32 GB
|
gptkbp:releaseDate
|
2017
|
gptkbp:speed
|
1230 MHz (base clock)
|
gptkbp:successor
|
gptkb:NVIDIA_A100_GPU
|
gptkbp:supports
|
gptkb:CUDA
gptkb:TensorFlow
gptkb:NVLink
gptkb:Deep_Learning_Accelerator
gptkb:DirectCompute
gptkb:OpenCL
gptkb:PyTorch
gptkb:Tensor_Cores
Virtualization
FP16
FP32
FP64
Multi-GPU
|
gptkbp:supportsECCMemory
|
supported
|
gptkbp:targetMarket
|
gptkb:artificial_intelligence
gptkb:cloud_service
high performance computing
|
gptkbp:TDP
|
250 W
|
gptkbp:transistorCount
|
21.1 billion
|
gptkbp:usedIn
|
gptkb:machine_learning
cloud computing
supercomputers
|
gptkbp:bfsParent
|
gptkb:NVIDIA_Virtual_Server
gptkb:NVLink
|
gptkbp:bfsLayer
|
6
|
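The peak-throughput figures in this record follow from its own core counts and the 1380 MHz boost clock. A minimal sketch, assuming Volta's standard 2:1 FP32:FP64 core ratio and 64 FMAs per Tensor Core per clock (both from NVIDIA's Volta documentation, not stated in the record itself):

```python
# Derive the record's peak TFLOPS figures from core counts and boost clock.
# One fused multiply-add (FMA) counts as 2 FLOPs per cycle, hence the 2x.

BOOST_CLOCK_HZ = 1380e6        # record: boostClock 1380 MHz
CUDA_CORES = 5120              # record: coreCount 5120
FP64_CORES = CUDA_CORES // 2   # assumption: Volta's 2:1 FP32:FP64 ratio
TENSOR_CORES = 640             # record: coreCount 640
FMA_PER_TENSOR_CORE = 64       # assumption: one 4x4x4 matrix FMA per clock

fp32_tflops = CUDA_CORES * 2 * BOOST_CLOCK_HZ / 1e12
fp64_tflops = FP64_CORES * 2 * BOOST_CLOCK_HZ / 1e12
tensor_tflops = TENSOR_CORES * FMA_PER_TENSOR_CORE * 2 * BOOST_CLOCK_HZ / 1e12

print(f"FP32:   {fp32_tflops:.1f} TFLOPS")    # ~14.1, record rounds to 14
print(f"FP64:   {fp64_tflops:.1f} TFLOPS")    # ~7.1, record rounds to 7
print(f"Tensor: {tensor_tflops:.1f} TFLOPS")  # ~113.0, record rounds to 112
```

The close agreement confirms the record's 14 / 7 / 112 TFLOPS values are consistent with its 5120-core, 640-Tensor-Core, 1380 MHz specification.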