gptkbp:instanceOf
|
collective communications library
|
gptkbp:abbreviation
|
gptkb:NVIDIA_Collective_Communications_Library
|
gptkbp:communicationBackendFor
|
distributed deep learning frameworks
|
gptkbp:developer
|
gptkb:NVIDIA
|
gptkbp:enables
|
broadcast
reduce
all-gather
all-reduce
collective operations
reduce-scatter
|
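The collective operations listed above are issued through NCCL's C API. A minimal sketch of a single-process, multi-GPU all-reduce follows, modeled on the pattern in NCCL's documentation; the device count of 4 is an assumption, and error checking is omitted. Requires NVIDIA GPUs, CUDA, and NCCL to build and run.

```c
/* Sketch: sum all-reduce across 4 GPUs in one process (assumed setup). */
#include <nccl.h>
#include <cuda_runtime.h>

int main(void) {
  const int nDev = 4;            /* assumption: 4 local GPUs */
  const size_t count = 1024;     /* elements per buffer */
  int devs[4] = {0, 1, 2, 3};
  ncclComm_t comms[4];
  float *sendbuff[4], *recvbuff[4];
  cudaStream_t streams[4];

  /* Allocate a send/recv buffer and a stream on each GPU. */
  for (int i = 0; i < nDev; ++i) {
    cudaSetDevice(devs[i]);
    cudaMalloc((void **)&sendbuff[i], count * sizeof(float));
    cudaMalloc((void **)&recvbuff[i], count * sizeof(float));
    cudaStreamCreate(&streams[i]);
  }

  /* One communicator per GPU, all owned by this process. */
  ncclCommInitAll(comms, nDev, devs);

  /* Group the per-GPU calls so they form one collective. */
  ncclGroupStart();
  for (int i = 0; i < nDev; ++i)
    ncclAllReduce(sendbuff[i], recvbuff[i], count,
                  ncclFloat, ncclSum, comms[i], streams[i]);
  ncclGroupEnd();

  /* Wait for completion, then clean up. */
  for (int i = 0; i < nDev; ++i) {
    cudaSetDevice(devs[i]);
    cudaStreamSynchronize(streams[i]);
    ncclCommDestroy(comms[i]);
  }
  return 0;
}
```

After the call, every GPU's `recvbuff` holds the element-wise sum of all four send buffers; the other collectives (broadcast, reduce, all-gather, reduce-scatter) follow the same issue-on-stream pattern.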
gptkbp:firstReleased
|
2016
|
https://www.w3.org/2000/01/rdf-schema#label
|
NVIDIA NCCL
|
gptkbp:integratesWith
|
gptkb:TensorFlow
gptkb:MXNet
gptkb:PyTorch
gptkb:Horovod
|
gptkbp:latestReleaseVersion
|
2.19.3 (2024-03-19)
|
gptkbp:license
|
gptkb:BSD_3-Clause_License
|
gptkbp:operatingSystem
|
gptkb:Linux
|
gptkbp:optimizedFor
|
gptkb:NVIDIA_GPUs
|
gptkbp:programmingLanguage
|
gptkb:CUDA
gptkb:C++
|
gptkbp:supports
|
multi-GPU communication
multi-node communication
|
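For multi-node communication, each process initializes one communicator rank with a shared unique id. A hedged sketch, assuming MPI is available to exchange the id out of band and that each node has 4 GPUs; error handling omitted:

```c
/* Sketch: one NCCL rank per process across nodes (MPI assumed for id exchange). */
#include <nccl.h>
#include <cuda_runtime.h>
#include <mpi.h>

int main(int argc, char **argv) {
  int rank, nranks;
  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &nranks);

  /* Rank 0 creates the unique id; all ranks must receive it. */
  ncclUniqueId id;
  if (rank == 0) ncclGetUniqueId(&id);
  MPI_Bcast(&id, sizeof(id), MPI_BYTE, 0, MPI_COMM_WORLD);

  cudaSetDevice(rank % 4);   /* assumption: 4 GPUs per node */

  ncclComm_t comm;
  ncclCommInitRank(&comm, nranks, id, rank);

  /* ... issue collectives on `comm` ... */

  ncclCommDestroy(comm);
  MPI_Finalize();
  return 0;
}
```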
gptkbp:usedFor
|
deep learning
high-performance computing
|
gptkbp:website
|
https://developer.nvidia.com/nccl
|
gptkbp:bfsParent
|
gptkb:NVIDIA_DGX-1
gptkb:H100
gptkb:Nvidia_H100
gptkb:NVIDIA_A100_GPU
gptkb:NVIDIA_H100
gptkb:NVIDIA_H100_GPU
|
gptkbp:bfsLayer
|
7
|