gptkbp:instanceOf
|
gptkb:library
|
gptkbp:collectiveOperation
|
broadcast
reduce
all-gather
all-reduce
reduce-scatter
|
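The five collective operations listed above can be illustrated with a minimal pure-Python sketch of their semantics over per-rank buffers. This is an illustration of what each collective computes, not NCCL's actual implementation (NCCL runs ring/tree algorithms over NVLink/PCIe/network links); the function names and list-based "ranks" are assumptions for the sketch.

```python
# Semantics of NCCL's collectives, sketched over per-rank Python lists.
# Each function takes a list of buffers (one per rank) and returns the
# list of buffers every rank would hold afterwards.

def broadcast(bufs, root=0):
    # Every rank ends up with a copy of the root rank's buffer.
    return [list(bufs[root]) for _ in bufs]

def reduce(bufs, root=0):
    # Element-wise sum, delivered to the root rank only.
    total = [sum(vals) for vals in zip(*bufs)]
    return [total if r == root else list(b) for r, b in enumerate(bufs)]

def all_gather(bufs):
    # Every rank receives the concatenation of all ranks' buffers.
    concat = [x for b in bufs for x in b]
    return [list(concat) for _ in bufs]

def all_reduce(bufs):
    # Element-wise sum, delivered to every rank (reduce + broadcast).
    total = [sum(vals) for vals in zip(*bufs)]
    return [list(total) for _ in bufs]

def reduce_scatter(bufs):
    # Element-wise sum, split so rank r keeps only chunk r of the result.
    n = len(bufs)
    total = [sum(vals) for vals in zip(*bufs)]
    chunk = len(total) // n
    return [total[r * chunk:(r + 1) * chunk] for r in range(n)]

if __name__ == "__main__":
    ranks = [[1, 2], [3, 4]]       # two "GPUs", two elements each
    print(all_reduce(ranks))       # [[4, 6], [4, 6]]
    print(reduce_scatter(ranks))   # [[4], [6]]
```

All-reduce is the workhorse of data-parallel deep learning (gradient averaging); reduce-scatter and all-gather together form its decomposition used by sharded optimizers.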
gptkbp:developer
|
gptkb:NVIDIA
|
gptkbp:documentation
|
https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/
|
gptkbp:firstReleased
|
2016
|
gptkbp:fullName
|
gptkb:NVIDIA_Collective_Communications_Library
|
https://www.w3.org/2000/01/rdf-schema#label
|
NCCL
|
gptkbp:latestReleaseVersion
|
2.19.3
|
gptkbp:license
|
gptkb:BSD_3-Clause
|
gptkbp:operatingSystem
|
gptkb:Linux
|
gptkbp:programmingLanguage
|
gptkb:CUDA
gptkb:C++
|
gptkbp:purpose
|
multi-GPU communication
distributed deep learning
|
gptkbp:repository
|
https://github.com/NVIDIA/nccl
|
gptkbp:supports
|
gptkb:NVIDIA_GPUs
collective operations
multi-node communication
|
gptkbp:usedBy
|
gptkb:TensorFlow
gptkb:MXNet
gptkb:DeepSpeed
gptkb:PyTorch
gptkb:Horovod
|
gptkbp:website
|
https://developer.nvidia.com/nccl
|
gptkbp:bfsParent
|
gptkb:NVIDIA_AI_Accelerated_Libraries
gptkb:NVIDIA_AI_Research
gptkb:NVIDIA_CUDA
gptkb:NVIDIA
|
gptkbp:bfsLayer
|
6
|