|
gptkbp:instanceOf
|
gptkb:deep_learning_inference_optimizer
|
|
gptkbp:category
|
gptkb:artificial_intelligence
gptkb:software
|
|
gptkbp:developer
|
gptkb:NVIDIA
|
|
gptkbp:documentation
|
https://docs.nvidia.com/deeplearning/tensorrt/
|
|
gptkbp:feature
|
dynamic tensor memory
kernel auto-tuning
layer fusion
multi-stream execution
FP16 support
DLA support
INT8 support
precision calibration
|
|
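The "INT8 support" and "precision calibration" features listed above refer to mapping FP32 activations onto 8-bit integers using scales derived from a calibration pass over representative data. A minimal illustrative sketch of symmetric "max" calibration in pure Python (conceptual only; TensorRT's built-in calibrators use more sophisticated entropy-based methods):

```python
def compute_scale(calibration_values, int8_max=127):
    """Symmetric 'max' calibration: the scale maps the largest observed
    absolute activation value onto the INT8 range [-127, 127]."""
    abs_max = max(abs(v) for v in calibration_values)
    return abs_max / int8_max

def quantize(x, scale, int8_max=127):
    """Quantize a float to INT8, clamping to the representable range."""
    q = round(x / scale)
    return max(-int8_max, min(int8_max, q))

def dequantize(q, scale):
    """Recover an approximate float from its INT8 code."""
    return q * scale

# Calibration pass over a batch of representative activations
activations = [0.02, -1.5, 0.73, 2.54, -0.9]
scale = compute_scale(activations)  # 2.54 / 127, approximately 0.02

# Round-trip quantization error is bounded by about scale / 2
error = abs(dequantize(quantize(0.7, scale), scale) - 0.7)
assert error <= scale / 2 + 1e-12
```

The quality of the calibration data directly determines the scale, which is why TensorRT asks the user to supply representative input batches when building INT8 engines.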
gptkbp:firstReleased
|
2017
|
|
gptkbp:integratesWith
|
gptkb:CUDA
gptkb:NVIDIA_Triton_Inference_Server
gptkb:cuDNN
gptkb:NVIDIA_DeepStream
gptkb:PyTorch-TensorRT
gptkb:TensorFlow-TensorRT_(TF-TRT)
|
|
gptkbp:latestReleaseVersion
|
10.0
|
|
gptkbp:license
|
proprietary
|
|
gptkbp:operatingSystem
|
gptkb:Windows
gptkb:Linux
|
|
gptkbp:platform
|
gptkb:NVIDIA_GPUs
|
|
gptkbp:programmingLanguage
|
gptkb:Python
gptkb:C++
|
|
gptkbp:purpose
|
high-performance deep learning inference
|
|
gptkbp:supportsFormat
|
gptkb:TensorFlow
gptkb:PyTorch
gptkb:ONNX
|
|
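ONNX is TensorRT's native model-import format (TensorFlow and PyTorch models are typically routed through TF-TRT, Torch-TensorRT, or an ONNX export). A hedged sketch of the standard ONNX-to-engine build flow with the TensorRT Python API; `model.onnx` and the output path are placeholders, and actually building an engine requires an NVIDIA GPU:

```python
# Sketch only: assumes the NVIDIA `tensorrt` Python package is installed
# and an ONNX model file exists at "model.onnx".
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(0)  # explicit-batch network
parser = trt.OnnxParser(network, logger)

# Parse the ONNX model into the TensorRT network definition
with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("ONNX parse failed")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)  # allow FP16 kernels where beneficial

# Build and save the serialized engine for later deployment
engine_bytes = builder.build_serialized_network(network, config)
with open("model.engine", "wb") as f:
    f.write(engine_bytes)
```

The serialized engine is hardware-specific: kernel auto-tuning selects the fastest kernels for the GPU the engine was built on, so engines are generally rebuilt per target device.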
gptkbp:usedFor
|
AI inference acceleration
|
|
gptkbp:usedIn
|
autonomous vehicles
data centers
robotics
edge devices
|
|
gptkbp:website
|
https://developer.nvidia.com/tensorrt
|
|
gptkbp:bfsParent
|
gptkb:NVIDIA_Jetson
gptkb:NVIDIA_DeepStream
gptkb:NVIDIA_TensorRT
gptkb:NVIDIA_Triton_Inference_Server
|
|
gptkbp:bfsLayer
|
6
|
|
https://www.w3.org/2000/01/rdf-schema#label
|
TensorRT
|