TensorRT

GPTKB entity

Statements (40)
Predicate Object
gptkbp:instanceOf deep learning inference optimizer and runtime
gptkbp:category gptkb:artificial_intelligence
gptkb:software
gptkbp:developer gptkb:NVIDIA
gptkbp:documentation https://docs.nvidia.com/deeplearning/tensorrt/
gptkbp:feature dynamic tensor memory
kernel auto-tuning
layer fusion
multi-stream execution
FP16 support
DLA support
INT8 support
precision calibration
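The precision-related features above (FP16 support, INT8 support, precision calibration) are exposed as builder-configuration flags in the TensorRT Python API. The following is a minimal sketch that builds a trivial one-layer network with FP16 enabled; it is illustrative only, and the network-creation flags differ between TensorRT 8.x and 10.x.

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
# Explicit-batch flag is required on TensorRT 8.x; newer releases default to it.
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))

# Trivial network: one input and one ReLU activation, purely for illustration.
x = network.add_input("x", trt.float32, (1, 3, 224, 224))
relu = network.add_activation(x, trt.ActivationType.RELU)
network.mark_output(relu.get_output(0))

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)   # allow FP16 kernels where they are faster
# config.set_flag(trt.BuilderFlag.INT8) # INT8 additionally needs a calibrator or Q/DQ layers

# Layer fusion and kernel auto-tuning happen inside this build call.
serialized_engine = builder.build_serialized_network(network, config)
```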
gptkbp:firstReleased 2017
https://www.w3.org/2000/01/rdf-schema#label TensorRT
gptkbp:integratesWith gptkb:CUDA
gptkb:NVIDIA_Triton_Inference_Server
gptkb:cuDNN
gptkb:NVIDIA_DeepStream
gptkb:PyTorch-TensorRT
gptkb:TensorFlow-TensorRT_(TF-TRT)
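Of the integrations listed above, Torch-TensorRT is the PyTorch-facing entry point. A minimal sketch, assuming a torchvision ResNet and a fixed input shape (both illustrative choices, not taken from this page):

```python
import torch
import torch_tensorrt
import torchvision.models as models

model = models.resnet18(weights=None).eval().cuda()

# Compile the module so supported subgraphs run as TensorRT engines.
trt_model = torch_tensorrt.compile(
    model,
    inputs=[torch_tensorrt.Input((1, 3, 224, 224))],
    enabled_precisions={torch.half},  # maps to TensorRT's FP16 support
)

with torch.no_grad():
    out = trt_model(torch.randn(1, 3, 224, 224, device="cuda"))
```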
gptkbp:latestReleaseVersion 10.0
gptkbp:license proprietary
gptkbp:operatingSystem gptkb:Windows
gptkb:Linux
gptkbp:platform gptkb:NVIDIA_GPUs
gptkbp:programmingLanguage gptkb:Python
gptkb:C++
gptkbp:purpose high-performance deep learning inference
gptkbp:supportsFormat gptkb:TensorFlow
gptkb:PyTorch
gptkb:ONNX
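ONNX is the most direct of the supported interchange formats: models exported from PyTorch or TensorFlow can be parsed straight into a TensorRT network and compiled to an engine. A sketch assuming a local model.onnx file (the filenames are placeholders):

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

# "model.onnx" is a placeholder path for an exported model.
with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("ONNX parse failed")

config = builder.create_builder_config()
engine_bytes = builder.build_serialized_network(network, config)
with open("model.engine", "wb") as f:
    f.write(engine_bytes)
```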
gptkbp:usedFor AI inference acceleration
gptkbp:usedIn autonomous vehicles
data centers
robotics
edge devices
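In the deployment settings listed above, the usual pattern is to build an engine offline and load it with the TensorRT runtime for inference. A minimal loading sketch (the engine filename is illustrative; buffer binding and execution calls differ between TensorRT 8.x and 10.x and are omitted):

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
runtime = trt.Runtime(logger)

# "model.engine" is a placeholder for an engine serialized at build time.
with open("model.engine", "rb") as f:
    engine = runtime.deserialize_cuda_engine(f.read())

context = engine.create_execution_context()
# Allocating device buffers and enqueueing inference (execute_async_v2 /
# execute_async_v3) depends on the TensorRT version and is omitted here.
```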
gptkbp:website https://developer.nvidia.com/tensorrt
gptkbp:bfsParent gptkb:ONNX
gptkbp:bfsLayer 5