[TensorRT] Installing TensorRT on Jetson Orin NX
1. Clone the source from the GitHub repo below and install torch2trt
https://github.com/NVIDIA-AI-IOT/torch2trt
pip install packaging
git clone https://github.com/NVIDIA-AI-IOT/torch2trt
cd torch2trt
sudo -E python setup.py install
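(Optional check) A minimal sketch to confirm the install, based on the usage example in the torch2trt README; torchvision's resnet18 and the 1x3x224x224 dummy input are just placeholders:
# check_torch2trt.py - quick sanity check that torch2trt can convert a model
import torch
from torch2trt import torch2trt
from torchvision.models import resnet18

# Example model and dummy input (placeholders; any eval-mode CUDA model works)
model = resnet18(pretrained=True).eval().cuda()
x = torch.randn(1, 3, 224, 224).cuda()

# Convert to a TensorRT-optimized module (fp16_mode is optional)
model_trt = torch2trt(model, [x], fp16_mode=True)

# Compare outputs of the original and the converted model
y = model(x)
y_trt = model_trt(x)
print(torch.max(torch.abs(y - y_trt)))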
2. Install the plugins
# vi CMakeLists.txt
# Find the line below and add the target_compile_options line right after it:
# set_property(TARGET torch2trt_plugins PROPERTY CUDA_ARCHITECTURES ${CUDA_ARCHITECTURES})
target_compile_options(torch2trt_plugins PRIVATE $<$<COMPILE_LANGUAGE:CUDA>:-gencode arch=compute_72,code=sm_72>)
# (Note: the Orin NX GPU is compute capability 8.7, so compute_87/sm_87 may be the more appropriate arch; compute_72/sm_72 is what was used here.)
sudo cmake -B build -DCMAKE_CUDA_COMPILER=/usr/local/cuda/bin/nvcc . && sudo cmake --build build --target install && sudo ldconfig
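(Optional check) To confirm the plugin library built and is visible to the dynamic loader after ldconfig, something like the sketch below can be run; the library name libtorch2trt_plugins.so comes from the CMake target above, and a default /usr/local install prefix is assumed:
# check_plugins.py - verify that the torch2trt plugin library loads
import ctypes

# Raises OSError if the loader cannot find libtorch2trt_plugins.so
ctypes.CDLL("libtorch2trt_plugins.so")
print("torch2trt plugins library loaded OK")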
* References
https://github.com/NVIDIA-AI-IOT/torch2trt
(Debugging)
"No CMAKE_CUDA_COMPILER could be found" - but nvcc is right there : CPP-29089
What steps will reproduce the issue? 1. git clone [git@github.com](mailto:git@github.com):eyalroz/gpu-kernel-runner.git 2. Make sure that you can executable "cmake" successfully in the resulting folder (i.e. that you have CUDA installed, a compatible C++ c
youtrack.jetbrains.com
https://github.com/NVIDIA-AI-IOT/torch2trt/issues/794
torch2trt/plugins/src/example_plugin.cu(27): error: identifier "__hmul" is undefined · Issue #794 · NVIDIA-AI-IOT/torch2trt