Description: I have been having issues getting a .onnx model to build into an engine using either the TensorRT API or trtexec.exe.
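To illustrate the usual build path, here is a minimal sketch of turning an ONNX file into an engine with the TensorRT Python API (this assumes TensorRT 8.x and a placeholder file name model.onnx, not the poster's actual model); trtexec performs essentially the same steps:

import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
# Explicit-batch networks are required when importing ONNX models
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        # Print parser errors to see which op or layer fails to import
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise SystemExit("ONNX parse failed")

config = builder.create_builder_config()
# 1 GiB workspace; TensorRT 8.4+ (older releases use config.max_workspace_size)
config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 1 << 30)

serialized_engine = builder.build_serialized_network(network, config)
with open("model.engine", "wb") as f:
    f.write(serialized_engine)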
Maskrcnn ONNX example - TensorRT - NVIDIA Developer Forums (rb19082013, January 28, 2022): Please provide complete information as applicable to your setup. • Hardware Platform (Jetson / GPU)
"input_1:0": I have created a working yolo_v4_tiny model. It can infere with tao infere command. But the problem with trtexec remains the same.
I'm using a laptop to convert an ONNX model to an engine and then run the engine on a GPU. My laptop's GPU is an NVIDIA GeForce RTX 3060 Laptop GPU, whose compute capability is 8.6. The engine converted above runs on my laptop, but it can't run on another PC whose GPU is an NVIDIA GeForce GTX 1660 Ti with compute capability 7.5. The exception shows ...
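TensorRT engines are specific to the GPU architecture (and TensorRT version) they were built on, so an engine built on the RTX 3060 (compute capability 8.6) will not deserialize on the GTX 1660 Ti (7.5); the engine has to be rebuilt from the ONNX model on, or for, each target GPU. A quick way to confirm the mismatch, as a sketch that assumes pycuda is installed:

import pycuda.driver as cuda

cuda.init()
major, minor = cuda.Device(0).compute_capability()
print(f"Compute capability of GPU 0: {major}.{minor}")
# If this does not match the GPU the engine was built on,
# rebuild the engine from the ONNX model on this machine.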
Hi, Request you to share the ONNX model and the script if not shared already so that we can assist you better.
You can see that most operators have already been implemented; only the Pad operator still needs mapping development. The next step is to study and analyze the differences between the inputs, outputs, and attributes of the Pad operator in MindSpore and the Pad operator in ONNX, and then determine the operator-mapping development workflow based on the results of that analysis.
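As a small illustration of that analysis (not from the original post, and using placeholder shapes), the sketch below builds an ONNX Pad node: in opset 11 and later the paddings arrive as a separate input tensor and only the mode is an attribute, whereas MindSpore's Pad operator typically receives its paddings when the operator is constructed, which is exactly the kind of difference the mapping has to bridge.

import numpy as np
import onnx
from onnx import helper, numpy_helper, TensorProto

# Paddings as an initializer: [dim0_begin, dim1_begin, dim0_end, dim1_end]
pads = numpy_helper.from_array(
    np.array([0, 1, 0, 1], dtype=np.int64), name="pads")

# In opset 11+, "pads" is an input of the Pad node, not an attribute
pad_node = helper.make_node(
    "Pad", inputs=["x", "pads"], outputs=["y"], mode="constant")

graph = helper.make_graph(
    [pad_node], "pad_example",
    inputs=[helper.make_tensor_value_info("x", TensorProto.FLOAT, [3, 4])],
    outputs=[helper.make_tensor_value_info("y", TensorProto.FLOAT, [3, 6])],
    initializer=[pads])

model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 13)])
onnx.checker.check_model(model)
print(helper.printable_graph(model.graph))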
Request you to share the ONNX model and the script if not shared already so that we can assist you better. Alongside, you can try a few things:
1) Validate your model with the snippet below (check_model.py; a usage example follows this list):
import sys
import onnx
filename = sys.argv[1]  # path to your ONNX model
model = onnx.load(filename)
onnx.checker.check_model(model)
2) Try running your model with ...
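For example, assuming the model file is named model.onnx (a placeholder name), the checker can be run as:

python check_model.py model.onnx

A clean exit means the model passes ONNX's structural checks; otherwise the checker reports which node or attribute is malformed.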
System Information:
Operating System: Windows Server 2022
Python Version: 3.10
ONNX Runtime Version: 1.12.0
CUDA Toolkit Version: 11.4
cuDNN Version: compatible version for CUDA 11.4
NVIDIA Driver Version: 470
GPU Model: NVIDIA Quadro K6000
Issue Description: I am facing an issue while trying to use the ONNX Runtime with GPU (onnxruntime-gpu) on my Windows Server 2022 setup. Specifically, I ...
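As a first step, it is worth confirming that the CUDA execution provider is actually available and picked up by the session; if onnxruntime-gpu silently falls back to CPU, the CUDA/cuDNN/driver combination is usually the culprit. A minimal sketch, assuming the model file is named model.onnx:

import onnxruntime as ort

# Providers compiled into this onnxruntime build
print(ort.get_available_providers())

session = ort.InferenceSession(
    "model.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"])
# Should list CUDAExecutionProvider first if the GPU is actually used
print(session.get_providers())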
Our team has encountered an output mismatch when converting models from ONNX to TensorRT, even though we build the engine in TensorRT's 32-bit (FP32) precision, where we don't anticipate accuracy discrepancies. Here are the details of our situation: Hardware: NVIDIA Orin dev kit, running as Orin NX 16GB. Software: we are using the latest compatible versions of ONNX and TensorRT.
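One common way to narrow such a mismatch down is to feed the TensorRT engine and an ONNX Runtime reference the same fixed input and compare the outputs (NVIDIA's Polygraphy tool automates this kind of comparison). A minimal sketch, where the input name "input", the input shape, and the file trt_output.npy holding the TensorRT result are placeholders for the poster's setup:

import numpy as np
import onnxruntime as ort

rng = np.random.default_rng(0)
x = rng.standard_normal((1, 3, 224, 224)).astype(np.float32)
np.save("input.npy", x)  # feed this exact tensor to the TensorRT engine

# FP32 reference output from ONNX Runtime on CPU
sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
ref = sess.run(None, {"input": x})[0]

# Output produced by the TensorRT engine for the same input
trt_out = np.load("trt_output.npy")
print("max abs diff:", np.abs(ref - trt_out).max())
print("allclose:", np.allclose(ref, trt_out, rtol=1e-3, atol=1e-3))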
If you want to do kernel profiling, you can use Nsight Compute directly. Regarding whether Nsight Systems can support the bin, I am moving this topic to "Nsight Systems". Shashankg, you will want to run Nsys over the model as it executes to get performance information. I'm a beginner with Nsight. Let's assume I have an ONNX model.