Other Parts Discussed in Thread: TDA4VM
After compiling the models on the PC side, which generated
./model-artifacts
./models
and copying them to the development board,
I ran the following on the TDA4VM board:
/opt/edgeai-tidl-tools/examples/osrt_python/ort# python3 onnxrt_ep.py
The error output was:
Available execution providers : ['TIDLExecutionProvider', 'TIDLCompilationProv]
Running 4 Models - ['cl-ort-resnet18-v1', 'cl-ort-caffe_squeezenet_v1_1', 'ss-o]
Running_Model : cl-ort-resnet18-v1
Traceback (most recent call last):
File "onnxrt_ep.py", line 251, in <module>
run_model(model, mIdx)
File "onnxrt_ep.py", line 165, in run_model
sess = rt.InferenceSession(config['model_path'] ,providers=EP_list, provide)
File "/usr/lib/python3.8/site-packages/onnxruntime/capi/onnxruntime_inference_
self._create_inference_session(providers, provider_options)
File "/usr/lib/python3.8/site-packages/onnxruntime/capi/onnxruntime_inferencen
sess = C.InferenceSession(session_options, self._model_path, True, self._re)
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL.
The official documentation describes tidl_tools_path as: "set to environment variable TIDL_TOOLS_PATH, usually psdk_rtos_install/tidl_xx_yy_zz_ww/ti_dl/tidl_tools".
However, I could not find that path on the TDA4. The closest match I found is /usr/include/processor_sdk/tidl_j721e_08_02_00_11/ti_dl/inc, so I set tidl_tools_path to that. I am not sure whether this is the cause of the error, but I cannot find the correct path.
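As a sketch of how I have been checking for the directory (the search roots here are my assumptions, not paths confirmed by the documentation), I searched the board's filesystem for a `tidl_tools` directory and, when one is found, exported it before launching the example:

```shell
# Search common SDK install locations on the board for a tidl_tools directory.
# /usr and /opt are assumptions; the actual install prefix may differ.
TIDL_DIR=$(find /usr /opt -type d -name tidl_tools 2>/dev/null | head -n 1)

if [ -n "$TIDL_DIR" ]; then
    # Point the environment variable at the directory that was found,
    # and make its shared libraries visible to the loader.
    export TIDL_TOOLS_PATH="$TIDL_DIR"
    export LD_LIBRARY_PATH="$TIDL_TOOLS_PATH:$LD_LIBRARY_PATH"
fi

echo "TIDL_TOOLS_PATH=$TIDL_TOOLS_PATH"
```

On my board this search only turns up the /usr/include/processor_sdk/.../ti_dl/inc headers directory mentioned above, not a tidl_tools directory.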
How can I resolve this issue?