Tool/software:
Dear TI team,
I am using the edgeai-tidl-tools SDK, on the 10_01_04_00 branch, to compile models for an AM62A7 board. While compiling with the SDK I am running into the issue below.
These are the steps I am executing:
# Setup on X86_PC (ubuntu 22.04, python version - Python 3.10.12)
git clone github.com/.../edgeai-tidl-tools.git
cd edgeai-tidl-tools
git checkout 10_01_04_00
export SOC=am62a
source ./setup.sh

# Compile and Validate on X86_PC
mkdir build && cd build
source ./scripts/run_python_examples.sh
python3 ./scripts/gen_test_report.py
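For context, my understanding is that run_python_examples.sh drives the TFLite models through examples/osrt_python/tfl/tflrt_delegate.py, which compiles each model via the TIDL import delegate roughly as sketched below. The option and library names come from my reading of the repository (the calibration values match the log further down: 2 frames, 5 iterations); the paths are placeholders, so please correct me if 10_01_04_00 expects something different:

# Minimal sketch of the TFLite compilation path, assumed from the
# examples/osrt_python/tfl scripts; names/values may differ in 10_01_04_00.
import os
import tflite_runtime.interpreter as tflite

tidl_tools_path = os.environ["TIDL_TOOLS_PATH"]

compile_options = {
    "tidl_tools_path": tidl_tools_path,
    "artifacts_folder": "<model-artifacts folder>",          # placeholder
    "tensor_bits": 8,                                        # default 8-bit quantization
    "accuracy_level": 1,
    "advanced_options:calibration_frames": 2,                # matches "Frame index 1/2" in the log
    "advanced_options:calibration_iterations": 5,            # matches "[1 / 5]" in the log
    "debug_level": 0,
}

# The import delegate performs parsing, graph optimization and
# quantization/calibration while the interpreter is invoked on the
# calibration images.
tidl_delegate = tflite.load_delegate(
    os.path.join(tidl_tools_path, "tidl_model_import_tflite.so"), compile_options
)
interpreter = tflite.Interpreter(
    model_path="<model>.tflite",                             # placeholder
    experimental_delegates=[tidl_delegate],
)
interpreter.allocate_tensors()
# feed each calibration frame, then call interpreter.invoke() per frame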
When I run the "source ./scripts/run_python_examples.sh" command, Ubuntu reports the following error:
X64 Architecture
1
Running 4 Models - ['cl-tfl-mobilenet_v1_1.0_224', 'ss-tfl-deeplabv3_mnv2_ade20k_float', 'od-tfl-ssd_mobilenet_v2_300_float', 'od-tfl-ssdlite_mobiledet_dsp_320x320_coco']

Running_Model : cl-tfl-mobilenet_v1_1.0_224
Running_Model : ss-tfl-deeplabv3_mnv2_ade20k_float
Running_Model : od-tfl-ssd_mobilenet_v2_300_float
Running_Model : od-tfl-ssdlite_mobiledet_dsp_320x320_coco
Downloading ../../../models/public/deeplabv3_mnv2_ade20k_float.tflite

========================= [Model Compilation Started] =========================

Model compilation will perform the following stages:
1. Parsing
2. Graph Optimization
3. Quantization & Calibration
4. Memory Planning

============================== [Version Summary] ==============================

-------------------------------------------------------------------------------
|          TIDL Tools Version          |              10_01_04_00             |
-------------------------------------------------------------------------------
|         C7x Firmware Version         |              10_01_00_01             |
-------------------------------------------------------------------------------

============================== [Parsing Started] ==============================

[TIDL Import] [PARSER] WARNING: Network not identified as Object Detection network : (1) Ignore if network is not Object Detection network (2) If network is Object Detection network, please specify "model_type":"OD" as part of OSRT compilation options

Total Nodes = 31

-------------------------------------------------------------------------------
|          Core           |      No. of Nodes       |   Number of Subgraphs   |
-------------------------------------------------------------------------------
|           C7x           |           31            |            1            |
|           CPU           |            0            |            x            |
-------------------------------------------------------------------------------

============================= [Parsing Completed] =============================

==================== [Optimization for subgraph_86 started] ====================

Process Process-2:
Traceback (most recent call last):
  File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/home/admin/JayK/tidl-tools/edgeai-tidl-tools/examples/osrt_python/tfl/tflrt_delegate.py", line 240, in run_model
    download_model(models_configs, model)
  File "/home/admin/JayK/tidl-tools/edgeai-tidl-tools/examples/osrt_python/common_utils.py", line 294, in download_model
    tflOpt.tidlTfliteModelOptimize(
NameError: name 'tflOpt' is not defined

[TIDL Import] [PARSER] WARNING: Requested output data convert layer is not added to the network, It is currently not optimal

----------------------------- Optimization Summary -----------------------------
--------------------------------------------------------------------------------
|         Layer         | Nodes before optimization | Nodes after optimization |
--------------------------------------------------------------------------------
|   TIDL_SoftMaxLayer   |             1             |            1             |
|   TIDL_SqueezeLayer   |             1             |            0             |
| TIDL_ConvolutionLayer |            28             |            28            |
|   TIDL_PoolingLayer   |             1             |            1             |
--------------------------------------------------------------------------------

=================== [Optimization for subgraph_86 completed] ===================

/home/admin/JayK/tidl-tools/edgeai-tidl-tools/tools/AM62A/tidl_tools/tidl_graphVisualiser.out: error while loading shared libraries: libcgraph.so.6: cannot open shared object file: No such file or directory
[TIDL Import] WARNING: System command failed with return code : 32512. Skipping Graph Visualization.
The soft limit is 10240
The hard limit is 10240
MEM: Init ... !!!
MEM: Init ... Done !!!
 0.0s: VX_ZONE_INIT:Enabled
 0.34s: VX_ZONE_ERROR:Enabled
 0.38s: VX_ZONE_WARNING:Enabled
 0.48340s: VX_ZONE_INIT:[tivxInit:190] Initialization Done !!!
************ Frame index 1 : Running float inference ****************
************ Frame index 2 : Running fixed point mode for calibration ****************

-------- Running Calibration in Float Mode to Collect Tensor Statistics --------
[=============================================================================] 100 %

------------------ Fixed-point Calibration Iteration [1 / 5]: ------------------
[TIDL Import] ERROR: Failed to run calibration pass, system command returned error: 132 -- [tidl_import_core.cpp, 678]
[TIDL Import] ERROR: Failed to run Calibration - Failed in function: tidlRunQuantStatsTool -- [tidl_import_core.cpp, 1746]
[TIDL Import] [QUANTIZATION] ERROR: - Failed in function: TIDL_quantStatsFixedOrFloat -- [tidl_import_quantize.cpp, 3992]
[TIDL Import] [QUANTIZATION] ERROR: - Failed in function: TIDL_runIterativeCalibration -- [tidl_import_quantize.cpp, 4313]
[TIDL Import] [QUANTIZATION] ERROR: - Failed in function: TIDL_import_quantize -- [tidl_import_quantize.cpp, 5195]
[TIDL Import] ERROR: - Failed in function: TIDL_import_backend -- [tidl_import_core.cpp, 4428]
[TIDL Import] ERROR: - Failed in function: TIDL_runtimesPostProcessNet -- [tidl_runtimes_import_common.cpp, 1414]

Completed_Model : 1, Name : cl-tfl-mobilenet_v1_1.0_224 , Total time : 3555.15, Offload Time : 0.00 , DDR RW MBs : 18446744073709.55, Output Image File : py_out_cl-tfl-mobilenet_v1_1.0_224_ADE_val_00001801.jpg, Output Bin File : py_out_cl-tfl-mobilenet_v1_1.0_224_ADE_val_00001801.bin

MEM: Deinit ... !!!
MEM: Alloc's: 26 alloc's of 68565333 bytes
MEM: Free's : 26 free's of 68565333 bytes
MEM: Open's : 0 allocs of 0 bytes
MEM: Deinit ... Done !!!
So the fixed-point calibration iteration fails, and it fails every time. I am not sure whether this is expected, and whether the generated model and model artifacts are correct. Please let me know if I need to change anything here.
In addition, I am hitting the following error, which I have not been able to resolve:
2025-02-20 14:19:37.717610752 [E:onnxruntime:, sequential_executor.cc:514 ExecuteKernel] Non-zero status code returned while running TIDL_0 node. Name:'TIDLExecutionProvider_TIDL_0_0' Status Message: TIDL Compute Invoke Failed.
Process Process-2:
Traceback (most recent call last):
  File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/home/admin/JayK/tidl-tools/edgeai-tidl-tools/examples/osrt_python/ort/onnxrt_ep.py", line 392, in run_model
    imgs, output, proc_time, sub_graph_time, height, width = infer_image(sess, input_images, config)
  File "/home/admin/JayK/tidl-tools/edgeai-tidl-tools/examples/osrt_python/ort/onnxrt_ep.py", line 208, in infer_image
    output = list(sess.run(None, {input_name: input_data}))
  File "/home/admin/.local/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 217, in run
    return self._sess.run(output_names, input_feed, run_options)
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Non-zero status code returned while running TIDL_0 node. Name:'TIDLExecutionProvider_TIDL_0_0' Status Message: TIDL Compute Invoke Failed.
MEM: Deinit ... !!!
MEM: Alloc's: 27 alloc's of 208282052 bytes
MEM: Free's : 27 free's of 208282052 bytes
MEM: Open's : 0 allocs of 0 bytes
MEM: Deinit ... Done !!!
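For reference, this second failure comes from the unmodified ONNX example (examples/osrt_python/ort/onnxrt_ep.py). As far as I understand it, the script creates the session with the TIDL execution provider roughly as sketched below; this is only my reading of the script, the option keys and paths are placeholders on my side, and the sess.run() call is the one that raises "TIDL Compute Invoke Failed" in the traceback above:

# Minimal sketch of how I understand the ONNX example runs inference
# (assumed, based on examples/osrt_python/ort/onnxrt_ep.py).
import onnxruntime as rt

delegate_options = {
    "tidl_tools_path": "<path to tools/AM62A/tidl_tools>",     # placeholder
    "artifacts_folder": "<path to compiled model artifacts>",  # placeholder
}

so = rt.SessionOptions()
sess = rt.InferenceSession(
    "<model>.onnx",                                            # placeholder
    providers=["TIDLExecutionProvider", "CPUExecutionProvider"],
    provider_options=[delegate_options, {}],
    sess_options=so,
)

input_name = sess.get_inputs()[0].name
# sess.run(None, {input_name: input_data}) is where the
# "TIDL Compute Invoke Failed" error above is raised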
Regards,
Jay