Other Parts Discussed in Thread: TDA4VM
Tools & software:
Hi team,
I am trying to create model artifacts on my host PC.
- SDK version on my board: 10.1 (edgeai image)
- Host PC: Ubuntu 22.04
- Python: 3.10
- Model used: resnet18_opset9.onnx (downloaded from the Model Zoo)
I cloned the edgeai-tidl-tools repo from https://github.com/TexasInstruments/edgeai-tidl-tools/tree/master onto the host PC and ran "source ./setup.sh", which fetched the tidl_tools files. I then wrote a model compilation script, taking the options from the reference at https://github.com/TexasInstruments/edgeai-tidl-tools/blob/master/docs/custom_model_evaluation.md#custom-model-evaluation
Below is the script I wrote:
import onnxruntime as rt
import numpy as np
import os
# Configuration (adjust paths as needed)
options = {}
#options['tidl_tools_path'] =os.environ['TIDL_TOOLS_PATH']
options['artifacts_folder'] = '/home/user/onnx/model_artifacts_tb32' # Path where compiled artifacts are saved
options['tidl_tools_path'] = '/home/user/edgeai-tidl-tools/tidl_tools' # Path to TIDL tools (compiler)
options["model_type"] = "Classification"
options['tensor_bits'] = 32
#options['advanced_options:add_data_convert_ops']= 1
# Load the model and configuration
model_path = "/home/user/onnx/regnetx-200mf.onnx" # Path to the source ONNX model to compile
options['debug_level'] = 3
# Create session options (optional settings like logging, optimizations, etc.)
#so = rt.SessionOptions()
# List of execution providers, including TIDL and CPU (can also add GPU)
ep_list = ['TIDLCompilationProvider', 'CPUExecutionProvider']
ort_session_opt = rt.SessionOptions()
ort_session_opt.intra_op_num_threads = 1 # 4
# Create the InferenceSession with the specified execution providers
sess = rt.InferenceSession(
    model_path,
    providers=ep_list,
    provider_options=[options, {}],  # Provide the options for TIDL and CPU (if necessary)
    sess_options=ort_session_opt
)
# Get input names to format input data correctly
#input_names = [input.name for input in sess.get_inputs()]
input_details = sess.get_inputs()
#print(input_details)
# Prepare the input data (ensure it matches the shape and type expected by the model)
# For example, if the model expects an image input of shape (batch_size, channels, height, width)
#input_data = np.random.randn(1, 640, 480, 1).astype(np.float32) # Example input (random data)
input_data = np.random.randn(1, 3, 224, 224).astype(np.float32) # Example input (random data)
input_data = np.clip(input_data * 255, 0, 255).astype(np.uint8)  # Scale to [0, 255] and cast to uint8
# Create a dictionary mapping input names to their corresponding input data
inputs = {input_details[0].name: input_data}
# Run inference with the compiled model
outputs = sess.run(None, inputs)
# Print or process the outputs as needed
print(outputs) # This prints the output, usually as numpy arrays
print("==========================")
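As a stand-alone sanity check of the dummy-input preparation used above (this snippet is illustrative only; it mirrors the shapes in the script and does not touch TIDL):

```python
import numpy as np

# Reproduce the dummy-input preparation from the compilation script:
# a random float32 NCHW tensor, scaled to [0, 255] and cast to uint8.
rng = np.random.default_rng(0)
x = rng.standard_normal((1, 3, 224, 224)).astype(np.float32)
x_u8 = np.clip(x * 255, 0, 255).astype(np.uint8)

print(x_u8.shape, x_u8.dtype)  # (1, 3, 224, 224) uint8
```

Note that this feeds a uint8 tensor to a model whose input is declared float32; if I read the edgeai-tidl-tools docs correctly, the commented-out advanced_options:add_data_convert_ops option is what asks TIDL to insert that conversion, so whether it should stay commented out may be worth double-checking.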
With this script I was able to create the model artifacts. Using the same runtime APIs, I then created an inference script to run the model on the TI TDA4VM board with the created artifacts. But during the run I hit the following errors:
REMOTE_SERVICE: Init ... Done !!!
2338.478761 s: GTC Frequency = 200 MHz
APP: Init ... Done !!!
2338.478897 s: VX_ZONE_INFO: Globally Enabled VX_ZONE_ERROR
2338.478908 s: VX_ZONE_INFO: Globally Enabled VX_ZONE_WARNING
2338.478922 s: VX_ZONE_INFO: Globally Enabled VX_ZONE_INFO
2338.486276 s: VX_ZONE_INFO: [tivxPlatformCreateTargetId:134] Added target MPU-0
2338.486522 s: VX_ZONE_INFO: [tivxPlatformCreateTargetId:134] Added target MPU-1
2338.486616 s: VX_ZONE_INFO: [tivxPlatformCreateTargetId:134] Added target MPU-2
2338.486705 s: VX_ZONE_INFO: [tivxPlatformCreateTargetId:134] Added target MPU-3
2338.486717 s: VX_ZONE_INFO: [tivxInitLocal:126] Initialization Done !!!
2338.486738 s: VX_ZONE_INFO: Globally Disabled VX_ZONE_INFO
2338.565053 s: VX_ZONE_ERROR: [ownContextSendCmd:912] Command ack message returned failure cmd_status: -1
2338.565086 s: VX_ZONE_ERROR: [ownNodeKernelInit:604] Target kernel, TIVX_CMD_NODE_CREATE failed for node TIDLNode
2338.565101 s: VX_ZONE_ERROR: [ownNodeKernelInit:605] Please be sure the target callbacks have been registered for this core
2338.565111 s: VX_ZONE_ERROR: [ownNodeKernelInit:606] If the target callbacks have been registered, please ensure no errors are occurring within the create callback of this kernel
2338.565122 s: VX_ZONE_ERROR: [ownGraphNodeKernelInit:690] kernel init for node 0, kernel com.ti.tidl:1:2 ... failed !!!
2338.565156 s: VX_ZONE_ERROR: [ TIDL subgraph boxes ] Node kernel init failed
2338.565163 s: VX_ZONE_ERROR: [ TIDL subgraph boxes ] Graph verify failed
TIDL_RT_OVX: ERROR: Verifying TIDL graph ... Failed !!!
TIDL_RT_OVX: ERROR: Verify OpenVX graph failed
************ TIDL_subgraphRtCreate done ************
Input Names===================: ['inputNet_IN']
Input Shapes: [[1, 3, 512, 512]]
Loaded image with shape: (2056, 2464, 3)
******* In TIDL_subgraphRtInvoke ********
2338.607976 s: VX_ZONE_ERROR: [ownContextSendCmd:912] Command ack message returned failure cmd_status: -1
2338.608009 s: VX_ZONE_ERROR: [ownNodeKernelInit:604] Target kernel, TIVX_CMD_NODE_CREATE failed for node TIDLNode
2338.608023 s: VX_ZONE_ERROR: [ownNodeKernelInit:605] Please be sure the target callbacks have been registered for this core
2338.608029 s: VX_ZONE_ERROR: [ownNodeKernelInit:606] If the target callbacks have been registered, please ensure no errors are occurring within the create callback of this kernel
2338.608038 s: VX_ZONE_ERROR: [ownGraphNodeKernelInit:690] kernel init for node 0, kernel com.ti.tidl:1:2 ... failed !!!
2338.608051 s: VX_ZONE_ERROR: [ TIDL subgraph boxes ] Node kernel init failed
2338.608057 s: VX_ZONE_ERROR: [ TIDL subgraph boxes ] Graph verify failed
2338.608111 s: VX_ZONE_ERROR: [ownGraphScheduleGraphWrapper:944] graph is not in a state required to be scheduled
2338.608118 s: VX_ZONE_ERROR: [vxProcessGraph:868] schedule graph failed
2338.608123 s: VX_ZONE_ERROR: [vxProcessGraph:873] wait graph failed
ERROR: Running TIDL graph ... Failed !!!
Sub Graph Stats 453.000000 8209.000000 10637105392006926.000000
******* TIDL_subgraphRtInvoke done ********
2025-01-14 11:18:08.294597912 [E:onnxruntime:, sequential_executor.cc:514 ExecuteKernel] Non-zero status code returned while running TIDL_0 node. Name:'TIDLExecutionProvider_TIDL_0_0' Status Message: TIDL Compute Invoke Failed.
Traceback (most recent call last):
File "/root/onnx/infer_2.py", line 75, in <module>
outputs = sess.run(None, inputs)
^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 217, in run
return self._sess.run(output_names, input_feed, run_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Non-zero status code returned while running TIDL_0 node. Name:'TIDLExecutionProvider_TIDL_0_0' Status Message: TIDL Compute Invoke Failed.
************ in TIDL_subgraphRtDelete ************
APP: Deinit ... !!!
REMOTE_SERVICE: Deinit ... !!!
REMOTE_SERVICE: Deinit ... Done !!!
2338.649304 s: IPC: Deinit ... !!!
2338.650845 s: IPC: DeInit ... Done !!!
2338.650890 s: MEM: Deinit ... !!!
2338.650906 s: DDR_SHARED_MEM: Alloc's: 10 alloc's of 87813068 bytes
2338.650915 s: DDR_SHARED_MEM: Free's : 10 free's of 87813068 bytes
2338.650924 s: DDR_SHARED_MEM: Open's : 0 allocs of 0 bytes
2338.650936 s: MEM: Deinit ... Done !!!
APP: Deinit ... Done !!!
So is the issue in my compilation, or is something missing from my compilation options? Note that pre-compiled artifacts do work on the EVM with the inference script I created. Any help on this would be great.