
[Reference translation] AM69A: High throughput inference mode is not supported when partial batch is detected in graph

Guru**** 2463330 points


Please note: this content was machine translated and may contain grammatical or other translation errors. It is provided for reference only; for the authoritative wording, please refer to the English original at the link below.

https://e2e.ti.com/support/processors-group/processors/f/processors-forum/1470647/am69a-high-throughtput-inference-mode-is-not-supported-when-partial-batch-is-detected-in-graph

Part Number: AM69A

Tool/software:

Hello,

I would like to compile a Conv2D model with the following options:

compile_options = {
    'tidl_tools_path' : os.environ['TIDL_TOOLS_PATH'],
    'artifacts_folder' : output_dir,
    'tensor_bits' : 8,
    'accuracy_level' : 1,
    'advanced_options:calibration_frames' : len(calib_images),
    'advanced_options:calibration_iterations' : 16,
    'advanced_options:inference_mode' : 1,
    'advanced_options:num_cores' : 4,
    'core_start_idx' : 1,
}
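For completeness, I am passing these options to ONNX Runtime's TIDL compilation provider roughly as in the usual edgeai-tidl-tools ONNX Runtime example; the sketch below assumes that flow, and model_path and preprocess() are placeholders from my setup (calib_images is the calibration image list referenced above):

import onnxruntime as rt  # TI's onnxruntime build from edgeai-tidl-tools

# Create a compilation session that delegates the graph to TIDL.
so = rt.SessionOptions()
sess = rt.InferenceSession(
    model_path,  # placeholder: path to the Conv2D ONNX model
    providers=['TIDLCompilationProvider', 'CPUExecutionProvider'],
    provider_options=[compile_options, {}],
    sess_options=so,
)

# Running the calibration frames through the session drives the
# quantization/calibration stage and writes the compiled artifacts
# into 'artifacts_folder'.
input_name = sess.get_inputs()[0].name
for img in calib_images:
    sess.run(None, {input_name: preprocess(img)})  # preprocess(): placeholder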

However, my kernel crashes (see the error below).

The Kernel crashed while executing code in the current cell or a previous cell. 
Please review the code in the cell(s) to identify a possible cause of the failure. 
Click here for more info. 
View Jupyter log for further details. 

By removing some of the options ('advanced_options:num_cores' : 4 and 'core_start_idx' : 1), the kernel no longer crashes, but I get this other error instead:

 ========================= [Model Compilation Started] =========================

Model compilation will perform the following stages:
1. Parsing
2. Graph Optimization
3. Quantization & Calibration
4. Memory Planning

============================== [Version Summary] ==============================

-------------------------------------------------------------------------------
|          TIDL Tools Version          |              10_00_08_00             |
-------------------------------------------------------------------------------
|         C7x Firmware Version         |              10_00_02_00             |
-------------------------------------------------------------------------------

============================== [Parsing Started] ==============================

Number of OD backbone nodes = 86 
Size of odBackboneNodeIds = 86 

Total Nodes = 104
-------------------------------------------------------------------------------
|          Core           |      No. of Nodes       |   Number of Subgraphs   |
-------------------------------------------------------------------------------
...
=================== [Optimization for subgraph_264 started] ===================

[TIDL Import]  ERROR: High Throughtput Inference Mode is not supported when partial batch is detected in graph -- [tidl_import_core.cpp, 2960]
[TIDL Import]  ERROR: Network Optimization failed - Failed in function: TIDL_runtimesOptimizeNet -- [tidl_runtimes_import_common.cpp, 1268]
Output is truncated. View as a scrollable element or open in a text editor. Adjust cell output settings...
  0%|          | 0/4 [00:00<?, ?it/s]

 Number of subgraphs:1 , 104 nodes delegated out of 104 nodes 

Could you explain what this error means and what I can do to resolve it? I have already tried inference_mode 0 and 2, but those results are not relevant for my use case.
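For reference, this is how I am checking the batch dimension of the model's input, in case the "partial batch" refers to the batch size seen in the graph (a sketch assuming the standard onnx Python package; model_path is a placeholder for my Conv2D model):

import onnx

model = onnx.load(model_path)  # model_path: placeholder for the ONNX file
for inp in model.graph.input:
    # Collect each dimension as either a fixed value or a symbolic name.
    dims = [d.dim_value if d.HasField('dim_value') else d.dim_param
            for d in inp.type.tensor_type.shape.dim]
    print(inp.name, dims)  # first entry is the batch dimension for NCHW inputs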

Thank you very much,

Azer


    Dear Azer,

    Thank you for your patience. I recommend trying the latest version of the TIDL tools (v.10_01_04_00), which should include a patch that resolves this issue.

    Regards,
    Christina