[Reference translation] CODECOMPOSER: Quantization-Aware Training (QAT)

Guru**** 1807890 points
Please note: this content was machine-translated and may contain grammatical or other errors. It is provided for reference only; for the authoritative version, see the English original at the link below or translate it yourself.

https://e2e.ti.com/support/tools/code-composer-studio-group/ccs/f/code-composer-studio-forum/1395086/codecomposer-quantization-aware-training-qat

Part Number: CODECOMPOSER

Tool/software:

Hello, I have worked through the linked example: I trained the semantic segmentation model fpn_aspp_regnetx1p6gf_edgeailite on the Cityscapes dataset and obtained a model_best.pth checkpoint.
However, for quantization-aware training (QAT), I applied the following configuration as described in the QAT.md tutorial:

is_cuda = next(model.parameters()).is_cuda
example_inputs = create_rand_inputs(args, is_cuda=is_cuda)
if 'training' in args.phase:
    model = edgeai_torchmodelopt.xmodelopt.quantization.v2.QATFxModule(model, example_inputs=example_inputs, total_epochs=args.epochs)
elif 'calibration' in args.phase:
    model = edgeai_torchmodelopt.xmodelopt.quantization.v2.QATFxModule(model)
elif 'validation' in args.phase:
    # Note: bias_calibration is not enabled
    model = edgeai_torchmodelopt.xmodelopt.quantization.v2.QATFxModule(model, total_epochs=args.epochs)

However, I encountered the following error:

Traceback (most recent call last):
  File "/home/xcb/zjb/algorithm/edgeai-torchvision/./references/edgeailite/main/pixel2pixel/train_segmentation_main.py", line 290, in <module>
    run(args)
  File "/home/xcb/zjb/algorithm/edgeai-torchvision/./references/edgeailite/main/pixel2pixel/train_segmentation_main.py", line 285, in run
    main(arguments)
  File "/home/xcb/zjb/algorithm/edgeai-torchvision/./references/edgeailite/main/pixel2pixel/train_segmentation_main.py", line 148, in main
    train_pixel2pixel.main(arguemnts)
  File "/home/xcb/zjb/algorithm/edgeai-torchvision/references/edgeailite/edgeai_xvision/xengine/train_pixel2pixel.py", line 450, in main
    model = edgeai_torchmodelopt.xmodelopt.quantization.v2.QATFxModule(model, example_inputs=example_inputs, total_epochs=args.epochs)
  File "/home/xcb/anaconda3/envs/edgeai/lib/python3.10/site-packages/edgeai_torchmodelopt/xmodelopt/quantization/v2/quant_fx.py", line 37, in __init__
    super().__init__(*args, is_qat=True, backend=backend, **kwargs)
  File "/home/xcb/anaconda3/envs/edgeai/lib/python3.10/site-packages/edgeai_torchmodelopt/xmodelopt/quantization/v2/quant_fx_base.py", line 79, in __init__
    model = quantize_fx.prepare_qat_fx(model, qconfig_mapping, example_inputs)
  File "/home/xcb/anaconda3/envs/edgeai/lib/python3.10/site-packages/torch/ao/quantization/quantize_fx.py", line 515, in prepare_qat_fx
    return _prepare_fx(
  File "/home/xcb/anaconda3/envs/edgeai/lib/python3.10/site-packages/torch/ao/quantization/quantize_fx.py", line 162, in _prepare_fx
    graph_module = GraphModule(model, tracer.trace(model))
  File "/home/xcb/anaconda3/envs/edgeai/lib/python3.10/site-packages/torch/fx/_symbolic_trace.py", line 739, in trace
    (self.create_arg(fn(*args)),),
  File "/home/xcb/anaconda3/envs/edgeai/lib/python3.10/site-packages/edgeai_torchmodelopt/xnn/utils/amp.py", line 45, in conditional_fp16
    return func(self, *args, **kwargs)
  File "/home/xcb/zjb/algorithm/edgeai-torchvision/references/edgeailite/edgeai_xvision/xvision/models/pixel2pixel/pixel2pixelnet.py", line 122, in forward
    d_out = decoder(x_inp, x_feat, x_list)
  File "/home/xcb/anaconda3/envs/edgeai/lib/python3.10/site-packages/torch/fx/_symbolic_trace.py", line 717, in module_call_wrapper
    return self.call_module(mod, forward, args, kwargs)
  File "/home/xcb/anaconda3/envs/edgeai/lib/python3.10/site-packages/torch/ao/quantization/fx/tracer.py", line 103, in call_module
    return super().call_module(m, forward, args, kwargs)
  File "/home/xcb/anaconda3/envs/edgeai/lib/python3.10/site-packages/torch/fx/_symbolic_trace.py", line 434, in call_module
    return forward(*args, **kwargs)
  File "/home/xcb/anaconda3/envs/edgeai/lib/python3.10/site-packages/torch/fx/_symbolic_trace.py", line 710, in forward
    return _orig_module_call(mod, *args, **kwargs)
  File "/home/xcb/anaconda3/envs/edgeai/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/xcb/zjb/algorithm/edgeai-torchvision/references/edgeailite/edgeai_xvision/xvision/models/pixel2pixel/fpn_edgeailite.py", line 240, in forward
    assert isinstance(x_input, (list,tuple)) and len(x_input)<=2, 'incorrect input'
AssertionError: incorrect input

I have followed the tutorial, and I also tried removing example_inputs=example_inputs, but the same error persists. Should I add some other configuration? For reference, a minimal standalone sketch of what I believe is going wrong is included below.
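
This sketch uses plain torch.fx, independent of the edgeai wrappers; TinyNet and the tensor shapes are hypothetical stand-ins rather than code from the repositories. It illustrates that symbolic tracing (which prepare_qat_fx performs internally) calls forward() with fx Proxy objects instead of real tensors, so an isinstance check against (list, tuple), like the one in fpn_edgeailite.py, fails during tracing even though the same call succeeds eagerly:

import torch
from torch import nn

class TinyNet(nn.Module):
    # Hypothetical stand-in with the same style of input check as
    # fpn_edgeailite.py: isinstance(x_input, (list, tuple)) and len(x_input) <= 2
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3, padding=1)

    def forward(self, x_input):
        assert isinstance(x_input, (list, tuple)) and len(x_input) <= 2, 'incorrect input'
        return self.conv(x_input[0])

model = TinyNet()
out = model([torch.rand(1, 3, 32, 32)])  # eager call: the assert passes

try:
    torch.fx.symbolic_trace(model)       # forward() receives a Proxy, not a list
except AssertionError as e:
    print('tracing failed:', e)          # prints: tracing failed: incorrect input

If that is what is happening, it would also explain why removing example_inputs did not help: the assertion fires during tracing itself, before example_inputs is ever used.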

Looking forward to your reply. Thank you!