
AM62A7: Model quantization problem

Part Number: AM62A7
TIDL version 09.02.06
With the int16-quantized model the results are acceptable, but with int8 quantization the results are very poor. How can I locate which layer is causing the difference?
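(Sketch for reference, not part of the original question.) One way to narrow this down with edgeai-tidl-tools is to recompile the model with a higher debug level so TIDL dumps per-layer traces, then diff the int8 traces against the int16 (or float) traces layer by layer. The paths, input shape, and calibration data below are placeholders, and the option names follow the edgeai-tidl-tools ONNX Runtime compile options; please verify them against the 09.02.06 SDK documentation.

```python
# Sketch only: compile the ONNX model for TIDL with layer-level trace dumps
# enabled, so per-layer int8 outputs can be compared against a 16-bit/float
# reference. Paths and calibration inputs are placeholders.
import os
import numpy as np
import onnxruntime as ort

model_path = "model.onnx"                 # placeholder
artifacts_dir = "./tidl_artifacts_int8"   # placeholder
os.makedirs(artifacts_dir, exist_ok=True)

compile_options = {
    "tidl_tools_path": os.environ["TIDL_TOOLS_PATH"],
    "artifacts_folder": artifacts_dir,
    "tensor_bits": 8,                     # set to 16 for the int16 build
    "debug_level": 3,                     # request per-layer trace dumps
    "advanced_options:calibration_frames": 16,
    "advanced_options:calibration_iterations": 16,
}

sess = ort.InferenceSession(
    model_path,
    sess_options=ort.SessionOptions(),
    providers=["TIDLCompilationProvider", "CPUExecutionProvider"],
    provider_options=[compile_options, {}],
)

input_name = sess.get_inputs()[0].name
for _ in range(compile_options["advanced_options:calibration_frames"]):
    # Replace with real preprocessed calibration frames for your model.
    frame = np.random.rand(1, 3, 224, 224).astype(np.float32)
    sess.run(None, {input_name: frame})
```

Comparing the traces from an 8-bit build against a 16-bit build usually shows the first layer where the activation range collapses, which is typically the layer to target.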
I found a problem: the Mul, Sub, and Mul nodes marked with yellow boxes in Figure 1 need to be removed. int16 quantization is normal, and the output of the modified model is shown in Figure 2.
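(Sketch for reference, assuming those nodes are the ones whose ranges break at 8 bits.) Instead of editing the model, TIDL's mixed-precision options can keep just those tensors at 16-bit while the rest of the network stays int8, or the nodes can be excluded from TIDL offload via the deny list. The tensor names below are placeholders standing in for the yellow-boxed nodes in Figure 1; the exact output names should be read from the ONNX graph (e.g. in Netron).

```python
# Sketch only: mixed-precision compile options keeping the suspect
# Mul/Sub/Mul output tensors (placeholder names) at 16-bit precision
# while the rest of the network remains int8.
mixed_precision_options = {
    "tidl_tools_path": os.environ["TIDL_TOOLS_PATH"],
    "artifacts_folder": "./tidl_artifacts_mixed",   # placeholder
    "tensor_bits": 8,
    # Comma-separated output tensor names to keep at 16-bit:
    "advanced_options:output_feature_16bit_names_list": "Mul_12,Sub_13,Mul_14",
    # Alternatively, exclude operator types from TIDL offload altogether:
    # "deny_list": "Mul,Sub",
}
```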