TDA4VE-Q1: Why is the GStreamer pipeline successfully established, but the element cannot be found during use?

Part Number: TDA4VE-Q1


I use appCodecInit to create the pipeline with these parameters:
in_width = 640, in_height = 512, in_format = NV12, in_num_channels = 1, in_num_planes = 2, in_buffer_depth = 6.
out_width = 640, out_height = 512, out_format = NV12, out_num_channels = 1, out_num_planes = 2, out_buffer_depth = 6.
and this is the cmd_string passed to gst_parse_launch():
appsrc format=GST_FORMAT_TIME is-live=true do-timestamp=true block=false name=myAppSrc0 ! queue ! video/x-raw, width=(int)640, height=(int)512, framerate=(fraction)30/1, format=(string)NV12, interlace-mode=(string)progressive, colorimetry=(string)bt601 ! v4l2h264enc extra-controls="controls, frame_level_rate_control_enable=1, video_bitrate=10000000" ! h264parse ! mp4mux ! filesink location=output_video_0.mp4 
but the function appCodecSrcinit reports an error:
gst wrapper: could not find element <myAppSrc0> in the pipeline

  • Hello!

We have received your case; it will take some time to look into. Thank you for your patience.

  • Thank you for the question. 

Generally speaking, you can expect some performance hit when you increase the setting from 8-bit to 16-bit.

    My understanding is that you are using a mixed-precision model, and the inference performance (speed) is not good enough for you, correct?

    Could you show us the exact settings in your import config file? (Preferably your original file, or settings copy-pasted from it, rather than a description in English.)

    Also, are you running the inference on the EVM? If so, could you send us the log text or a screenshot of your inference results? Inference performance measured with PC simulation is sometimes not accurate.
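For readers following along: mixed precision in TIDL is typically selected through the compilation (import) options, which is why the exact config file matters here. A sketch of what such settings can look like, using option names from the edgeai-tidl-tools compilation flow; the layer names listed are purely illustrative placeholders:

```python
# Sketch of TIDL compilation options selecting mixed precision
# (option names as in edgeai-tidl-tools; layer names are made up).
compile_options = {
    "tensor_bits": 8,  # default quantization bit depth for most layers
    # Keep only the listed layers' output activations in 16-bit:
    "advanced_options:output_feature_16bit_names_list": "conv1_out, conv5_out",
    # Keep only the listed layers' parameters (weights) in 16-bit:
    "advanced_options:params_16bit_names_list": "conv1_out",
}

# The more layer names appear in these lists, the larger the expected
# performance hit relative to a pure 8-bit model.
n_16bit_layers = len(
    compile_options["advanced_options:output_feature_16bit_names_list"].split(",")
)
print(n_16bit_layers)
```

This is the kind of detail the support engineer is asking to see verbatim, since the performance impact depends directly on how many and which layers are promoted to 16-bit.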