Part Number: J784S4XEVM
In our video pipeline, we have a camera capture stage that outputs YUV422 image buffers.
-
Is there an existing TI SDK node or hardware-accelerated component to convert YUV422 to YUV420?
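For context, this is roughly the conversion step we have in mind, written here against the plain OpenVX API (the UYVY input and NV12 output formats, and the use of vxColorConvertNode, are just our assumptions); we would like to know which TI node, if any, maps this onto the hardware:

```c
/* Minimal sketch of the YUV422 -> YUV420 conversion step, assuming a
 * UYVY capture format and an NV12 output. We would like to know which
 * TI SDK node is the recommended, hardware-accelerated way to do this
 * on the J784S4. */
#include <VX/vx.h>

int main(void)
{
    vx_uint32 width = 1920, height = 1080;

    vx_context context = vxCreateContext();
    vx_graph   graph   = vxCreateGraph(context);

    /* YUV422 buffer as produced by the capture stage (assumed UYVY). */
    vx_image yuv422 = vxCreateImage(context, width, height, VX_DF_IMAGE_UYVY);
    /* YUV420 semi-planar buffer for the downstream consumers. */
    vx_image nv12   = vxCreateImage(context, width, height, VX_DF_IMAGE_NV12);

    /* Standard OpenVX color conversion; is there a TI node that runs
     * this on hardware instead of the CPU? */
    vx_node convert = vxColorConvertNode(graph, yuv422, nv12);

    if (vxVerifyGraph(graph) == VX_SUCCESS)
        vxProcessGraph(graph);

    vxReleaseNode(&convert);
    vxReleaseImage(&nv12);
    vxReleaseImage(&yuv422);
    vxReleaseGraph(&graph);
    vxReleaseContext(&context);
    return 0;
}
```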
-
We have a GPU-based render node. What is the recommended approach to share YUV buffers with zero-copy (no CPU memcpy) between the capture/convert node and the GPU render node?
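The approach we are currently considering on the GPU side is to import the converted buffer as an EGLImage via the dma-buf import extension, along these lines. This assumes the capture/convert node can expose the NV12 buffer as a dma-buf fd plus plane offsets and strides; please confirm whether this is the recommended zero-copy path or whether the SDK provides a helper for it:

```c
/* Sketch of importing an NV12 dma-buf into the GPU render node without a
 * CPU copy, using EGL_EXT_image_dma_buf_import + GL_OES_EGL_image.
 * The fd/offset/stride values are assumed to come from however the
 * capture/convert node exports its buffers. */
#define EGL_EGLEXT_PROTOTYPES
#define GL_GLEXT_PROTOTYPES
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>
#include <drm/drm_fourcc.h>

GLuint import_nv12_as_texture(EGLDisplay dpy,
                              int dmabuf_fd, int width, int height,
                              int y_offset, int y_stride,
                              int uv_offset, int uv_stride)
{
    const EGLint attribs[] = {
        EGL_WIDTH,                     width,
        EGL_HEIGHT,                    height,
        EGL_LINUX_DRM_FOURCC_EXT,      (EGLint)DRM_FORMAT_NV12,
        EGL_DMA_BUF_PLANE0_FD_EXT,     dmabuf_fd,
        EGL_DMA_BUF_PLANE0_OFFSET_EXT, y_offset,
        EGL_DMA_BUF_PLANE0_PITCH_EXT,  y_stride,
        EGL_DMA_BUF_PLANE1_FD_EXT,     dmabuf_fd,
        EGL_DMA_BUF_PLANE1_OFFSET_EXT, uv_offset,
        EGL_DMA_BUF_PLANE1_PITCH_EXT,  uv_stride,
        EGL_NONE
    };

    /* Wrap the dma-buf in an EGLImage; no pixel data is copied. */
    EGLImageKHR image = eglCreateImageKHR(dpy, EGL_NO_CONTEXT,
                                          EGL_LINUX_DMA_BUF_EXT, NULL, attribs);

    /* Bind the EGLImage to an external texture for the render pass. */
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_EXTERNAL_OES, tex);
    glEGLImageTargetTexture2DOES(GL_TEXTURE_EXTERNAL_OES, (GLeglImageOES)image);
    return tex;
}
```

If the SDK already provides an interop path for this, we would prefer to use it instead.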
-
Additionally, if we want to run a computer vision algorithm in another node, how can we design the pipeline so that both the render node and the CV node run in parallel, preferably still using zero-copy buffer sharing?
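To make the intended fan-out concrete, here is a minimal sketch (again plain OpenVX, with the render branch left as a placeholder) of one converted NV12 buffer feeding both branches of the same graph:

```c
/* Sketch of the pipeline shape we are after: one conversion node whose
 * NV12 output is consumed by two branches of the same graph, so the CV
 * branch and the render branch can run in parallel on the same
 * zero-copy buffer. The render branch is only a placeholder; we would
 * like to know which TI node/mechanism to use for it. */
#include <VX/vx.h>

int main(void)
{
    vx_uint32 width = 1920, height = 1080;

    vx_context context = vxCreateContext();
    vx_graph   graph   = vxCreateGraph(context);

    vx_image yuv422  = vxCreateImage(context, width, height, VX_DF_IMAGE_UYVY);
    vx_image nv12    = vxCreateImage(context, width, height, VX_DF_IMAGE_NV12);
    vx_image y_plane = vxCreateImage(context, width, height, VX_DF_IMAGE_U8);

    /* Question 1: the YUV422 -> YUV420 conversion step. */
    vx_node convert = vxColorConvertNode(graph, yuv422, nv12);

    /* CV branch: luma extraction used here only as a stand-in for the
     * real computer-vision front-end reading the same nv12 buffer. */
    vx_node cv_front = vxChannelExtractNode(graph, nv12, VX_CHANNEL_Y, y_plane);

    /* Render branch: placeholder for whatever node/user kernel hands the
     * same nv12 buffer to the GPU render node (question 2), e.g.:
     *   vx_node render = my_render_consumer_node(graph, nv12);
     * where my_render_consumer_node is hypothetical. */

    if (vxVerifyGraph(graph) == VX_SUCCESS)
        vxProcessGraph(graph);

    vxReleaseNode(&cv_front);
    vxReleaseNode(&convert);
    vxReleaseImage(&y_plane);
    vxReleaseImage(&nv12);
    vxReleaseImage(&yuv422);
    vxReleaseGraph(&graph);
    vxReleaseContext(&context);
    return 0;
}
```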
Please advise on the supported nodes, buffer formats, and recommended pipeline architecture for this use case.