
Help: How are the input arguments parsed in the encodedecode demo code of the DM368 SDK? :) Thanks



Hello TI folks,

   While reading the static Void parseArgs(Int argc, Char *argv[], Args *argsp) function, I went through the argument-parsing code. I basically understand the other input options, but how should the "d" in it be understood?

Every other short option has a corresponding long option and is handled somewhere in the parsing code.

Thanks!

const Char shortOptions[] = "y:r:b:v:dpI:kt:oh";

static Void parseArgs(Int argc, Char *argv[], Args *argsp)
{
    const Char shortOptions[] = "y:r:b:v:dpI:kt:oh";
    const struct option longOptions[] = {
        {"display_standard", required_argument, NULL, 'y'},
        {"resolution", required_argument, NULL, 'r'},
        {"bitrate", required_argument, NULL, 'b'},
        {"videocodec", required_argument, NULL, 'v'},
        {"passthrough", no_argument, NULL, 'p'},
        {"video_input", required_argument, NULL, 'I'},
        {"keyboard", no_argument, NULL, 'k'},
        {"time", required_argument, NULL, 't'},
        {"osd", no_argument, NULL, 'o'},
        {"help", no_argument, NULL, 'h'},
        {0, 0, 0, 0}
    };
    Int index;
    Int c;

    for (;;) {
        c = getopt_long(argc, argv, shortOptions, longOptions, &index);

        if (c == -1) {
            break;
        }

        switch (c) {
            case 0:
                break;

            case 'y':
                switch (atoi(optarg)) {
                    case 1:
                        argsp->videoStd = VideoStd_D1_NTSC;
                        argsp->videoStdString = "D1 NTSC";
                        break;
                    case 2:
                        argsp->videoStd = VideoStd_D1_PAL;
                        argsp->videoStdString = "D1 PAL";
                        break;
                    case 3:
                        argsp->videoStd = VideoStd_720P_60;
                        argsp->videoStdString = "720P 60Hz";
                        break;
                    case 7:
                        argsp->videoStd = VideoStd_480P;
                        argsp->videoStdString = "480P 60Hz";
                        break;
                    default:
                        fprintf(stderr, "Unsupported display resolution\n\n");
                        usage();
                        exit(EXIT_FAILURE);
                }
                break;

            case 'I':
                switch (atoi(optarg)) {
                    case 1:
                        argsp->videoInput = Capture_Input_COMPOSITE;
                        break;
                    case 2:
                        argsp->videoInput = Capture_Input_SVIDEO;
                        break;
                    case 3:
                        argsp->videoInput = Capture_Input_COMPONENT;
                        break;
                    case 4:
                        argsp->videoInput = Capture_Input_CAMERA;
                        break;
                    default:
                        fprintf(stderr, "Unknown video input\n");
                        usage();
                        exit(EXIT_FAILURE);
                }
                break;

            case 'r':
            {
                if (sscanf(optarg, "%ldx%ld", &argsp->imageWidth,
                           &argsp->imageHeight) != 2) {
                    fprintf(stderr, "Invalid resolution supplied (%s)\n",
                            optarg);
                    usage();
                    exit(EXIT_FAILURE);
                }
                /* Sanity check resolution */
                if (argsp->imageWidth < 2UL || argsp->imageHeight < 2UL ||
                    argsp->imageWidth > VideoStd_720P_WIDTH ||
                    argsp->imageHeight > VideoStd_720P_HEIGHT) {
                    fprintf(stderr, "Video resolution must be between %dx%d "
                            "and %dx%d\n", 2, 2, VideoStd_720P_WIDTH,
                            VideoStd_720P_HEIGHT);
                    exit(EXIT_FAILURE);
                }
                break;
            }

            case 'b':
                argsp->videoBitRate = atoi(optarg);
                argsp->videoBitRateString = optarg;
                break;

            case 'v':
                if (strcmp(optarg, "h264") == 0) {
                    argsp->videoCodec = H264;
                    argsp->videoCodecString = optarg;
                }
                else if (strcmp(optarg, "mpeg4") == 0) {
                    argsp->videoCodec = MPEG4;
                    argsp->videoCodecString = optarg;
                }
                else if (strcmp(optarg, "mpeg2") == 0) {
                    argsp->videoCodec = MPEG2;
                    argsp->videoCodecString = optarg;
                }
                else {
                    usage();
                    exit(EXIT_FAILURE);
                }
                break;

            case 'p':
                argsp->passThrough = TRUE;
                break;

            case 'k':
                argsp->keyboard = TRUE;
                break;

            case 't':
                argsp->time = atoi(optarg);
                break;

            case 'o':
                argsp->osd = TRUE;
                break;

            case 'h':
                usage();
                exit(EXIT_SUCCESS);

            default:
                usage();
                exit(EXIT_FAILURE);
        }
    }

    if ((argsp->videoStd == VideoStd_480P) && (argsp->imageWidth == 0)) {
        argsp->imageWidth  = VideoStd_480P_WIDTH;
        argsp->imageHeight = VideoStd_480P_HEIGHT;
    }

    /*
     * If video input is not set, set it to the default based on display
     * video standard.
     */
    if (argsp->videoInput == Capture_Input_COUNT) {
        switch (argsp->videoStd) {
            case VideoStd_D1_NTSC:
            case VideoStd_D1_PAL:
                argsp->videoInput = Capture_Input_COMPOSITE;
                break;
            case VideoStd_720P_60:
            case VideoStd_480P:
                argsp->videoInput = Capture_Input_COMPONENT;
                break;
            default:
                fprintf(stderr, "Unknown display standard\n");
                usage();
                exit(EXIT_FAILURE);
                break;
        }
    }
}

  • Hello,

    encodedecode.txt contains the following description:

           -d, --deinterlace
                 This option enables interlacing artifact removal on the captured
                 frames using the resizer peripheral before encoding the frames.

  • Oh, now I understand what it means.

    Does that mean the demo code does not support this feature?

    :)

  • Hello,

        Which version of the DVSDK are you using?

        I am using ti-dvsdk_dm368-evm_4_02_00_06\dvsdk-demos_4_02_00_01

        In its encodedecode.txt I only see the following information:

    NAME
           encodedecode - encode and decode video
    
    SYNOPSIS
           encodedecode [options...]
    
    DESCRIPTION
       This demo uses the Codec Engine to encode data from the capture device
       into an intermediate buffer before the data is decoded to the display
       framebuffer.
    
           The DM365MM and CMEM kernel modules need to be inserted for this demo
           to run.  Use the script 'loadmodule-rc' in the DVSDK to make sure both
           kernel modules are loaded with adequate parameters.
    
    OPTIONS
           -y <1-7>, --display_standard <1-7>
                 Sets the resolution of the display. If the captured resolution
                 is larger than the display it will be center clamped, and if it
                 is smaller the image will be centered.
    
                        1       D1 @ 30 fps (NTSC) 
                        2       D1 @ 25 fps (PAL)
                        3       720P @ 60 fps      [Default]                   
                        7       480P @ 60 fps 
    
           -v <videocodec>, --videocodec <h264 or mpeg4 or mpeg2>
                 The video codec to be used for encode and decode
    
           -r <resolution>, --resolution <resolution>
                 The resolution of video to encode and decode in the format
                 'width'x'height'. Default is the resolution of the input video
                 standard detected.
    
           -b <bit rate>, --bitrate <bit rate>
                 This option sets the bit rate with which the video will be
                 encoded. Use a negative value for variable bit rate. Default is
                 variable bit rate.
    
           -p, --passthrough
                 Pass the video through from capture device to display device
                 without encoding or decoding the data.
    
           -I, --video_input
    		     Video input source to use.
                     1       Composite                                                       
                     2       S-video                                                         
                     3       Component                                                       
                     4       Imager/Camera - for DM368  
                 When not specified, the video input is chosen based on the display 
                 video standard selected. NTSC/PAL use Composite, and 480P/720P use 
                 Component.
    
           -k, --keyboard
                 Enables the keyboard input mode which lets the user input
                 commands using the keyboard in addition to the QT-based OSD
                 interface. At the prompt type 'help' for a list of available
                 commands.
    
           -t <seconds>, --time <seconds>
                 The number of seconds to run the demo. Defaults to infinite time.
    
           -o, --osd
                 Enables the On Screen Display for configuration and data 
                 visualization using a QT-based UI. If this option is not passed, 
                 the data will be output to stdout instead.
    
           -h, --help
                 This will print the usage of the demo.

        Question 1: I cannot find a "d" option in it.

        Question 2: Does this encodedecode example make use of the framebuffer? From the material I have read, the framebuffer should be involved when displaying. Is that the case in this example? The sentence highlighted above ("...decoded to the display framebuffer") - does it mean the fb is actually used, or not?

        Or does the call happen at a lower level that I have not reached yet?

        I do not quite understand this.

        Many thanks. :)

  • Hello,

    Sorry - the encodedecode.txt I was looking at came from an older DVSDK version.

    As I recall, for interlaced input on the DM36x demos, e.g. standard-definition CVBS input, the driver drops one field and then vertically scales the remaining field up to a full frame. This is hard-wired in the driver.

  • Hello,

    Question 2: Does this encodedecode example make use of the framebuffer? From the material I have read, the framebuffer should be involved when displaying. Is that the case in this example? The sentence highlighted above - does it mean the fb is actually used, or not?

    【Chris】Below is what I found in LSP 2.10 DaVinci Linux Drivers (Rev. A). By framebuffer, do you mean FBDEV?

    · OSD0 and OSD1 windows are controlled only by FBDEV interface
    · VID0 and VID1 windows can be controlled by both V4L2 and FBDEV interface

    http://processors.wiki.ti.com/index.php/DaVinci_PSP_03.01_Linux_Installation_User_Guide#Video_drivers

    The video display drivers and some other video related components in PSP 03.01 are an up-port of those present in LSP 2.10. For such components, the usage documentation provided with LSP 2.10 will apply to PSP 3.01 as well.

  • Section 7.3.2 of 《DAVINCI技术剖析及实战应用开发指南》 gives the following description:

    The framebuffer facilities are all compiled into the kernel. The framebuffer is the interface Linux provides for display devices: it is a device abstraction of the display memory, allowing user-space applications in graphics mode to read and write the display buffer directly, in an abstract and uniform way.

    From a developer's point of view, the framebuffer is a block of display memory: writing data in the proper format into that memory means outputting to the screen.

    The application keeps writing data into the framebuffer, and the display controller automatically fetches the data from it and shows it. All graphics are stored in the same framebuffer in shared memory, from which the display hardware continuously reads data for display.

  • The encodedecode demo code contains the following:

    /**
     * @brief Display standards supported on Linux (v4l2 and fbdev).
     */
    typedef enum {
        /** @brief v4l2 video standard */
        Display_Std_V4L2 = 0,

        /** @brief Fbdev video standard */
        Display_Std_FBDEV,

        Display_Std_COUNT
    } Display_Std;

    And also:

    /** @brief Name of fbdev or v4l2 display device to use.
     *  @remarks Only applicable on Linux.
     */
    Char *displayDevice;

    This shows that the DVSDK supports two kinds of video display: one is fb, i.e. the framebuffer, and the other is V4L2.

    1. Question: What is the difference between these two?

    2. Question: Is V4L2 the DVSDK default? Why?

  • Hello,

    FBDEV and V4L2 are both standard Linux interfaces. FBDEV is for display only, while V4L2 supports both display and capture. For controlling a video window, there is essentially little difference between using FBDEV and V4L2.

    · OSD0 and OSD1 windows are controlled only by FBDEV interface
    · VID0 and VID1 windows can be controlled by both V4L2 and FBDEV interface