TMS320C6678: Inter-core and intra-core communication questions

Part Number: TMS320C6678

0. Preface

While doing multicore programming on the C6678, I have run into some questions about communication and would appreciate answers from the engineers here.

1. Communication between multiple cores

Take two cores (core0 and core1) as an example; they transfer data using ListMP.

An excerpt from the core0 and core1 .cfg files:
var SHAREDMEM = 0x0C000000;
var SHAREDMEMSIZE = 0x40000;
var SDDR3_BASE = 0x80000000;
var SDDR3_SIZE = 0x50000;
/*
 *  Need to define the shared region. The IPC modules use this
 *  to make portable pointers. All processors need to add this
 *  call with their base address of the shared memory region.
 *  If the processor cannot access the memory, do not add it.
 */

var SharedRegion = xdc.useModule('ti.sdo.ipc.SharedRegion');
SharedRegion.numEntries = 2;
SharedRegion.translate = false;
SharedRegion.setEntryMeta(0,
    { base: SHAREDMEM,
      len: SHAREDMEMSIZE,
      ownerProcId: 0,
      isValid: true,
      cacheEnable: true,
      cacheLineSize: 64,
      createHeap: true,
      name: "MSMC_SHARED",
     });
SharedRegion.setEntryMeta(1,
    { base: SDDR3_BASE,
      len: SDDR3_SIZE,
      ownerProcId: 0,
      isValid: true,
      cacheEnable: true,
      cacheLineSize: 128,
      createHeap: true,
      name: "DDR3_SHARED",
     }); 
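
For reference, the ListMP instance referred to below as lmpIR_handle is created/opened roughly as follows. This is only a minimal sketch, assuming core0 is the creator, that the instance lives in SharedRegion 0, and that the usual IPC headers (e.g. ti/ipc/ListMP.h) are included; the instance name "lmpIR" is a placeholder.

// core0: create the shared list in region 0 (MSMC_SHARED above)
ListMP_Params listParams;
ListMP_Params_init(&listParams);
listParams.regionId = 0;
listParams.name     = "lmpIR";
lmpIR_handle = ListMP_create(&listParams);

// core1: open the same list by name, retrying until core0 has created it
while (ListMP_open("lmpIR", &lmpIR_handle) < 0) {
    Task_sleep(1);
}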


1.1 Case 1

Assume the data structure to be transferred is defined as follows:
// IPC_Struct
typedef struct tag_ImgIR
{
    UInt8 *pImg;
    int width;
    int height;
} ImgIR;

typedef struct tag_imgIRLmp
{
    ListMP_Elem header;
    ImgIR ir;       // infrared image structure
    Int InitFlag;   // 0: tracking state, 1: initialization frame
} ImgIRLMP;


Partial core0 code is shown below; the ListMP handle is lmpIR_handle (same below):
/*
CORE 0
*/

ImgIRLMP *pImgIRLMP;
// Allocation
pImgIRLMP = (ImgIRLMP *)Memory_alloc(0, sizeof(ImgIRLMP), L1_CACHELINE_SIZE, &eb);
// Fill Data
pImgIRLMP->InitFlag = 10086;
// ... other assignments omitted

// Put Elem
ListMP_putHead(lmpIR_handle, (ListMP_Elem *)pImgIRLMP);

Partial core1 code:
/*
CORE 1
*/

ImgIRLMP *pImgIRLMP;
while((pImgIRLMP = (ImgIRLMP *)ListMP_getTail(lmpIR_handle)) == NULL);

Questions:
1. Can `pImgIRLMP` be allocated from `heap0`?
2. Should memory be allocated for `pImg` inside `ImgIR`?
3. Should the memory allocated for `pImgIRLMP` (and any other such variables) be freed? If so, when and how should it be freed, and on which core: core0 or core1?
4. Is a `Cache_wb` needed before putting an element on the list, and a `Cache_inv` after reading it back? If so, is operating on `pImgIRLMP` alone enough, or must `pImg` inside `ImgIR` be handled as well?


1.2 Case 2

Assume the data structure to be transferred is:
// IPC_Struct
typedef struct tag_ImgIR
{
    UInt8 *pImg;
    int width;
    int height;
} ImgIR;

typedef struct tag_imgIRLmp
{
    ListMP_Elem header;
    ImgIR *pir;  // a pointer this time, unlike case 1
    Int InitFlag;
} ImgIRLMP;

The core0 and core1 code is the same as in case 1.

Questions:
1. Can `pImgIRLMP` be allocated from `heap0`?
2. Should memory be allocated for `pir` inside `ImgIRLMP`, and for `pImg` inside `ImgIR`?
3. Should the memory allocated for `pImgIRLMP` (and any other such variables) be freed? If so, when and how should it be freed, and on which core: core0 or core1?
4. Is a `Cache_wb` needed before putting an element on the list, and a `Cache_inv` after reading it back? If so, is operating on `pImgIRLMP` alone enough, or must `pir` inside `ImgIRLMP` and `pImg` inside `ImgIR` be handled as well?


2. Intra-core interaction within a single core

Take core0 as an example: different .cpp files exchange data through a Queue.

An excerpt from the core0 .cfg file:
var SMSMC_BASE = 0x0C000000;
var SMSMC_SIZE = 0x00040000;
var SDDR3_BASE = 0x80000000;
var SDDR3_SIZE = 0x40000000;

var SharedRegion = xdc.useModule('ti.sdo.ipc.SharedRegion');
SharedRegion.numEntries = 2;
SharedRegion.translate = false;
SharedRegion.setEntryMeta(0,
    { base: SMSMC_BASE,
      len: SMSMC_SIZE,
      ownerProcId: 0,
      isValid: true,
      cacheEnable: true,
      cacheLineSize: 64,
      createHeap: true,
      name: "MSMC_SHARED",
     });
SharedRegion.setEntryMeta(1,
    { base: SDDR3_BASE,
      len: SDDR3_SIZE,
      ownerProcId: 0,
      isValid: true,
      cacheEnable: true,
      cacheLineSize: 128,
      createHeap: true,
      name: "DDR3_SHARED",
     }); 

Assume the data structure to be transferred is:
typedef struct tag_DataPtQueElem
{
    Queue_Elem header;
    void *pIr;
    void *pImg2;
    void *pImg3;
    int frame_id;
} DataPtQueElem;

core0 sends data in send.cpp; the Queue handle is hDataPtQue (same below):
/*
send.cpp
*/

// global variables
#pragma DATA_SECTION(hw_image_data,".image");
uint8_t hw_image_data[HW_IMAGE_DATA_SIZE];
#pragma DATA_SECTION(my_image2, ".image")
uint8_t my_image2[MY_IMAGE_DATA_SIZE2]; 
#pragma DATA_SECTION(my_image3, ".image")
uint8_t my_image3[MY_IMAGE_DATA_SIZE3];

void send_func()
{
    DataPtQueElem *pElem;
    pElem = (DataPtQueElem*)Memory_alloc(0, sizeof(DataPtQueElem), L1_CACHELINE_SIZE, NULL);
    pElem->pIr = hw_image_data;
    pElem->pImg2 = my_image2;
    pElem->pImg3 = my_image3;
    pElem->frame_id = 10086;
    Queue_enqueue(hDataPtQue, (Queue_Elem *)pElem);
}

core0 receives the data in recv.cpp:
 
/*
recv.cpp
*/
DataPtQueElem *pElem;
pElem = (DataPtQueElem *)Queue_dequeue(hDataPtQue);

Questions:
1. Can `pElem` be allocated from `heap0`?
2. Should memory be allocated for `pIr`, `pImg2`, and `pImg3` inside `DataPtQueElem`?
3. Should the memory allocated for `pElem` (and any other such variables) be freed? If so, when and how should it be freed, and where: in send.cpp or in recv.cpp?
4. Is a `Cache_wb` needed before enqueueing, and a `Cache_inv` after dequeueing? If so, is operating on `pElem` alone enough, or must `pIr`, `pImg2`, and `pImg3` inside `DataPtQueElem` be handled as well?
  • We have received your case. The investigation will take some time; thank you for your patience.

  • 1. Can `pImgIRLMP` be allocated from `heap0`?

    Could you share the linker command file you are using? Also, could you confirm whether heap0 refers to core0's own heap memory or to the shared memory space?

  • Sure. Here is core0's linker.cmd:

    //// core0 
    /*
     * Do not modify this file; it is automatically generated from the template
     * linkcmd.xdt in the ti.targets.elf package and will be overwritten.
     */
    
    /*
     * put '"'s around paths because, without this, the linker
     * considers '-' as minus operator, not a file name character.
     */
    
    
    -l"D:\CCS_Project\Lab_test\IPC_core0\Debug\configPkg\package\cfg\IPC_core0_pe66.oe66"
    -l"E:\Tool\ccsv\Packages\ipc_1_25_03_15\packages\ti\sdo\ipc\lib\ipc\instrumented\ipc.ae66"
    -l"E:\Tool\ccsv\Packages\bios_6_34_04_22\packages\ti\sysbios\lib\sysbios\instrumented\sysbios.ae66"
    -l"E:\Tool\ccsv\Packages\xdctools_3_25_06_96\packages\ti\targets\rts6000\lib\ti.targets.rts6000.ae66"
    -l"E:\Tool\ccsv\Packages\xdctools_3_25_06_96\packages\ti\targets\rts6000\lib\boot.ae66"
    
    --retain="*(xdc.meta)"
    
    
    --args 0x0
    -heap  0x0
    -stack 0x1000
    
    MEMORY
    {
        MSMC_SHARED (RWX) : org = 0xc000000, len = 0x40000
        L2SRAM (RWX) : org = 0x800000, len = 0x70000
        DDR3_SHARED (RWX) : org = 0x80000000, len = 0x40000000
        MSMC (RWX) : org = 0xc040000, len = 0x40000
        DDR3 (RWX) : org = 0xc0000000, len = 0x8000000
    }
    
    /*
     * Linker command file contributions from all loaded packages:
     */
    
    /* Content from xdc.services.global (null): */
    
    /* Content from xdc (null): */
    
    /* Content from xdc.corevers (null): */
    
    /* Content from xdc.shelf (null): */
    
    /* Content from xdc.services.spec (null): */
    
    /* Content from xdc.services.intern.xsr (null): */
    
    /* Content from xdc.services.intern.gen (null): */
    
    /* Content from xdc.services.intern.cmd (null): */
    
    /* Content from xdc.bld (null): */
    
    /* Content from ti.targets (null): */
    
    /* Content from ti.targets.elf (null): */
    
    /* Content from xdc.rov (null): */
    
    /* Content from xdc.runtime (null): */
    
    /* Content from ti.targets.rts6000 (null): */
    
    /* Content from ti.sysbios.interfaces (null): */
    
    /* Content from ti.sysbios.family (null): */
    
    /* Content from ti.sysbios.hal (null): */
    
    /* Content from xdc.runtime.knl (null): */
    
    /* Content from ti.sdo.ipc.family (null): */
    
    /* Content from ti.sdo.ipc.interfaces (null): */
    
    /* Content from ti.sysbios (null): */
    
    /* Content from ti.sysbios.knl (null): */
    
    /* Content from ti.sysbios.gates (null): */
    
    /* Content from ti.sdo.utils (null): */
    
    /* Content from ti.sysbios.syncs (null): */
    
    /* Content from xdc.services.getset (null): */
    
    /* Content from ti.sysbios.xdcruntime (null): */
    
    /* Content from ti.sysbios.family.c66 (ti/sysbios/family/c66/linkcmd.xdt): */
    
    /* Content from ti.sysbios.family.c64p (null): */
    
    /* Content from ti.sysbios.family.c62 (null): */
    
    /* Content from ti.sysbios.timers.timer64 (null): */
    
    /* Content from ti.sysbios.family.c64p.tci6488 (null): */
    
    /* Content from ti.sysbios.heaps (null): */
    
    /* Content from ti.sysbios.utils (null): */
    
    /* Content from ti.catalog.c6000 (null): */
    
    /* Content from ti.catalog (null): */
    
    /* Content from ti.catalog.peripherals.hdvicp2 (null): */
    
    /* Content from xdc.platform (null): */
    
    /* Content from xdc.cfg (null): */
    
    /* Content from ti.platforms.generic (null): */
    
    /* Content from m6678_core0 (null): */
    
    /* Content from ti.sdo.ipc.heaps (null): */
    
    /* Content from ti.sdo.ipc (ti/sdo/ipc/linkcmd.xdt): */
    
    SECTIONS
    {
        ti.sdo.ipc.SharedRegion_0:  { . += 0x40000;} run > 0xc000000, type = NOLOAD
        ti.sdo.ipc.SharedRegion_1:  { . += 0x50000;} run > 0x80000000, type = NOLOAD
    }
    
    
    /* Content from ti.sdo.ipc.family.c647x (null): */
    
    /* Content from ti.sdo.ipc.notifyDrivers (null): */
    
    /* Content from ti.sdo.ipc.transports (null): */
    
    /* Content from ti.sdo.ipc.nsremote (null): */
    
    /* Content from ti.sdo.ipc.gates (null): */
    
    /* Content from configPkg (null): */
    
    
    /*
     * symbolic aliases for static instance objects
     */
    xdc_runtime_Startup__EXECFXN__C = 1;
    xdc_runtime_Startup__RESETFXN__C = 1;
    TSK_idle = ti_sysbios_knl_Task_Object__table__V + 76;
    
    SECTIONS
    {
        .text: load >> L2SRAM
        .ti.decompress: load > L2SRAM
        .stack: load > L2SRAM
        GROUP: load > MSMC
        {
            .bss:
            .neardata:
            .rodata:
        }
        .cinit: load > MSMC
        .pinit: load >> MSMC
        .init_array: load > MSMC
        .const: load >> MSMC
        .data: load >> MSMC
        .fardata: load >> MSMC
        .switch: load >> MSMC
        .sysmem: load > MSMC
        .far: load >> MSMC
        .args: load > MSMC align = 0x4, fill = 0 {_argsize = 0x0; }
        .cio: load >> MSMC
        .ti.handler_table: load > MSMC
        .c6xabi.exidx: load > MSMC
        .c6xabi.extab: load >> MSMC
        .vecs: load > L2SRAM
        xdc.meta: load > MSMC, type = COPY
    
    }
    

    And here is core1's linker.cmd:

    /*
     * Do not modify this file; it is automatically generated from the template
     * linkcmd.xdt in the ti.targets.elf package and will be overwritten.
     */
    
    /*
     * put '"'s around paths because, without this, the linker
     * considers '-' as minus operator, not a file name character.
     */
    
    
    -l"D:\CCS_Project\Lab_test\IPC_core1\Debug\configPkg\package\cfg\IPC_core1_pe66.oe66"
    -l"E:\Tool\ccsv\Packages\ipc_1_25_03_15\packages\ti\sdo\ipc\lib\ipc\instrumented\ipc.ae66"
    -l"E:\Tool\ccsv\Packages\bios_6_34_04_22\packages\ti\sysbios\lib\sysbios\instrumented\sysbios.ae66"
    -l"E:\Tool\ccsv\Packages\xdctools_3_25_06_96\packages\ti\targets\rts6000\lib\ti.targets.rts6000.ae66"
    -l"E:\Tool\ccsv\Packages\xdctools_3_25_06_96\packages\ti\targets\rts6000\lib\boot.ae66"
    
    --retain="*(xdc.meta)"
    
    
    --args 0x0
    -heap  0x0
    -stack 0x1000
    
    MEMORY
    {
        MSMC_SHARED (RWX) : org = 0xc000000, len = 0x40000
        DDR3_SHARED (RWX) : org = 0x80000000, len = 0x40000000
        MSMC (RWX) : org = 0xc080000, len = 0x40000
        DDR3 (RWX) : org = 0xc8000000, len = 0x8000000
        L2SRAM (RWX) : org = 0x800000, len = 0x60000
    }
    
    /*
     * Linker command file contributions from all loaded packages:
     */
    
    /* Content from xdc.services.global (null): */
    
    /* Content from xdc (null): */
    
    /* Content from xdc.corevers (null): */
    
    /* Content from xdc.shelf (null): */
    
    /* Content from xdc.services.spec (null): */
    
    /* Content from xdc.services.intern.xsr (null): */
    
    /* Content from xdc.services.intern.gen (null): */
    
    /* Content from xdc.services.intern.cmd (null): */
    
    /* Content from xdc.bld (null): */
    
    /* Content from ti.targets (null): */
    
    /* Content from ti.targets.elf (null): */
    
    /* Content from xdc.rov (null): */
    
    /* Content from xdc.runtime (null): */
    
    /* Content from ti.targets.rts6000 (null): */
    
    /* Content from ti.sysbios.interfaces (null): */
    
    /* Content from ti.sysbios.family (null): */
    
    /* Content from ti.sysbios.hal (null): */
    
    /* Content from xdc.runtime.knl (null): */
    
    /* Content from ti.sdo.ipc.family (null): */
    
    /* Content from ti.sdo.ipc.interfaces (null): */
    
    /* Content from ti.sysbios (null): */
    
    /* Content from ti.sysbios.knl (null): */
    
    /* Content from ti.sysbios.gates (null): */
    
    /* Content from ti.sdo.utils (null): */
    
    /* Content from ti.sysbios.syncs (null): */
    
    /* Content from xdc.services.getset (null): */
    
    /* Content from ti.sysbios.xdcruntime (null): */
    
    /* Content from ti.sysbios.family.c66 (ti/sysbios/family/c66/linkcmd.xdt): */
    
    /* Content from ti.sysbios.family.c64p (null): */
    
    /* Content from ti.sysbios.family.c62 (null): */
    
    /* Content from ti.sysbios.timers.timer64 (null): */
    
    /* Content from ti.sysbios.family.c64p.tci6488 (null): */
    
    /* Content from ti.sysbios.heaps (null): */
    
    /* Content from ti.sysbios.utils (null): */
    
    /* Content from ti.catalog.c6000 (null): */
    
    /* Content from ti.catalog (null): */
    
    /* Content from ti.catalog.peripherals.hdvicp2 (null): */
    
    /* Content from xdc.platform (null): */
    
    /* Content from xdc.cfg (null): */
    
    /* Content from ti.platforms.generic (null): */
    
    /* Content from m6678_core1 (null): */
    
    /* Content from ti.sdo.ipc.heaps (null): */
    
    /* Content from ti.sdo.ipc (ti/sdo/ipc/linkcmd.xdt): */
    
    SECTIONS
    {
        ti.sdo.ipc.SharedRegion_0:  { . += 0x40000;} run > 0xc000000, type = NOLOAD
        ti.sdo.ipc.SharedRegion_1:  { . += 0x50000;} run > 0x80000000, type = NOLOAD
    }
    
    
    /* Content from ti.sdo.ipc.family.c647x (null): */
    
    /* Content from ti.sdo.ipc.notifyDrivers (null): */
    
    /* Content from ti.sdo.ipc.transports (null): */
    
    /* Content from ti.sdo.ipc.nsremote (null): */
    
    /* Content from ti.sdo.ipc.gates (null): */
    
    /* Content from configPkg (null): */
    
    
    /*
     * symbolic aliases for static instance objects
     */
    xdc_runtime_Startup__EXECFXN__C = 1;
    xdc_runtime_Startup__RESETFXN__C = 1;
    TSK_idle = ti_sysbios_knl_Task_Object__table__V + 76;
    
    SECTIONS
    {
        .text: load >> L2SRAM
        .ti.decompress: load > L2SRAM
        .stack: load > L2SRAM
        GROUP: load > MSMC
        {
            .bss:
            .neardata:
            .rodata:
        }
        .cinit: load > MSMC
        .pinit: load >> MSMC
        .init_array: load > MSMC
        .const: load >> MSMC
        .data: load >> MSMC
        .fardata: load >> MSMC
        .switch: load >> MSMC
        .sysmem: load > MSMC
        .far: load >> MSMC
        .args: load > MSMC align = 0x4, fill = 0 {_argsize = 0x0; }
        .cio: load >> MSMC
        .ti.handler_table: load > MSMC
        .c6xabi.exidx: load > MSMC
        .c6xabi.extab: load >> MSMC
        .vecs: load > L2SRAM
        xdc.meta: load > MSMC, type = COPY
    
    }
    


    The heap0 I mentioned is core0's own heap, not the shared heap. heap0 is placed in the MSMC segment, while MSMC_SHARED is explicitly declared as SharedRegion 0 in the .cfg file.
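
    One way to confirm this distinction at run time is to ask the SharedRegion module which region, if any, a pointer belongs to. The following is only a sketch, assuming the SharedRegion runtime API (SharedRegion_getId) and allocations analogous to the experiment further down; the sizes and alignment are placeholders:

    Ptr pLocal  = Memory_alloc(0, 128, 128, &eb);        /* from heap0 (core-local MSMC segment) */
    Ptr pShared = Memory_alloc((IHeap_Handle)SharedRegion_getHeap(0),
                               128, 128, &eb);           /* from the SharedRegion 0 heap */

    /* SharedRegion_getId() returns SharedRegion_INVALIDREGIONID for an address
       that falls outside every configured shared region */
    System_printf("pLocal  -> region %d\n", SharedRegion_getId(pLocal));
    System_printf("pShared -> region %d\n", SharedRegion_getId(pShared));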

    An excerpt from core0's map file:

             name            origin    length      used     unused   attr    fill
    ----------------------  --------  ---------  --------  --------  ----  --------
      L2SRAM                00800000   00070000  00025d20  0004a2e0  RW X
      MSMC_SHARED           0c000000   00040000  00040000  00000000  RW X
      MSMC                  0c040000   00040000  00017630  000289d0  RW X
      DDR3_SHARED           80000000   40000000  00050000  3ffb0000  RW X
      DDR3                  c0000000   08000000  00000000  08000000  RW X

    An excerpt from core1's map file:

             name            origin    length      used     unused   attr    fill
    ----------------------  --------  ---------  --------  --------  ----  --------
      L2SRAM                00800000   00060000  000268e0  00039720  RW X
      MSMC_SHARED           0c000000   00040000  00040000  00000000  RW X
      MSMC                  0c080000   00040000  000175d4  00028a2c  RW X
      DDR3_SHARED           80000000   40000000  00050000  3ffb0000  RW X
      DDR3                  c8000000   08000000  00000000  08000000  RW X


    I also ran an interesting experiment:

    The structures are still:

    typedef struct tag_ImgIR
    {
        UInt8 *img;
        int width;
        int height;
    } ImgIR;
    
    typedef struct tag_imgIRLmp
    {
        ListMP_Elem header;
        ImgIR *pir;
        Int InitFlag;
    } ImgIRLMP;

    core0 tries to load the list and send the data from the shared heap and from its local heap, respectively:

    #if USE_SHARED_REGION
    	pImgIR = (ImgIRLMP *)Memory_alloc((IHeap_Handle)SharedRegion_getHeap(0), sizeof(ImgIRLMP), L1_CACHELINE_SIZE, &eb);
    	pImgIR->pir = (ImgIR *)Memory_alloc((IHeap_Handle)SharedRegion_getHeap(0), sizeof(ImgIR), L1_CACHELINE_SIZE, &eb);
    #else
    	pImgIR = (ImgIRLMP *)Memory_alloc(0, sizeof(ImgIRLMP), L1_CACHELINE_SIZE, &eb);
    	// xxx: if allocated from the core's own heap0, allocating space for pImgIR->pir causes an error on the receiving side; otherwise it does not
    	pImgIR->pir = (ImgIR *)Memory_alloc(0, sizeof(ImgIR), L1_CACHELINE_SIZE, &eb);
    #endif
    
        // Fill Data
    	pImgIR->InitFlag = 10086;
    	// ... (assignments to other fields of the project's full struct omitted;
    	//      the simplified struct above has no Roi member)
    	pImgIR->pir->height = 1;
    	pImgIR->pir->width = 2;
    	pImgIR->pir->img = NULL;
    
    	Cache_wb(pImgIR->pir, sizeof(ImgIR), Cache_Type_ALLD, TRUE);
    	Cache_wb(pImgIR, sizeof(ImgIRLMP), Cache_Type_ALLD, TRUE);
    
    
    	Int putRes = ListMP_putHead(lmpIR_handle, (ListMP_Elem *)pImgIR);
    	if(putRes == ListMP_S_SUCCESS) {
    		// element successfully placed on the list
    		System_printf("Put header Success\n");
    	}
    	else {
    		System_printf("Failed!!!\n");
    	}

    core1 retrieves the data:

    while((pImgIR = (ImgIRLMP *)ListMP_getTail(lmpIR_handle)) == NULL);
    
    Cache_inv(pImgIR, sizeof(ImgIRLMP), Cache_Type_ALLD, TRUE);
    Cache_inv(pImgIR->pir, sizeof(ImgIR), Cache_Type_ALLD, TRUE);

    With the shared heap, the data is loaded onto the list without problems and core1 retrieves it successfully.

    With core0's local heap, however:

    If only pImgIR is allocated there, pImgIR can still be placed on the list and core1 can still retrieve it successfully.

    If pImgIR->pir is also allocated there, core0 can still put pImgIR on the list, but core1 reports an error when it tries to take the element off the list:

    [TMS320C66x_1] ti.sdo.ipc.ListMP: line 424: assertion failure: A_nullPointer: Pointer is null
    [TMS320C66x_1] xdc.runtime.Error.raise: terminating execution

    What is the reason for this? Why does communication work when only pImgIR is allocated from core0's heap, but fail as soon as pImgIR->pir is also allocated there?

  • For case 1 (1.1) and case 2 (1.2):

    1. Can `pImgIRLMP` be allocated from `heap0`?

    Yes, pImgIRLMP can be allocated from heap0, provided heap0 is created within the shared memory region.

    2. Should memory be allocated for `pImg` inside `ImgIR`?

    Yes. pImg inside ImgIR is a pointer, so its memory must also be allocated from the shared heap.

    3. Should the memory allocated for `pImgIRLMP` (and any other such variables) be freed? If so, when and how should it be freed, and on which core: core0 or core1?

    Yes, all dynamically allocated memory should be freed once it is no longer in use. It should be freed on core1.

    4. Is a `Cache_wb` needed before putting an element on the list, and a `Cache_inv` after reading it back? If so, is operating on `pImgIRLMP` alone enough, or must `pir` inside `ImgIRLMP` and `pImg` inside `ImgIR` be handled as well?

    Yes. core0 should write back, and core1 should invalidate both the structure and the buffer, as in the sketch below.
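
    A minimal sketch of the pattern described in the answers above for case 1, assuming both the element and the pixel buffer come from the SharedRegion 0 heap (createHeap: true); IMG_SIZE, the 128-byte alignment, the width/height values, and the error-block handling are placeholders, not taken from the original project:

    /* core0: allocate from the shared-region heap, write back, then put */
    Error_Block eb;
    Error_init(&eb);
    IHeap_Handle shHeap = (IHeap_Handle)SharedRegion_getHeap(0);
    ImgIRLMP *p = (ImgIRLMP *)Memory_alloc(shHeap, sizeof(ImgIRLMP), 128, &eb);
    p->ir.pImg  = (UInt8 *)Memory_alloc(shHeap, IMG_SIZE, 128, &eb);
    p->ir.width = 640;  p->ir.height = 480;  p->InitFlag = 1;
    Cache_wb(p->ir.pImg, IMG_SIZE, Cache_Type_ALLD, TRUE);
    Cache_wb(p, sizeof(ImgIRLMP), Cache_Type_ALLD, TRUE);
    ListMP_putHead(lmpIR_handle, (ListMP_Elem *)p);

    /* core1: get, invalidate, use, then free back to the same heap */
    while ((p = (ImgIRLMP *)ListMP_getTail(lmpIR_handle)) == NULL);
    Cache_inv(p, sizeof(ImgIRLMP), Cache_Type_ALLD, TRUE);
    Cache_inv(p->ir.pImg, IMG_SIZE, Cache_Type_ALLD, TRUE);
    /* ... consume the image ... */
    Memory_free(shHeap, p->ir.pImg, IMG_SIZE);
    Memory_free(shHeap, p, sizeof(ImgIRLMP));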

    For 2, intra-core interaction within a single core:

    1. Can `pElem` be allocated from `heap0`?

    Yes, you can allocate it from heap0, since this is interaction within a single core.

    2. Should memory be allocated for `pIr`, `pImg2`, and `pImg3` inside `DataPtQueElem`?

    No, because they point to static buffers.

    3. Should the memory allocated for `pElem` (and any other such variables) be freed? If so, when and how should it be freed, and where: in send.cpp or in recv.cpp?

    Yes; free it in recv.cpp.

    4. Is a `Cache_wb` needed before enqueueing, and a `Cache_inv` after dequeueing? If so, is operating on `pElem` alone enough, or must `pIr`, `pImg2`, and `pImg3` inside `DataPtQueElem` be handled as well?

    No; that is mainly needed in multicore scenarios. A sketch of the single-core receive path follows.
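
    A minimal sketch of the single-core receive path implied by the answers above, assuming recv.cpp runs in a SYS/BIOS Task and uses only ti.sysbios.knl.Queue and xdc.runtime.Memory; no cache maintenance is performed because producer and consumer are on the same core:

    /* recv.cpp: Queue_dequeue() is not guaranteed to return NULL on an empty
       queue, so guard it with Queue_empty() */
    DataPtQueElem *pElem;
    while (Queue_empty(hDataPtQue)) {
        Task_sleep(1);                       /* wait for send_func() to enqueue */
    }
    pElem = (DataPtQueElem *)Queue_dequeue(hDataPtQue);
    /* ... process pElem->pIr, pElem->pImg2, pElem->pImg3, pElem->frame_id ... */
    Memory_free(0, pElem, sizeof(DataPtQueElem));    /* freed on the consumer side */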

  • Thank you for the detailed reply. Could you also help with my second question?

    What is the reason for this? Why does communication work when only pImgIR is allocated from core0's heap, but fail as soon as pImgIR->pir is also allocated there? Do you think memory for inter-core communication can be allocated from a core's local heap at all?
  • If it helps, I can also package up the whole project and post it.