Streaming channels and "plain" channels
-
- Member++
- Posts: 19
- Joined: Tue Sep 27, 2011 11:49 am
Got it. Thank you Bianco for all this information!
-
- XCore Expert
- Posts: 754
- Joined: Thu Dec 10, 2009 6:56 pm
One streaming channel can provide up to 400 MB/s of bandwidth (as much as the ISA can shove).
Of course this speed is not achievable in real applications, but it does show that there is no lack of bandwidth. You might be able to multiplex several data streams onto one streaming channel: for example, send a control token followed by the data (as 8-bit or 32-bit transfers) to tell the streams apart. There are control token values reserved for application use.
This shifts much of the latency burden onto the application code (the multiplexer and demultiplexer have to be implemented in software), so it is up to you to find out whether that still meets your requirements.
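A minimal sketch of that idea in XC, assuming the soutct()/sinct() streaming-channel intrinsics from <xs1.h>; the tag values STREAM_A and STREAM_B are made up for illustration, so check which control token values are actually free for application use:

#include <xs1.h>

// Illustrative tag values, not taken from xs1.h.
#define STREAM_A 0x10
#define STREAM_B 0x11

// Merge two word streams onto one streaming channel, tagging each word
// with a control token that identifies its source stream.
void mux(streaming chanend c_out,
         streaming chanend c_a, streaming chanend c_b)
{
    while (1) {
        unsigned data;
        select {
        case c_a :> data:
            soutct(c_out, STREAM_A);  // tag first...
            c_out <: data;            // ...then the data word
            break;
        case c_b :> data:
            soutct(c_out, STREAM_B);
            c_out <: data;
            break;
        }
    }
}

// Split the tagged stream back out onto two separate channels.
void demux(streaming chanend c_in,
           streaming chanend c_a, streaming chanend c_b)
{
    while (1) {
        unsigned char tag = sinct(c_in);  // read the tag first
        unsigned data;
        c_in :> data;
        if (tag == STREAM_A) c_a <: data;
        else                 c_b <: data;
    }
}

Note that tagging every single word roughly doubles the token traffic; tagging whole blocks instead would cut that overhead at the cost of some latency.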
-
- XCore Expert
- Posts: 956
- Joined: Fri Dec 11, 2009 3:53 am
- Location: Sweden, Eskilstuna
If the process is pipeline-like, you will probably not connect all the threads at random. Within each core you can have many more streaming channels, or you can use a shared memory space within that core.
Memory accesses and instruction fetches never collide in a way that starves the instruction buffer, but the buffer can still run empty if you place several load/store instructions back to back in a thread.
Anyway, if it looks like a pipeline, you can use something more like a token-ring structure, which greatly reduces the number of streaming channels needed in the interconnect. It is also possible for many threads to share the same channel end, if you do not like a ring solution and need more of a tree solution.
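For example, a minimal XC sketch of such a ring (function names and the placeholder processing are illustrative): each node owns one input and one output channel, so N nodes use only N streaming channels instead of one per pair, and one node injects the initial token so the ring does not deadlock.

void ring_start(streaming chanend c_in, streaming chanend c_out)
{
    unsigned token = 0;
    c_out <: token;              // inject the first token into the ring
    while (1) {
        c_in :> token;           // token has been all the way around
        c_out <: token + 1;      // placeholder processing, then pass it on
    }
}

void ring_node(streaming chanend c_in, streaming chanend c_out)
{
    while (1) {
        unsigned token;
        c_in :> token;           // wait for the token from the previous node
        c_out <: token;          // do this node's work, then forward it
    }
}

int main(void)
{
    streaming chan c[4];
    par {
        ring_start(c[3], c[0]);
        ring_node(c[0], c[1]);
        ring_node(c[1], c[2]);
        ring_node(c[2], c[3]);   // last node closes the loop
    }
    return 0;
}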
If you are not afraid of assembly you can do much more with channels than in XC, and the two are easy to mix; you do not need to write all of your code in assembly.
And as mentioned above, you can use both the END and the PAUSE control tokens to open up the interconnect; a route is not hard-wired into the program. You can also free channel resources on the fly, making them available from the channel pool again.
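As a rough sketch of the END-token handshake (assuming soutct()/schkct() and XS1_CT_END from <xs1.h>; BURST and the function names are made up for illustration): after a burst of data, both sides exchange END tokens so the route through the switch can be released for other traffic.

#include <xs1.h>

#define BURST 16                 // illustrative burst length

void send_burst(streaming chanend c, unsigned data[BURST])
{
    for (unsigned i = 0; i < BURST; i++)
        c <: data[i];            // the burst holds the route open

    soutct(c, XS1_CT_END);       // close our direction of the route
    schkct(c, XS1_CT_END);       // wait for the receiver to close its side
}

void receive_burst(streaming chanend c, unsigned data[BURST])
{
    for (unsigned i = 0; i < BURST; i++)
        c :> data[i];

    schkct(c, XS1_CT_END);       // accept the sender's END token
    soutct(c, XS1_CT_END);       // then close our own direction
}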
Probably not the most confused programmer anymore on the XCORE forum.