Bidirectional Channels

Technical questions regarding the XTC tools and programming with XMOS.
rp181
Respected Member
Posts: 395
Joined: Tue May 18, 2010 12:25 am

Bidirectional Channels

Post by rp181 »

I have a chain of processors, with a streaming channel configuration like this:

core 0 --> core 1 --> core 2 --> core 3 --> core 4

Cores 0-3 are constantly streaming data to core 4, but core 4 will sometimes have to send data back to all of the other cores. What happens when there is data waiting to be read on a channel and data is also written to it from the other side? Are there two separate buffers for input and output on each end?

I wrote a quick test program and simulated it, and it seems to work, but I just wanted to make sure I'm not missing anything.


segher
XCore Expert
Posts: 844
Joined: Sun Jul 11, 2010 1:31 am

Post by segher »

Yes, send and receive channel ends are completely independent
(other than sharing a resource id :-) )

It's a little different on xlinks: suppose A sends a lot of data to B; then B has to send a lot of credits back to A. This affects the achievable bandwidth from B to A (albeit not by much), but it will never make things deadlock, which is your prime concern if I understand correctly?
rp181
Respected Member
Posts: 395
Joined: Tue May 18, 2010 12:25 am

Post by rp181 »

Correct! Another question: when using a select guard for two channel reads, how does one guarantee that one channel will be serviced if data is available? I have it as my first case, but for some reason execution never enters that case, always the other one.

Is there a way to check if the streaming channel has data available?
segher
XCore Expert
Posts: 844
Joined: Sun Jul 11, 2010 1:31 am

Post by segher »

There are some program switch registers that can show you
when data is going through the channel ends, but you should
only use those for debug (very handy for that though :-) )

It sounds like what is happening is that you are overloading
your thread by giving it more data than it can handle from
two producer threads. The event system does not do load
balancing; it only provides very low latency. So if there is
always data available on channel A, and you select on channels
A and B, you will always get channel A.

Most of the time this isn't something you need to worry about;
most of the time you have a bigger problem than fairness if
you cannot handle all incoming data. But if you do, you can
do something like this:

Code:

// Assuming: streaming chanend A, B; int dataA, dataB;
select {
    case A :> dataA:
        // handle A, then poll B without blocking
        select {
            case B :> dataB:
                // handle B
                break;
            default:
                break;   // B had nothing ready
        }
        break;

    case B :> dataB:
        // handle B, then poll A without blocking
        select {
            case A :> dataA:
                // handle A
                break;
            default:
                break;   // A had nothing ready
        }
        break;
}
That is, the outer select blocks waiting for A and B; then if
(say) it handles A, it does a non-blocking check to see whether
it can handle B as well. This guarantees (almost) 50/50 fairness.
If you have more events to load balance, or you want to give one
of them higher priority, etc., you can use more complicated
schemes as well.

Or perhaps you should use two threads to handle the incoming
data, one for each channel.
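
For the two-thread option, a minimal XC sketch might look like the following (the function and channel names are my own, not from the thread; each thread blocks only on its own channel, so neither stream can starve the other):

Code:

#include <xs1.h>

// Hypothetical consumer threads, one per incoming channel.
void consume_a(streaming chanend c) {
    int v;
    while (1) {
        c :> v;
        // handle A's data here
    }
}

void consume_b(streaming chanend c) {
    int v;
    while (1) {
        c :> v;
        // handle B's data here
    }
}

// Elsewhere, e.g. in main():
//   par {
//       consume_a(cA);
//       consume_b(cB);
//   }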
richard
Respected Member
Posts: 318
Joined: Tue Dec 15, 2009 12:46 am

Post by richard »

rp181 wrote:Correct! Another question: when using a select guard for two channel reads, how does one guarantee that one channel will be serviced if data is available?
If you just want to prioritise one case over another, you can use the ordered pragma to specify that earlier cases should be prioritised over later cases (see https://www.xmos.com/node/16662; example code is also available in the xSOFTip Browser inside the xTIMEcomposer).
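
A sketch of what that looks like (channel and variable names here are assumptions for illustration): placing the pragma immediately before the select makes the cases checked in source order when more than one event is ready.

Code:

#pragma ordered
select {
    case A :> dataA:   // listed first, so checked first when both are ready
        // handle A
        break;
    case B :> dataB:
        // handle B
        break;
}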