Distributable Task / Channels
Posted: Tue May 16, 2017 7:03 am
Hi all,
first of all I'm sorry - I realize similar questions have been asked before, but I could not really find the answer I'm looking for.
I'm currently running into the problem that on one tile I'm exceeding the number of available channel ends. I have a task running on tile 0 serving an interface array. Four of the tasks using these interfaces are running on tile 2, which exceeds the channel limit there. Therefore I had the idea of implementing a small distributable proxy running on tile 2, taking one client interface connected to the task on tile 0 and four server interfaces connecting to the client tasks on tile 2. I hoped that a distributable task serving only clients on the same tile would not consume any communication channels; instead, the one common interface to tile 0 would be shared between all the client tasks and protected by some lock. Unfortunately the number of used channel ends on tile 2 did not decrease - in fact it increased.
Just out of curiosity I also connected a client task running on a different tile to the distributable task. Except for the number of channel ends, which is still above the limit, this worked. So I guess the [[distributable]] attribute is more a hint to the compiler than a strict command (like the inline keyword in C). I assume that even in the case where all my clients run on the same tile as the proxy task, the task is not distributed.
Now my two questions:
1.) Is it possible to force the compiler to actually distribute a distributable task, and to raise an error if that's not possible (for example because clients are on different tiles)?
2.) If a task is fully distributed, am I right that it should not consume any channel ends for communication with its clients? Instead, the interface function calls are executed directly as conventional function calls (i.e. no RPC, as there would be for tasks running on a different tile)?
As a workaround I came up with this idea and would like to hear your opinion:
Instead of having a proxy task serving an interface array, I will implement a global function for each interface call. I will store the interface identifier of the connection to the server running on tile 0 in a "global" variable ("global" per tile, as there is no shared memory between tiles), together with a lock variable. Within each function I will cast the identifier back to the client interface and use it (after acquiring the lock). It's not a very elegant solution, because all the client tasks then depend on the proxy functions, but I guess it could work. Since I'm not sure whether I can just cast between the interface identifier (int) and the interface itself, I might need a pure C wrapper for each function, which is itself then implemented in XC. The C function would read the global variable and pass it as the interface to the underlying XC function. Thanks to the helper macros provided by XMOS, passing an int from C as an interface to XC is easy, and I've already done it before. What do you think?
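To make the idea concrete, here is a rough sketch of one such proxy function in plain C. Everything here is hypothetical: `do_request_xc` stands in for the real XC-side function that would receive the interface identifier (in the real project it would be implemented in XC using the XMOS interop macros), and the C11 `atomic_flag` spinlock stands in for whatever lock primitive (e.g. a hardware lock) would actually be used on the device:

```c
#include <stdatomic.h>

/* Per-tile "global" holding the resource ID of the client interface
   connected to the server task on tile 0. Set once at startup. */
static int g_server_if;

/* Lock protecting the shared interface; a C11 spinlock stands in
   for a real XMOS lock here. */
static atomic_flag g_if_lock = ATOMIC_FLAG_INIT;

/* Stand-in for the XC function that would take the identifier as an
   interface and perform the actual call over to tile 0. Placeholder
   body so the sketch is self-contained. */
static int do_request_xc(int if_id, int arg) {
    return if_id + arg; /* dummy result */
}

/* Called once during startup to publish the interface identifier. */
void proxy_set_interface(int if_id) {
    g_server_if = if_id;
}

/* One proxy function per interface call; any client task on this
   tile calls it like an ordinary function. */
int proxy_do_request(int arg) {
    while (atomic_flag_test_and_set(&g_if_lock)) {
        /* spin until the shared interface is free */
    }
    int result = do_request_xc(g_server_if, arg);
    atomic_flag_clear(&g_if_lock);
    return result;
}
```

The point of the sketch is only the structure: one stored identifier, one lock, one plain function per interface method, so no extra channel ends are allocated on the client side.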
Thanks a lot!
Best regards,
Max