Arrays in single tile

ag14774
New User
Posts: 2
Joined: Tue Nov 03, 2015 8:12 pm


Post by ag14774 »

Hi,

If I have some data that I want to split between processes on the *same* tile, which is the better approach: keep all the data in a single array in a single farmer process and have the worker processes access/modify different parts of it via a reference passed to each of them, or never store the array in the farmer at all and instead send the data to the worker threads over a streaming chan, collecting it back after the work is done? I am using the xCORE-200 eXplorerKIT. If I understand the architecture correctly, the data ends up stored in the same place in both cases, since threads on a single tile share memory (http://www.xmos.com/files/images/cXE216.png)?

Regards,
ag14774
Last edited by ag14774 on Wed Nov 04, 2015 9:54 pm, edited 1 time in total.


infiniteimprobability
XCore Legend
Posts: 1126
Joined: Thu May 27, 2010 10:08 am

Post by infiniteimprobability »

There are several ways to approach this. It all depends on how much data there is and the rates/latencies involved. Coding style comes into it too: do you want an elegant piece of CSP (https://en.wikipedia.org/wiki/Communica ... _processes) using either channels or interfaces, or something more C-style?

I personally think the ideal would be to write it using interfaces and then consider making the farmer task distributable, which allows the compiler to optimise that task out and distribute it over the clients and servers. This can save you from using a whole core just for the farmer.

But then this all depends on data sizes and rates. It may be that using streaming channels and/or references to shared buffers is the way forward for performance reasons.
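As a rough illustration of the interface approach, here is a minimal xC sketch of a distributable farmer handing out slices of the big array. The interface, names and the fixed slice size of 64 are all made up for illustration, not from any real API:

```xc
// Hypothetical interface for handing out work; names are illustrative only.
interface work_if {
  // worker j asks which slice of the big array it should process
  void get_slice(unsigned &start, unsigned &len);
};

// [[distributable]] lets the compiler inline this server into its clients
// instead of reserving a whole logical core just for the farmer.
[[distributable]]
void farmer(server interface work_if i[n], unsigned n, unsigned total) {
  unsigned next = 0;
  while (1) {
    select {
      case i[int j].get_slice(unsigned &start, unsigned &len): {
        start = next;
        len = (total - next < 64) ? (total - next) : 64;  // up to 64 items
        next += len;
        break;
      }
    }
  }
}
```

Because the task body is just a `while(1){select{...}}`, it meets the requirements for being distributable, so no core is consumed when the clients are on the same tile.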

There are some discussions on sharing memory here: http://www.xcore.com/forum/viewtopic.php?f=26&t=3061

Post by ag14774 »

Thanks for the reply.

If I were to use interfaces, that would mean that the farmer would hold a server interface. How can I use that to send a reference to the array holding the data to each worker? I want the worker threads to do some computation on the data (it could be a large volume) and I want it to happen in parallel. If I do the processing in the farmer and then mark it as distributable, does that mean that my worker threads could call the interface and the computation would happen in parallel in each worker, or would it happen sequentially, in the order the interface is called? The processed data will be written into many small arrays and then merged into the big array to prevent race conditions.

Post by infiniteimprobability »

If I were to use interfaces, that would mean that the farmer would hold a server interface.
It could be either way around. Normally the worker would be the server and do things on request, but equally the farmer could be the server and effectively provide a callback for "give me more to do".

How can I use that to send a reference of the array holding the data to each worker?
There are various examples of this. Movable pointers are the safest and most explicit approach. Either look at the xC programming guide https://download.xmos.com/XM-004440-PC- ... 03LnBkZiJd or see the link to the memory-sharing thread in my previous reply.
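For reference, the movable-pointer pattern looks roughly like this — a sketch only, with a hypothetical interface and buffer names I've invented here:

```xc
// Sketch of passing buffer ownership with a movable pointer.
// Interface/function names are hypothetical, not from the guide.
interface buf_if {
  void swap(int * movable &p);  // exchange buffer ownership with the server
};

void farmer(server interface buf_if i) {
  int buf[256];
  int * movable ptr = buf;  // the farmer owns the buffer initially
  while (1) {
    select {
      case i.swap(int * movable &p): {
        int * movable tmp = move(p);  // take the worker's buffer
        p = move(ptr);                // hand our buffer to the worker
        ptr = move(tmp);
        break;
      }
    }
  }
}
```

The compiler tracks ownership of the movable pointer, so only one side can touch the buffer at a time — you get shared-memory performance without the race conditions.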

I want the worker threads to do some computation on the data (it could be a large volume) and I want it to be in parallel. If I do the processing in the farmer and then set it as distributable, does that mean that I could have my worker threads call the interface and the computation would happen in parallel in each worker, or is it happening sequentially in the order that the interface is being called?
Distributable means that the compiler can potentially inline the code, rather than calling across to a different core. However, things are still synchronised. I'm not 100% sure I get what you mean in this case, actually... Are you trying to achieve a fork to parallel tasks with barrier synchronisation? If so, you could just have a local par{} and pass/return values to the spawned tasks, i.e. dynamically create and destroy the workers on each iteration.
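A fork/join with barrier synchronisation is then just a local par{}. Since the compiler won't let parallel tasks alias the same array, this sketch (all names and sizes are made up) uses separate per-worker arrays and merges afterwards, along the lines of the small-arrays scheme you describe:

```xc
#define N 256

// hypothetical worker: reads its input slice, writes its own output array
void worker(const int in[n], unsigned n, int out[n]);

void farm(int big_out[N]) {
  int in0[N/2], in1[N/2];
  int out0[N/2], out1[N/2];
  // ... fill in0/in1 from the big input array ...
  par {
    worker(in0, N/2, out0);
    worker(in1, N/2, out1);
  }
  // implicit barrier: both workers have joined here, so the small
  // arrays can be merged back into big_out without any races
  for (unsigned k = 0; k < N/2; k++) {
    big_out[k]       = out0[k];
    big_out[k + N/2] = out1[k];
  }
}
```

The end of the par{} is the barrier — both workers are created when it is entered and destroyed when it completes, on every call to farm().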