non-blocking communication between tasks

johnswenson1
Member++
Posts: 16
Joined: Sun Jul 14, 2013 5:14 am

non-blocking communication between tasks

Post by johnswenson1 »

I'm putting together a lighting control system with multiple boards talking to each other. I have all the boards communicating properly, but there is one piece I have no idea how to implement.

In the "hub" box I have two tasks (running on the same tile), one outputting DMX data and one talking to the "remote" boxes. The DMX task is continuously outputting all the DMX data, it is not "message" based, all the data is continuously sent out, at least 44 times a second. It just reads the data for each light from an array and just loops through all the data.

The other task deals with button pushes, timers, etc.; it figures out when something has changed and needs to update the data in the array above.

I cannot figure out how to do this. The system will not let you pass the same array to two tasks, even when only one writes and the other reads. I can't use things like interfaces to send the data because they are blocking: the DMX output task would have to block waiting for a change, but it can't do that, since it has to keep sending out the data continuously.

I looked at notifications, but you still need a blocking select to read them; they are only non-blocking on the sending side, not the receiving side.

If I can't share data and I can't have non-blocking communication, I don't know how I can do this.

Polling data between tasks would work; the DMX loop is fast enough to check for an update each time through its loop, but again it doesn't look like any such mechanism exists.

I can't have either of the tasks blocking waiting for the other.

Does anybody have any clue how I can get either inter-task shared memory (one task writing, the other reading) or non-blocking communication between tasks?

Thanks,

John S.


sethu_jangala
XCore Expert
Posts: 589
Joined: Wed Feb 29, 2012 10:03 am

Post by sethu_jangala »

You can use dual buffers, one for writing and one for reading, and swap them between passes. Something like the below:

Code:

// Ping-pong buffering: one task reads from one buffer while the other
// task fills the second buffer, then the two swap roles on the next pass.
while (1) {
  par
  {
    read_data(buf1);   // e.g. the DMX task sends out buf1
    write_data(buf2);  // the control task updates buf2
  }
  par
  {
    read_data(buf2);
    write_data(buf1);
  }
}
segher
XCore Expert
Posts: 844
Joined: Sun Jul 11, 2010 1:31 am

Post by segher »

On the controllers, send to a channel. On the hub, read
from the channels, in a "select" loop. This can be your
main loop.

Reads can be non-blocking (that's what events are all about);
writes can never be non-blocking (unless you don't mind
losing data; anything not yet received has to be stored
_somewhere_!)
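
A minimal sketch of that suggestion, assuming two remotes and hypothetical names (hub_control, c_remote0, c_remote1, and a single unsigned command word) that are not from the original project:

Code:

// The hub's control task: its main loop is simply a select over the
// channels coming from the remote boxes (one case per remote), so it
// only reads when a remote has actually sent something.
void hub_control(chanend c_remote0, chanend c_remote1)
{
  while (1) {
    select {
      case c_remote0 :> unsigned cmd:
        // ... handle the command from remote 0, e.g. update the DMX data ...
        break;
      case c_remote1 :> unsigned cmd:
        // ... handle the command from remote 1 ...
        break;
    }
  }
}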
richard
Respected Member
Posts: 318
Joined: Tue Dec 15, 2009 12:46 am

Post by richard »

One way to deal with this situation is to introduce a task to manage the data. You would define two interfaces - one that lets you write some data and another that lets you read it. The task managing the data would take two server interfaces as arguments and it would contain a loop that selects on receiving either a read request or a write request which it then services.

You should be able to mark the data manager task as distributable (see here) in which case it won't use up any additional cores on the device.

At the machine code level this will compile down to the following: when the writing task tries to write data, it will acquire a lock, write the data and then release the lock. When the reading task wants to read data, it will acquire a lock, read the data and then release the lock.
johnswenson1
Member++
Posts: 16
Joined: Sun Jul 14, 2013 5:14 am

Post by johnswenson1 »

segher wrote:On the controllers, send to a channel. On the hub, read
from the channels, in a "select" loop. This can be your
main loop.

Reads can be non-blocking (that's what events are all about);
writes can never be non-blocking (unless you don't mind
losing data; anything not yet received has to be stored
_somewhere_!)
Aha! That is the solution. The select IS the main loop, with the main body of the work in the "default" case of the select. Each time through, it checks whether anything is waiting on the interface; if so it handles it, and if not the main body runs.

Since the main body runs much faster than someone can actually push buttons, there should never be any loss of data.
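
In code, that loop might look roughly like the sketch below; the names (update_if, dmx_update, send_dmx_frame, NUM_CHANNELS) are illustrative placeholders rather than the actual project code:

Code:

#include <stddef.h>

#define NUM_CHANNELS 512   // one full DMX universe (an assumption)

interface update_if {
  void dmx_update(size_t index, unsigned value);
};

// Placeholder for the existing routine that clocks a frame out of the DMX port.
void send_dmx_frame(unsigned levels[NUM_CHANNELS]);

void dmx_task(server interface update_if ctrl)
{
  unsigned levels[NUM_CHANNELS] = {0};
  while (1) {
    select {
      case ctrl.dmx_update(size_t index, unsigned value):
        levels[index] = value;     // an update is waiting: apply it
        break;
      default:
        send_dmx_frame(levels);    // nothing waiting: run the main body
        break;
    }
  }
}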

Thanks,

John S.
johnswenson1
Member++
Posts: 16
Joined: Sun Jul 14, 2013 5:14 am

Post by johnswenson1 »

I tried the default in the select and it works perfectly. The whole system is now working exactly the way I wanted it to.

Thanks everyone for the responses. This is one of the best places to get help.

John S.
davelacey
Experienced Member
Posts: 104
Joined: Fri Dec 11, 2009 8:29 pm

Post by davelacey »

Following on from Richard's reply, I've written some code below that shows how to share memory between two tasks. It uses an intermediate task to manage the memory between the two, which makes the sharing explicit (along with the atomicity of access, etc.).

Since the manager task is distributable, if everything is on the same tile it will compile down to shared memory access with the use of locks to ensure consistency. So the following program will only use 2 logical cores.

Of course, the interface to the intermediate task need not be a straight memory-access interface - it could be something more application specific (a small sketch of such a variant follows the code below).

Dave

Code:

#include <stddef.h>
#include <print.h>
#include <timer.h>

interface mem_if {
  unsigned get_element(size_t index);
  void write_element(size_t index, unsigned value);
};

// Owns the shared array and services the read/write requests from its
// clients. Because it is [[distributable]] and all tasks are on the same
// tile, this compiles down to lock-protected shared memory accesses.
[[distributable]]
void memory_manager(server interface mem_if i[n], size_t n,
                    static const size_t num_elements)
{
  unsigned data[num_elements];
  while (1) {
    select {
    case i[int j].get_element(size_t index) -> unsigned result:
      result = data[index];
      break;
    case i[int j].write_element(size_t index, unsigned value):
      data[index] = value;
      break;
    }
  }
}

// Reader: repeatedly fetches element 0 and prints it.
void task1(client interface mem_if mem)
{
  while (1) {
    printuintln(mem.get_element(0));
  }
}

// Writer: periodically stores an incrementing value into element 0.
void task2(client interface mem_if mem)
{
  unsigned x = 0;
  while (1) {
    delay_ticks(1000);
    mem.write_element(0, x);
    x++;
  }
}

int main() {
  interface mem_if i[2];
  par {
    memory_manager(i, 2, 5);  // manages 5 elements for 2 clients
    task1(i[0]);
    task2(i[1]);
  }
  return 0;
}
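
As the small illustration promised above (the names here are hypothetical, not from Dave's code), the manager's interface could expose lighting operations directly instead of raw array indices:

Code:

// Hypothetical application-specific alternative to mem_if: the same
// distributable manager pattern, but with lighting-level operations.
interface light_ctrl_if {
  void set_level(unsigned light, unsigned char level);  // control task sets a light
  unsigned char get_level(unsigned light);              // DMX task reads a light
};
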
johnswenson1
Member++
Posts: 16
Joined: Sun Jul 14, 2013 5:14 am

Post by johnswenson1 »

Thanks Dave,
that makes a lot of sense. The memory manager owns the memory and services all write and read requests - nice and elegant. The other tasks only block for the time it takes the manager to actually write or read the memory.

John S.