The XS1 documentation states: A destination channel end can be shared by any number of outputting threads;
they are served in a round-robin manner. Once a connection has been established it
will persist until an END is received; any other thread attempting to establish a
connection will be queued. In the case of a shared channel end, the outputting thread
will usually transmit the identifier of its channel end so that the inputting thread
can use it to reply.
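The behaviour described above can be sketched with a small host-side model (plain Python, not XMOS APIs; the token names and FIFO queue discipline are assumptions for illustration only):

```python
from collections import deque

END = "END"  # stands in for the CT_END control token that closes a connection

def serve_shared_chanend(streams):
    """Model a shared destination channel end.

    `streams` is a list of token streams, one per outputting thread.
    Each stream starts with the sender's own channel end id (so the
    server can reply to it) and ends with END. A connection, once
    established, persists until END; other senders queue behind it.
    Returns the ids the server replied to, in service order.
    """
    pending = deque(streams)            # senders queued in arrival order
    replies = []
    while pending:
        stream = pending.popleft()      # connection established
        sender_id = stream[0]           # id transmitted by the sender
        assert stream[-1] == END        # connection held until END
        replies.append(sender_id)       # server replies via that id
    return replies
```

With three queued senders, the server serves and replies to them one at a time, in the order they arrived.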
The documentation also notes that when sending long messages to a shared channel, the sender should send a short request and then wait for a reply before proceeding, as this minimises interconnect congestion caused by delays in accepting the message.
Can the following two properties be satisfied at the same time?
1) Multiple client channels on multiple tiles send to one shared server channel. Each client knows the server's channel id ahead of time, but must send its own channel id to the server for each stream.
2) Each client sends a short request and then waits for a reply before proceeding. The link must be free while the clients are waiting for a reply.
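Property 2's handshake can be sketched as a sequence of link transactions (a hypothetical Python model; PAUSE/END mirror the XS1 control tokens, everything else is invented for illustration):

```python
END, PAUSE = "END", "PAUSE"

def handshake_trace(client_id, server_id, payload_len):
    """Transaction sequence for the short-request protocol: the client
    sends only its channel id, frees the route with PAUSE, and sends the
    long payload only after the server's END reply, so the link carries
    no open route while the client waits."""
    return [
        ("request", client_id, server_id),    # short request: client's own id
        ("ct", PAUSE, client_id),             # route closed; link now free
        ("reply", END, server_id, client_id), # server ready: END to client
        ("payload", client_id, server_id, payload_len),
        ("ct", END, client_id),               # stream finished
    ]
```

Between the PAUSE and the server's END reply the model emits no link transaction at all, which is exactly the "link free while waiting" requirement.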
Consider the following topology: Two XS1-L1s (an L2) with all but one link disabled. There
is one bidirectional link between tile[0] and tile[1]. The reason for disabling the other
three links is to make deadlocks easier to spot.
Code: Select all
tile[0] <-> tile[1]
(Other threads visible in the traces are not related to channel operations.)
client_0 is waiting on a reply from server_0, which has not yet begun execution.
server_0's channel id is 0x40000102; it was allocated with getr before any threads started executing. client_0 sends its channel id as three data tokens followed by a PAUSE control token, then waits for an END from server_0.
client_1 sends a request to server_1, but the reply from server_1 is never delivered, so client_1 blocks forever in chkct waiting for an END.
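The blocking can be reproduced with a toy model of the single link (assumed small per-destination buffers and strict FIFO delivery; the real switch uses wormhole routing with credits, so this is only an illustration of head-of-line blocking):

```python
from collections import deque

def deliver(link_fifo, buffer_capacity, serviced):
    """Model head-of-line blocking on the single tile[0] <-> tile[1] link.

    `link_fifo` holds (dest, token) pairs in transmission order. Each
    destination channel end buffers at most `buffer_capacity` tokens;
    only destinations in `serviced` are drained by a running thread.
    Delivery stops as soon as the token at the head of the link cannot
    be buffered. Returns the tokens still stuck on the link."""
    buffers = {}
    fifo = deque(link_fifo)
    while fifo:
        dest, token = fifo[0]
        if dest in serviced:                # a running thread drains this end
            fifo.popleft()
            continue
        buf = buffers.setdefault(dest, [])
        if len(buf) < buffer_capacity:      # room in the destination buffer
            buf.append(token)
            fifo.popleft()
        else:                               # head of line blocked forever
            break
    return list(fifo)
```

Tokens destined for an unserviced channel end (server_0's, in the trace below) fill its buffer and then pin the head of the link, so a later END for a different, perfectly healthy destination (client_1's) is never delivered.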
Code: Select all
tile[0]@1- -p-A-p-p-.----.00010202 (client_0 + 10) : getr r5(0x2), 0x2 @16346
tile[0]@2- -p-p-A-p-.----..00010202 (client_0 + 10) : getr r5(0x1102), 0x2 @16347
tile[0]@3- -p-p-p-A-.----...00010202 (client_0 + 10) : getr r5(0x1502), 0x2 @16348
tile[0]@1- -p-A-p-p-.----.00010220 (client_0 + 2e) : setd res[r5(0x2)], r0(0x40000102) @16402
tile[0]@2- -p-p-A-p-.----..00010220 (client_0 + 2e) : setd res[r5(0x1102)], r0(0x40000102) @16403
tile[0]@3- -p-p-p-A-.----...00010220 (client_0 + 2e) : setd res[r5(0x1502)], r0(0x40000102) @16404
tile[0]@1- -p-A-p-p-.----.00010222 (client_0 + 30) : outt res[r5(0x2)], r8(0x0) @16406
tile[0]@2- -p-p-A-p-.----..00010222 (client_0 + 30) : outt res[r5(0x1102)], r8(0x11) @16407
tile[0]@3- -p-p-p-A-.----...00010222 (client_0 + 30) : outt res[r5(0x1502)], r8(0x15) @16408
tile[0]@1- -p-A-p-p-.----.00010224 (client_0 + 32) : outt res[r5(0x2)], r7(0x0) @16410
tile[0]@2- -p-p-A-p-.----..00010224 (client_0 + 32) : outt res[r5(0x1102)], r7(0x0) @16411
tile[0]@3- -p-p-p-A-.----...00010224 (client_0 + 32) : outt res[r5(0x1502)], r7(0x0) @16412
tile[0]@1- -p-A-p-p-.----.00010226 (client_0 + 34) : outt res[r5(0x2)], r6(0x0) @16414
tile[0]@2- -p-p-A-p-.----..00010226 (client_0 + 34) : outt res[r5(0x1102)], r6(0x0) @16415
tile[0]@3- -p-p-p-A-.----...00010226 (client_0 + 34) : outt res[r5(0x1502)], r6(0x0) @16416
tile[0]@1- -p-A-p-p-.----.00010228 (client_0 + 36) : outct res[r5(0x2)], 0x2 @16418
tile[0]@2- -p-p-A-p-.----..00010228 (client_0 + 36) : outct res[r5(0x1102)], 0x2 @16419
tile[0]@3- -p-p-p-A-.----...00010228 (client_0 + 36) : outct res[r5(0x1502)], 0x2 @16420
tile[0]@1-P-p-A-p-p-.----.0001022a (client_0 + 38) : chkct res[r5(0x2)], 0x1 @16422
tile[0]@2-P-p-w-A-p-.----..0001022a (client_0 + 38) : chkct res[r5(0x1102)], 0x1 @16423
tile[0]@3-P-p-w-w-A-.----...0001022a (client_0 + 38) : chkct res[r5(0x1502)], 0x1 @16424
tile[0]@0- -A-w-w-w-.----00010468 (server_1 + d2) : getr r0(0x103), 0x3 @30365
tile[0]@0-P-A-w-w-w-p-.----00010532 (server_1+ 50) : testct r0(0x0), res[r5(0x402)] @30569
tile[1]@0- -A-.----00010156 (client_1 + e) : getr r4(0x40000002), 0x2 @78804
tile[1]@0- -A-.----00010162 (client_1 + 1a) : setd res[r4(0x40000002)], r0(0x402) @78820
tile[1]@0- -A-.----00010168 (client_1 + 20) : outt res[r4(0x40000002)], r0(0x0) @78832
tile[1]@0- -A-.----0001016e (client_1 + 26) : outt res[r4(0x40000002)], r11(0x0) @78844
tile[1]@0- -A-.----00010172 (client_1 + 2a) : outt res[r4(0x40000002)], r6(0x40) @78852
tile[1]@0- -A-.----00010174 (client_1 + 2c) : outct res[r4(0x40000002)], 0x2 @78856
tile[1]@0-P-A-.----00010176 (client_1 + 2e) : chkct res[r4(0x40000002)], 0x1 @78860
tile[0]@0- -A-w-w-w-we.----00010532 (server_1+ 50) : testct r0(0x0), res[r5(0x402)] @78861
tile[0]@0- -A-w-w-w-we.----00010536 (server_1+ 54) : int r0(0x0), res[r5(0x402)] @78869
tile[0]@0- -A-w-w-w-we.----0001053a (server_1+ 58) : int r1(0x0), res[r5(0x402)] @78877
tile[0]@0- -A-w-w-w-we.----00010540 (server_1+ 5e) : int r1(0x40), res[r5(0x402)] @78889
tile[0]@0- -A-w-w-w-we.----00010548 (server_1+ 66) : setd res[r5(0x402)], r0(0x40000002) @78905
tile[0]@0- -A-w-w-w-we.----0001054a (server_1+ 68) : outct res[r5(0x402)], 0x1 @78909
tile[0]@0-P-A-w-w-w-we.----0001054c (server_1+ 6a) : testct r0(0x0), res[r5(0x402)] @78913
To summarise: I want each client to set its channel end's destination with setd to that of a pre-defined server channel id, and have the server reply using the channel id that the client sent with its request. I also want the client to make sure that the server is ready to receive the request before proceeding, so as not to deadlock threads sharing the single link.
I can use a lock so that only one request goes out of each tile at a time, but this does not work when there are multiple clients on multiple tiles making requests to one shared server channel, because a lock resource cannot be used across tiles.
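The cross-tile failure shows up even in a trivial model: a lock serialises only the clients on its own tile, so with N tiles up to N requests can still be in flight at once (hypothetical Python, nothing XS1-specific):

```python
def simultaneous_requests(clients_by_tile):
    """Each tile's local lock admits one of that tile's clients at a
    time, but locks on different tiles are independent, so the worst
    case is one in-flight request per tile, not one chip-wide."""
    return [(tile, clients[0])          # current lock holder on each tile
            for tile, clients in clients_by_tile.items() if clients]
```

With clients on two tiles, two requests race toward the shared server channel end at the same moment, which is precisely the situation the lock was supposed to prevent.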