Treczoks wrote:The optimisation advantages are obvious - not having "volatile" stuff makes register dispatch easier.
What does this mean? We weren't talking about volatile, and I have
no idea what you mean by "register dispatch".
But what do you mean with "prove"?
I meant both: assertions the compiler needs in order to safely do
certain transforms, and facts it wants to derive for the outside
world ("this code always runs in at least 16 and at most 20 cycles").
If you are talking about timing analysis, I'd say that memory accesses are way easier to time than channel I/O. You never know if the channel blocks for some reason, and you're forced to set an execution time for every access that might work, without a guarantee that this really comes to pass.
Channel I/O is very predictable, too, as long as you know what code
is running at the other end of the channel (and you do). Channels do
not magically block; they do not have a temper.
segher wrote:It isn't very often something that gets in your way, because it is *faster*
to pass data via a channel than it is to pass it through memory.
If you reduce it to the single instruction of reading from a memory cell vs. reading from a channel, then you're right. If you have to build an infrastructure (A sends to B: "give me the data", B sends the data back), I doubt it. Besides, it adds load to the writing process needlessly.
Doubt what you want; how about actually trying it out? Complaining
about things unfamiliar to you does not make you more familiar with
them, and neither does it change how well things perform.
Memory is good for data you want to put aside for a while. Channels
are good for data that you want to keep flowing. You usually want to
keep things flowing, if at all possible.
I do understand the reasons for and implications of keeping the threads independent. Nonetheless, the lack of shared memory leaves a strong taste of incompleteness.
But you _can_ easily use "raw" memory; just don't use XC for that!