Two options come to mind:
- Split the audio processing blocks into parallel tasks running on multiple cores, for example one core per channel of a multi-channel EQ (see the first sketch after this list).
- Process an audio block on one core and, when it is ready, pass the output to the next core for the next processing stage, and so on (a pipeline; sketched further below).
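To make option one concrete, here is a rough XC sketch of the kind of structure I have in mind (just a sketch, not code from my project; `eq_channel`, `audio_io`, `NUM_CH` and `BLOCK` are made-up names and sizes, and tile placement is omitted):

```xc
#include <xs1.h>

#define NUM_CH 4   // hypothetical channel count
#define BLOCK  32  // hypothetical samples per block

// One worker per core: receive a block, EQ it, send it back.
void eq_channel(chanend c) {
    while (1) {
        int buf[BLOCK];
        for (int i = 0; i < BLOCK; i++) c :> buf[i];
        // ... per-channel EQ would go here ...
        for (int i = 0; i < BLOCK; i++) c <: buf[i];
    }
}

// I/O task: fan blocks out to all workers, then collect the results.
void audio_io(chanend worker[NUM_CH]) {
    int in[NUM_CH][BLOCK], out[NUM_CH][BLOCK];
    while (1) {
        // ... fill in[][] from the audio interface (stub) ...
        for (int ch = 0; ch < NUM_CH; ch++)
            for (int i = 0; i < BLOCK; i++) worker[ch] <: in[ch][i];
        for (int ch = 0; ch < NUM_CH; ch++)
            for (int i = 0; i < BLOCK; i++) worker[ch] :> out[ch][i];
        // ... write out[][] to the audio interface (stub) ...
    }
}

int main() {
    chan c[NUM_CH];
    par {
        audio_io(c);
        eq_channel(c[0]);
        eq_channel(c[1]);
        eq_channel(c[2]);
        eq_channel(c[3]);
    }
    return 0;
}
```

The appeal here is that the workers run concurrently, so the per-block deadline is set by the slowest single channel rather than the sum of all channels.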
From what I understand, option two is better aligned with the XMOS design principles, but it means I would need to restructure my codebase. I would also introduce audio latency for every additional core I add to the chain.
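For comparison, here is an equally rough XC sketch of option two, one processing stage per core connected by channels (again, `stage` and the stub I/O tasks are placeholders for my real blocks, not working project code):

```xc
#include <xs1.h>

#define BLOCK 32  // hypothetical samples per block

// Stub source/sink standing in for the real audio interface.
void audio_in(chanend c_out) {
    while (1) {
        for (int i = 0; i < BLOCK; i++) c_out <: 0;  // silence instead of ADC samples
    }
}
void audio_out(chanend c_in) {
    int s;
    while (1) {
        for (int i = 0; i < BLOCK; i++) c_in :> s;   // discard instead of driving the DAC
    }
}

// One pipeline stage per core: block in, process, block out.
void stage(chanend c_in, chanend c_out) {
    while (1) {
        int buf[BLOCK];
        for (int i = 0; i < BLOCK; i++) c_in :> buf[i];
        // ... this stage's processing block would go here ...
        for (int i = 0; i < BLOCK; i++) c_out <: buf[i];
    }
}

int main() {
    chan a, b, c;
    par {
        audio_in(a);
        stage(a, b);   // e.g. EQ
        stage(b, c);   // e.g. another processing block
        audio_out(c);
    }
    return 0;
}
```

Each stage has to finish a whole block before passing it on, which is where the extra block of latency per added core comes from.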
Are both options possible to implement? Which one would you recommend?