After several attempts to use a Raspberry Pi Pico with its own SDK, and then a PJRC Teensy 4.1 with the Arduino framework (both failed at the USB Audio stage, for different reasons, and the Pi Pico didn't have enough "juice" anyway), I ended up with an XK-AUDIO-316-MC-AB demo board, as shown here:
https://www.xmos.com/develop/usb-multichannel-audio/
I downloaded all of the resources I could find for it, and found a wonderful getting-started tutorial in 2. INSTALL TOOLS -> Linux 64-bit 15.2.1 -> XMOS/XTC/15.2.1/doc/html/index.html. That walked me through some really simple programs that prove everything works in the first place (it does) and show how multithreaded communication works. From that, it appears XMOS has done an amazing job of making a non-standard architecture look and feel as close to standard as possible while keeping the special features that necessarily make it non-standard, and a wonderful job of documenting all of it so that it's easy to understand.
Great! Now to look at using it for audio...and that's where things fall apart for me.
There's an app note - AN01008: Extending USB Audio with Digital Signal Processing - linked from the page above, that makes it seem like there's a complete, working, configurable project that connects everything on the dev board to a many-channel USB sound card, plus a "put your code here" section that can be left alone for no processing at all, or filled up with DSP work. Again, great! But all I can find is the USB Audio Software project and the corresponding USB Audio User Guide, also linked from the same page. That is much harder to understand than the Getting Started examples and explanation! Page 2 of AN01008 has a link to http://github.com/xmos/sw_usb_audio.git, which appears to be that same project again, and it still doesn't match what I understand from the rest of the app note.
The USB Audio User Guide says on page 40 that I may have to disable some physical I/O to free up a core for DSP, which is fine because I'm only interested in the analog converters anyway. But that's not quite what AN01008 seems to say: an entirely single-threaded (Figure 2 on page 6) and entirely functional starting project, with the "your code here" section for DSP being an overridden void UserBufferManagement(unsigned[], unsigned[]);. Being entirely single-threaded and functional to start with would of course leave all of the other cores and tiles free for my own functionality, which would be awesome!
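If I've read it right, the override itself would look something like the sketch below. This is not code from the app note; the parameter names and the NUM_USB_CHAN_OUT loop bound are my assumptions from the declaration above, and the body is just a placeholder to show where processing would go:

    /* Sketch of my understanding of the "put your code here" hook.
       Leave the body empty for no processing at all; as a placeholder,
       the simplest possible DSP: a ~6 dB cut on everything heading to
       the DACs. */
    void UserBufferManagement(unsigned sampsFromUsbToAudio[],
                              unsigned sampsFromAudioToUsb[])
    {
        for (int ch = 0; ch < NUM_USB_CHAN_OUT; ch++) {
            sampsFromUsbToAudio[ch] = (unsigned)(((int)sampsFromUsbToAudio[ch]) >> 1);
        }
    }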
Did I misunderstand something?
Getting started with Audio: USB <-> DSP <-> many-channel analog
- New User
- Posts: 2
- Joined: Wed Sep 11, 2024 1:14 pm
Verified
- XCore Legend
- Posts: 1070
- Joined: Thu Dec 10, 2009 9:20 pm
- Location: Bristol, UK
that makes it seem like there's a complete, working, configurable project that connects everything on the dev board to a many-channel USB sound card, plus a "put your code here" section that can be left alone for no processing at all
That sounds about right :)
The USB Audio project uses many threads (configurable based on what features you enable). In the xcore world "everything" is software - so, for example, the S/PDIF transmit interface is implemented in software running in a (hardware) thread. So, by disabling hardware interfaces you are essentially freeing up threads and MIPS for more DSP processing.
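For illustration, that feature selection lives in the application's configuration header (xua_conf.h, or customdefines.h in older releases). Macro names have shifted a little between versions, so take the following as a sketch and check the header in your own build rather than copying it verbatim:

    /* Illustrative only - names and defaults vary between lib_xua releases */
    #define XUA_SPDIF_TX_EN   0   /* S/PDIF Tx disabled: its thread is freed */
    #define XUA_ADAT_TX_EN    0   /* likewise ADAT...                        */
    #define XUA_ADAT_RX_EN    0
    #define MIDI              0   /* ...and MIDI                             */
    #define NUM_USB_CHAN_OUT  8   /* keep just the analogue channels         */
    #define NUM_USB_CHAN_IN   8
    #define I2S_CHANS_DAC     8
    #define I2S_CHANS_ADC     8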
You can put some (very) simple DSP in the UserBufferManagement(unsigned[], unsigned[]); but most projects offload samples to other processing threads at this point - typically buffering before performing some sort of block-based processing.
It's becoming increasingly popular to integrate DSP into the design, so we are considering adding this blocking into the design proper.
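As a rough, untested sketch of that hand-off (plain C, assuming lib_xcore's channel calls, a chanend that your start-up code has already wired through to the DSP thread, and NUM_USB_CHAN_OUT from the build configuration; the parameter names are illustrative):

    #include <stdint.h>
    #include <xcore/channel.h>   /* chan_out_word() / chan_in_word() */

    extern chanend_t c_dsp;      /* other end is held by the DSP thread */

    void UserBufferManagement(unsigned sampsFromUsbToAudio[],
                              unsigned sampsFromAudioToUsb[])
    {
        /* Push this period's USB->DAC samples to the DSP thread... */
        for (int ch = 0; ch < NUM_USB_CHAN_OUT; ch++)
            chan_out_word(c_dsp, sampsFromUsbToAudio[ch]);

        /* ...and collect the processed samples before returning. */
        for (int ch = 0; ch < NUM_USB_CHAN_OUT; ch++)
            sampsFromUsbToAudio[ch] = chan_in_word(c_dsp);
    }

    /* The DSP thread mirrors the transfers: whole frame in, process,
       whole frame out.  Per-sample maths here for brevity; most designs
       accumulate a block first and process that instead. */
    void dsp_thread(chanend_t c)
    {
        int32_t frame[NUM_USB_CHAN_OUT];
        while (1) {
            for (int ch = 0; ch < NUM_USB_CHAN_OUT; ch++)
                frame[ch] = (int32_t)chan_in_word(c);
            /* ...per-channel processing on frame[] goes here... */
            for (int ch = 0; ch < NUM_USB_CHAN_OUT; ch++)
                chan_out_word(c, (uint32_t)frame[ch]);
        }
    }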
You might like to look at Integrating Audio Weaver (AWE) Core into USB Audio for an example of how we integrated some substantial DSP (and control of said DSP) into the design.
Hope that's helpful
Technical Director @ XMOS. Opinions expressed are my own
Verified
- XCore Legend
- Posts: 1070
- Joined: Thu Dec 10, 2009 9:20 pm
- Location: Bristol, UK
From AN01008:
As XCORE is a concurrent multi-threaded multi-core processor, there are other threads and cores available for DSP. It depends on the precise configuration of the USB stack (whether you use special interfaces such as S/PDIF, ADAT, MIDI) but in a simple case with just I2S, USB Audio uses around 30% of the compute, with one tile being completely empty.
This could be made more clear I guess! By "compute" it means threads.
Technical Director @ XMOS. Opinions expressed are my own
- New User
- Posts: 2
- Joined: Wed Sep 11, 2024 1:14 pm
Ross wrote: ↑Wed Sep 18, 2024 6:36 pm The USB Audio project uses many threads (configurable based on what features you enable). In the xcore world "everything" is software - so, for example, the S/PDIF transmit interface is implemented in software running in a (hardware) thread. So, by disabling hardware interfaces you are essentially freeing up threads and MIPs for more DSP processing.
So Figure 2 on page 6 of AN01008 is *not* the entire project, but only the USB part? And every I/O format is its own (hardware) thread that runs in parallel with that, and is never shown or mentioned at all? That could have been clearer too.
Ross wrote: ↑Wed Sep 18, 2024 6:45 pm From AN01008:
As XCORE is a concurrent multi-threaded multi-core processor, there are other threads and cores available for DSP. It depends on the precise configuration of the USB stack (whether you use special interfaces such as S/PDIF, ADAT, MIDI) but in a simple case with just I2S, USB Audio uses around 30% of the compute, with one tile being completely empty.
This could be made more clear I guess! By "compute" it means threads.
I was indeed planning to offload the DSP work to other threads. Keep the USB thread as-is, and keep the I2S/TDM thread(s) and related chip management. Delete everything else. Eventually, I'm going to spin my own board that only has that, once I figure out what I'm doing. :-)
Reserve one thread for communication with an external "system coordinator" (DSP coefficients, level meters, etc.), and the rest are all available for DSP.
A big part of why I ended up with XMOS at all - instead of, say, an Analog Devices dedicated DSP chip with their graphical signal-flow IDE - was the specific *absence* of buffering or block processing. I have a project in mind that essentially amounts to a NON-isolating hearing aid (among lots of other things while I'm at it), and I don't want to introduce an acoustic comb filter if I can avoid it. So, I'm looking at a higher-than-usual sample rate, just so the converters can use a less-aggressive filter with fewer samples of latency *in addition to* less time between samples, and I want the entire DSP chain to only have one sample in it at a time, per channel. One sample all the way through, then the next sample all the way through, etc.
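Just to make "one sample all the way through" concrete, the sort of building block I have in mind is nothing more exotic than this (plain C, Q1.30 coefficients, no saturation or headroom handling, purely a sketch):

    #include <stdint.h>

    /* One biquad section, direct form II transposed, processed strictly
       one sample in, one sample out, per call. */
    typedef struct {
        int32_t b0, b1, b2, a1, a2;   /* Q1.30 coefficients */
        int64_t s1, s2;               /* filter state       */
    } biquad_t;

    static inline int32_t biquad_step(biquad_t *f, int32_t x)
    {
        int32_t y = (int32_t)(((int64_t)f->b0 * x + f->s1) >> 30);
        f->s1 = (int64_t)f->b1 * x - (int64_t)f->a1 * y + f->s2;
        f->s2 = (int64_t)f->b2 * x - (int64_t)f->a2 * y;
        return y;
    }

Chain as many of those per channel as the thread budget allows, and the whole path never holds more than the one sample per channel that's currently in flight.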
Yes, it's less efficient that way, so I'll have to see how much I can do, but that's the idea.
I'm almost resigned to writing my own DSP functions as well, which will likely reduce the efficiency even more, since every library I've seen so far has been either restrictively licensed or hard-coded for block processing. If you know of one that satisfies those requirements (or if XMOS actually *has* one that I haven't seen yet), I'd be interested in that too.
Thanks!