Corrupted Microphone Data [Solved]

saul01
Member
Posts: 14
Joined: Thu Feb 16, 2017 12:57 am


Postby saul01 » Sat Mar 11, 2017 2:41 am

Hi,
I have modified the HiResDelay and Sum example to use only 4 microphones. The decimator rate is 48 kHz.
I verified that the data rate of the microphone task is correct by replacing the mic data with a sine wave of 100 samples per cycle. I collect 16K points of data, sent in small 64-point buffers over a USB virtual COM port to a PC; the PC plots the data (with Python), and I can see 100 samples per cycle. If I use the real mic data, however, the plot looks really bad: about 20 samples go bad in every ~64. Has anybody seen something similar before?
I will post my code shortly.
Thanks.
infiniteimprobability
XCore Expert
Posts: 765
Joined: Thu May 27, 2010 10:08 am

Postby infiniteimprobability » Mon Mar 13, 2017 2:53 pm

Haven't seen anything exactly like this, but it sounds like you have a fairly heavily modified design. Is the virtual COM port implemented inside the xmos chip?
4 channels @ 32b @ 48 kHz is about 6.1 Mbit/s. Not exactly fast for a bulk endpoint (and certainly no issue for xmos internally), but bulk throughput can be affected by other bus users. However, it's odd that a sine wave makes it through OK.
What size of frame do you use in lib_mic_array? You need to make sure your receiving task is not blocking for long periods.

The mic_array_get_next_time_domain_frame() call assumes that you are ready to service it immediately. There is a check, enabled by DEBUG_MIC_ARRAY in the mic_array_conf.h file, which can verify the timing for you.
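Enabling that check is a one-line change in the application's mic_array_conf.h (fragment only; whether the library tests the macro's value or merely its presence may depend on your lib_mic_array version, so treat this as a sketch):

```c
/* mic_array_conf.h (fragment) -- leave your other defines as-is */
#define DEBUG_MIC_ARRAY 1  /* trap if the decimators are not serviced in time */
```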

Postby saul01 » Mon Mar 13, 2017 11:57 pm

1. The virtual COM port is implemented in the xmos chip, although I use it to transmit "bursts" of data. This task makes use of movable buffers.
2. I am only concerned with one microphone channel, so it is 1 channel @ 32b @ 48 kHz.
3. The lib_mic_array frame size: 2.

Maybe I have a bad architecture. You mentioned that mic_array_get_next_time_domain_frame() assumes my code is ready to service it when it returns; I think it is ready, since everything is fine when I replace the data with a sine wave.

I did more digging and noticed that the problem appears when I "stream" the data out from the i2s_handler "send" to an auxiliary task (on a different tile); everything is clean when I don't stream the data out. I don't know why this would happen; it is as if the streaming channel corrupts other core operations.

Postby saul01 » Tue Mar 14, 2017 12:22 am

OK, so I enabled the debug flag you mentioned, and right when I start all the cool stuff (aggregating data and sending it to the PC) I get "Timing not met: decimators not serviced in time".
Isn't the code running on different cores? Why is it running out of juice?

Postby infiniteimprobability » Tue Mar 14, 2017 9:32 am

saul01 wrote: I don't know why this would happen; it is as if the streaming channel corrupts other core operations.

A streaming channel gives you a buffer at least one level deep (more if it spans tiles), so it can soak up some jitter, but you still must get 2 samples across in every two sample periods.

saul01 wrote: Isn't the code running on different cores? Why is it running out of juice?

I cannot tell without seeing your architecture, but it certainly sounds like a timing error. Are you using the DSP task architecture of:

1) exchange samples (get new ones and send old processed ones)
2) do the dsp
3) loop back to 1

Postby saul01 » Tue Mar 14, 2017 9:42 pm

(1) I don't quite follow you on "you must get 2 samples in every two sample periods"
(2) Yes, my architecture employs several tasks with a while(1) loop and a select block.

The code is now working, but I think that is by pure chance of my moving the task instantiation order around. I am sure there is a "good way" of using the tools and moving data between cores and tiles... I'm not there yet.

Postby infiniteimprobability » Wed Mar 15, 2017 12:08 pm

saul01 wrote: (1) I don't quite follow you on "you must get 2 samples in every two sample periods"

If you have a system with a two-level-deep buffer, then once the buffer is full you can get away with not taking any samples for two sample periods, but after that you must take two to avoid overflow. Don't worry too much about the details: the point was that streaming channels help with jitter (thanks to the buffering), but fundamentally you need to keep up with the rate at which the data is produced.
saul01 wrote: (2) Yes, my architecture employs several tasks with a while(1) loop and a select block.

OK
saul01 wrote: The code is now working, but I think is by pure chance of me moving task instantiation order. I am sure there is a "good way" of using the tools and moving data around DSPs and Tiles... I'm not there yet.
That's good to hear. The tools can help you close timing on particular code fragments (see the XTA), and reserving logical cores as DSP engines is a good way of doing things. The key thing to know is that channel or interface communication backs up (it's all synchronised) if one step holds everything up. Depending on whether you are doing block-based or sample-by-sample DSP, it may make sense to move pointers around over channels/interfaces rather than the data itself.
