RX SPDIF: Problems retrieving sample rate [Topic is solved]

dsteinwe
XCore Addict
Posts: 144
Joined: Wed Jun 29, 2016 8:59 am

RX SPDIF: Problems retrieving sample rate

Post by dsteinwe »

I have written a simple test application to determine the sample rate of the incoming stream on the SPDIF port. I am using the xCORE-200 MC AUDIO board and feed it with an optical signal from my CD player. The rate should be around 44100Hz, but my test application calculates only about 4900Hz on average.

The implementation is simple: I use a timer that triggers every second. In the meantime I count the incoming samples. On each timer event, I output the counted samples and reset the counter.

Here is my code:

Code:

#include <xs1.h>
#include <stdio.h>
#include <platform.h>
#include "spdif.h"

on stdcore[1] : port spdif_rx_port = XS1_PORT_1O;
on stdcore[1] : clock clock_block = XS1_CLKBLK_1;

interface event {
    void triggered();
};

#define FRAME_Y 5

void spdif_rx_handler(streaming chanend c, client interface event event_intf) {
    delay_microseconds(1000);
    while (1) {
        select {
            case c :> uint32_t v:
                uint32_t index = (v & 0xF) == FRAME_Y ? 1 : 0;
                uint32_t sample = (v & ~0xF) << 4;
                // TODO: interpret the non-sample bits
                if (index) event_intf.triggered(); // count only the samples of the right channel
            break;
        }
    }
}

void timed_event_producer(client interface event event_intf) {
    timer t;
    unsigned int time;
    t :> time;
    while (1) {
        select {
            case t when timerafter(time) :> void:
                time += XS1_TIMER_MHZ * 1000 * 1000; // should be 1 second
                event_intf.triggered();
            break;
        }
    }
}

void sample_rate_calculator(server interface event sample_event_intf, server interface event timed_event_intf) {
    int count_samples = 0;
    while (1) {
        select {
            case sample_event_intf.triggered():
                count_samples++;
                break;
            case timed_event_intf.triggered():
                printf("Detected sample rate: %u\n", count_samples);
                count_samples = 0;
                break;
        }
    }
}

int main(void) {
    streaming chan spdif_rx_channel;
    interface event timed_event_intf, sample_event_intf;

    par {
        on stdcore[1]: sample_rate_calculator(sample_event_intf, timed_event_intf);
        on stdcore[1]: timed_event_producer(timed_event_intf);
        on stdcore[1]: spdif_rx_handler(spdif_rx_channel, sample_event_intf);
        on stdcore[1]: spdif_rx(spdif_rx_channel, spdif_rx_port, clock_block, 44100);
    }
    return 0;
}
BTW, I have increased the duration inside the function timed_event_producer from 1s to 10s and timed the output against an ordinary clock. Over 10s, the difference between the board and the clock is about 1.5 seconds. Is this normal?

Do you have any ideas how to solve the rate calculation problem?


peter
XCore Addict
Posts: 230
Joined: Wed Mar 10, 2010 12:46 pm

Post by peter »

Hi dsteinwe,

I don't have a board in front of me that I can use to test this, but the following seems like a much simpler way to code what you want: simply count the received samples and, every second, print how many of them are on the right channel.

Note how it is key to simply increment the time by one second each time, rather than using the value input from the timer in the select. This avoids clock drift.

Code:

#include <xs1.h>
#include <stdio.h>
#include <platform.h>
#include "spdif.h"
#include "debug_print.h"

on tile[0] : port spdif_rx_port = XS1_PORT_1O;
on tile[0] : clock clock_block = XS1_CLKBLK_1;

#define FRAME_Y 5

#define ONE_SECOND_TICKS 100000000

void spdif_rx_handler(streaming chanend c) {
  int count = 0;
  timer tmr;
  int time;
  tmr :> time;
  time += ONE_SECOND_TICKS;
  while (1) {
    select {
      case c :> uint32_t v:
        uint32_t index = (v & 0xF) == FRAME_Y ? 1 : 0;
        if (index) {
          count++; // count only the samples of the right channel
        }
        break;
      case tmr when timerafter(time) :> void:
        debug_printf("%d\n", count);
        count = 0;
        time += ONE_SECOND_TICKS;
        break; 
    }
  }
}

int main(void) {
    streaming chan spdif_rx_channel;
    par {
        on tile[0]: spdif_rx_handler(spdif_rx_channel);
        on tile[0]: spdif_rx(spdif_rx_channel, spdif_rx_port, clock_block, 44100);
    }
    return 0;
}
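
For contrast, here is a minimal sketch (not taken from either post) of the drift-prone alternative that the handler above avoids. The function name and body are purely illustrative; the only real difference is that the next deadline is taken from the timer after the event has been handled, so any time spent handling it pushes every following deadline back:

Code:

#include <xs1.h>
#include "debug_print.h"

#define ONE_SECOND_TICKS 100000000

// Drift-prone version: the next deadline is measured from "now" after the
// event has been serviced, so the handling time accumulates into every
// subsequent one-second interval.
void drifting_tick(void) {
  timer tmr;
  int time;
  tmr :> time;
  time += ONE_SECOND_TICKS;
  while (1) {
    select {
      case tmr when timerafter(time) :> void:
        debug_printf("tick\n");
        tmr :> time;              // re-reading the timer here is the problem...
        time += ONE_SECOND_TICKS; // ...each deadline now starts a little late
        break;
    }
  }
}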
I'm using Tools 14.2 (hence changing "stdcore" to "tile"), and this code uses the much lighter-weight printing routine debug_printf() from lib_logging (available on xmos.com or github/xmos).

I added the following config.xscope to my project to ensure xSCOPE printing is enabled:

Code:

<xSCOPEconfig ioMode="basic" enabled="true">
    <Probe name="Control Probe" type="CONTINUOUS" datatype="INT" units="byte" enabled="true"/>
</xSCOPEconfig>
and added the following to my Makefile:

Code:

XCC_FLAGS += -DDEBUG_PRINT_ENABLE=1
USED_MODULES += lib_logging
and run my project with:

Code:

xrun --xscope bin/my_test.xe
Hope that helps.
dsteinwe
XCore Addict
Posts: 144
Joined: Wed Jun 29, 2016 8:59 am

Post by dsteinwe »

Hello Peter,

thanks for your help! You helped me figure out what the problem was. First of all: it works perfectly now. I'm impressed that the measured frequency is at 44100Hz most of the time. Awesome!

I created a new project for your code. I had to replace "tile[0]" with "tile[1]", and then it works on my board. Business as usual when running examples on this board ;-).

You showed me that it is possible to measure the frequency exactly. After that, I wanted to understand what I had done wrong, so I copied my code over yours in the new project, and it still measured correctly. Phew! The Makefile was wrong: in my original project the target was set to "XE216-512-TQ128-I20", while in the new project it was set to "XCORE-200-EXPLORER".

I had configured "XE216-512-TQ128-I20" because the product description (https://www.xmos.com/support/boards?product=18334) says that the silicon is a "XE216-512-TQ128". That apparently was the wrong conclusion. Unfortunately, I still don't fully understand why.

BTW, I have also learned about "debug_printf" and seen an alternative, shorter implementation :-).

One more question, because I'm not so familiar with XC: is it better to write tile[0] instead of stdcore[0], and if so, why?
peter
XCore Addict
Posts: 230
Joined: Wed Mar 10, 2010 12:46 pm

Post by peter »

Hi dsteinwe,

Glad it seems to work for you now.

In terms of stdcore[] vs tile[], this is a change that was made a few years ago in Tools 12. There are still a few references to stdcore that haven't been cleaned up in documentation and examples, but they should all be replaced with tile[].
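
For illustration, a minimal before/after sketch based on the port declaration from the first post; the placement syntax is the only thing that changes, and the port and pin are simply the ones used earlier in this thread:

Code:

#include <xs1.h>
#include <platform.h>

// Old style (pre-Tools 12), as in the first post:
//   on stdcore[1] : port spdif_rx_port = XS1_PORT_1O;

// Current style -- same placement, written with tile[]:
on tile[1] : port spdif_rx_port = XS1_PORT_1O;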

Regards,

Peter
dsteinwe
XCore Addict
Posts: 144
Joined: Wed Jun 29, 2016 8:59 am

Post by dsteinwe »

Here are my final findings on what affects determining the sample rate:
  1. The build target in the Makefile must be set correctly. In my case "XCORE-200-EXPLORER" is required; otherwise the timer runs too fast (see the Makefile sketch after this list).
  2. printf() blocks the reception of samples. In my case I measured values around 41000Hz instead of 44100Hz, whereas with lib_logging, as Peter described above, I get the correct reading of 44100Hz. Missing roughly 44100 − 41000 = 3100 samples per second corresponds to about 3100/44100 ≈ 0.07s, so the once-per-second call to printf() blocks for about 70ms without lib_logging. That's a lot!
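
For reference, a minimal sketch of the relevant Makefile lines for my setup. USED_MODULES and XCC_FLAGS are taken from Peter's post above; TARGET is the usual application Makefile variable (its name does not actually appear in this thread), and the value is simply the one that worked for me:

Code:

# Build target must match the board, otherwise the timing is wrong
# (in my case the timer ran too fast with the wrong target).
TARGET = XCORE-200-EXPLORER

# Lightweight printing from lib_logging instead of printf()
USED_MODULES += lib_logging
XCC_FLAGS += -DDEBUG_PRINT_ENABLE=1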
peter
XCore Addict
Posts: 230
Joined: Wed Mar 10, 2010 12:46 pm

Post by peter »

Another aspect of using debug_printf() from lib_logging vs the standard library printf() will be the code size of the libraries pulled in. If you add

Code:

XCC_MAP_FLAGS += -report
to your Makefile then you'll be able to see the code size difference when you change from a call to printf() to a call to debug_printf().

Clearly debug_printf() does not have all the nice formatting options of printf, but for a real-time system it is probably worth the tradeoff.