ethernet_rx_server not working


Post by gerrykurz »

I have an application that uses the AVB endpoint 6.1.1 software as a code base and is therefore using module_ethernet 2.3.3.

In my application, there are three instances of ethernet_server_full, one for each of three separate ports on three separate tiles.

On one port/tile, I have modified the mii_filter.xc file to operate essentially in promiscuous mode, by setting the filter result to 1 on every packet received. The corresponding RX client calls mac_set_custom_filter with a value of 1.
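The change is roughly this (simplified; c_rx below is just a stand-in for whatever my RX client channel end is called):

Code: Select all

        // in the filter thread: accept every packet instead of
        // calling mac_custom_filter
        int filter_result = 1;
        mii_packet_set_filter_result(buf, filter_result);

        // on the client side: subscribe with a matching mask
        mac_set_custom_filter(c_rx, 1);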

I can see that packets are being received by the mii_rx_pins task and the mii_filter task, because a debug printf statement I added to the mii_filter task fires for each packet.

However, they are not being forwarded to the RX client by the ethernet_rx_server task.

Code: Select all

        for (unsigned p=0; p<NUM_ETHERNET_PORTS; ++p) {
          int buf = mii_get_my_next_buf(rxmem_lp[p], rdptr_lp[p]);
          if (buf != 0 && mii_packet_get_stage(buf) == 1) {
            rdptr_lp[p] = mii_update_my_rdptr(rxmem_lp[p], rdptr_lp[p]);
            process_received_frame(buf, link, num_link, p);
            break;
          }
        }
I set a breakpoint at the start of this code and single-stepped through the mii_get_my_next_buf routine; it always returns 0, so the server task never reaches the process_received_frame function.

Code: Select all

mii_buffer_t mii_get_my_next_buf(mii_mempool_t mempool, int rdptr0)
{
  mempool_info_t *info = (mempool_info_t *) mempool;
  int *rdptr = (int *) rdptr0;
  int *wrptr = info->wrptr;

  // the pool is a FIFO: the MII layer advances wrptr as packets are
  // committed, while each reader carries its own rdptr; if the two
  // are equal, this reader has consumed everything written so far
  if (rdptr == wrptr)
    return 0;

  // otherwise the next packet sits just past the allocation header
  return (mii_buffer_t) ((char *) rdptr + sizeof(malloc_hdr_t));
}
I have also set a breakpoint at the process_received_frame call and the code never gets there.

So even though I am getting packets, the server seems to be saying that there are no packets in the buffer.

Can someone explain how the rx_server routine works, or what the problem might be?

Basically, I want to bypass all filtering and just have one Ethernet RX client get all packets from the port. Can I change the rx_server code to do this?

Can someone tell me what this section of code in the rx server does?

Code: Select all

      // serve the next command arriving from any RX client channel
      case (int i=0;i<num_link;i++) service_link_cmd(link[i], i, cmd):
        if (cmd == ETHERNET_RX_FRAME_REQ ||
            cmd == ETHERNET_RX_TYPE_PAYLOAD_REQ ||
            cmd == ETHERNET_RX_FRAME_REQ_OFFSET2)
        {
          int rdIndex = link_status[i].rdIndex;
          int wrIndex = link_status[i].wrIndex;
          int new_rdIndex;

          if (link_status[i].wants_status_updates == 2) {
            // a link status change is pending: send a status packet first,
            // then re-notify the client if data packets are also queued
            // This currently only works for single master port implementations
            int status = ethernet_get_link_status(0);
            send_status_packet(link[i], 0, status);
            link_status[i].wants_status_updates = 1;
            if (rdIndex != wrIndex) {
              notify(link[i]);
            }
            else {
              link_status[i].notified = 0;
            }
          }
          else {
            if (rdIndex != wrIndex) {
              // this client's FIFO is non-empty: pop the next buffer
              int buf = link_status[i].fifo[rdIndex];
              new_rdIndex = rdIndex + 1;
              new_rdIndex *= (new_rdIndex != NUM_MII_RX_BUF); // wrap to 0

              // send the frame to the client over its channel
              mac_rx_send_frame1(buf, link[i], cmd);

              // free the buffer once every subscribed client has taken it
              if (get_and_dec_transmit_count(buf) == 0)
              {
                mii_free(buf);
              }

              link_status[i].rdIndex = new_rdIndex;

              // if more packets are queued, tell the client straight away
              if (new_rdIndex != wrIndex) {
                notify(link[i]);
              }
              else {
                link_status[i].notified = 0;
              }
            }
            else {
              // mac request without notification
            }
          }
        }
        break;



Post by gerrykurz »

I would really appreciate a simple explanation of how the three ethernet rx tasks interact and what the buffer memory structure is.

Can someone at XMOS take a few minutes out of their busy day and help me out with this?

It is very hard to understand this from just looking at the code.

Thanks.

Post by larry »

In a nutshell, when MII receives a packet, it writes to the packet buffer and notifies the filtering thread.

Code: Select all

                if (mii_commit(buf, dptr))
                  c_filter <: buf;
Filtering is done by passing the complete frame to mac_custom_filter and writing its return value back into the packet buffer. This is the filtering result.

Code: Select all

                            mii_packet_set_filter_result(buf, filter_result);
The Ethernet RX server picks up the result and compares it against the mask each client has set. The idea is to allow an efficient filtering function that can return a range of values; a client receives a packet whenever its mask matches the result.

Code: Select all

      match = (custom_filter_mask[i] & result);
Here is a quote from the documentation:
The user must supply a definition of the function mac_custom_filter(). This function can inspect incoming packets in any manner suitable for applications and then returns either 0, if the packet is to be dropped, or a number which the clients can then use to determine which packets they wish to receive (using the client function mac_set_custom_filter()).
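For example, a filter can tag different protocols with different bits and let each client subscribe to the ones it wants. This is only an illustrative sketch (the FILTER_* constants and the EtherType decoding are mine, not from the shipped code):

Code: Select all

#include <xclib.h>

#define FILTER_ARP 0x1
#define FILTER_IP  0x2

int mac_custom_filter(unsigned int data[])
{
  // EtherType is bytes 12-13 of the frame, i.e. the top half of
  // byte-reversed word 3
  int ethertype = byterev(data[3]) >> 16;
  if (ethertype == 0x0806) return FILTER_ARP;
  if (ethertype == 0x0800) return FILTER_IP;
  return 0; // drop everything else
}
A client that calls mac_set_custom_filter with FILTER_ARP then sees only ARP frames, while a mask of FILTER_ARP | FILTER_IP would match both, because the server only tests custom_filter_mask[i] & result.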
For debugging I recommend breakpoints at different places in the MII, filter and server threads. Sometimes you won't be able to set a breakpoint, especially where there is a lot of compiler optimisation; in that case it's useful to add an assertion, e.g. __builtin_trap().
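In your promiscuous setup, for instance, the filter result should always be 1 after your change, so a trap placed right after the result is written in the filter thread would confirm whether that path is ever skipped:

Code: Select all

              mii_packet_set_filter_result(buf, filter_result);
              if (filter_result == 0)
                __builtin_trap(); // unreachable if every packet is accepted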

Post by gerrykurz »

Thanks Larry,

I think I have it mostly figured out now, and it turns out the bug was in my code, not the Ethernet library.