32x32 Pixel RGB LED Matrix [XCore beginner here]

XCore Project reviews, ideas, videos and proposals.
lberezy
Member
Posts: 13
Joined: Sat May 17, 2014 11:51 am

Post by lberezy »

Awesome work!

I'll have to check this out tonight and have a play with it. Sorry about my delay in replying, I'm right in the middle of exams at Uni.

I take it your LED panel arrived from China? How're you finding it?


Folknology
XCore Legend
Posts: 1274
Joined: Thu Dec 10, 2009 10:20 pm

Post by Folknology »

Yes mine arrived although I haven't had much chance to play with it. Just note that my test branch has now been re-written to work from the J3/8 headers leaving the slice socket completely free, which I need for other goodies ;-)
lberezy
Member
Posts: 13
Joined: Sat May 17, 2014 11:51 am

Post by lberezy »

Yeah, there really wasn't any rhyme or reason to the pinouts I chose - mainly whatever I could find that was spaced reasonably close together on the board and looked valid according to the port/pin map in the documentation. It's great that the slice connector is freed up now.


I'm going to start working on fixing the colour mixing when my exams are finished in a week; it'd be nice to see how far the XCore can get, colour-bit-depth-wise. Still not sure why one half of the panel fades towards blue as the bit depth is increased...
lberezy
Member
Posts: 13
Joined: Sat May 17, 2014 11:51 am

Post by lberezy »

So I've finally got back into it and made some headway with this project again.

I've managed to fix a few of the bugs and have implemented higher bit depths (currently 24 bpp, 8 bits per channel) with the help of a logic analyser I have access to.

The problem I've come up against is a lack of experience with XMOS C and how to parallelise tasks in an efficient manner.

Currently, the code is structured as a display client/server model. The server exposes an interface for refreshing the display (this shouldn't actually need to be called by the client, and should happen as frequently as possible) and for drawing pixels into the framebuffer. The client, at this stage, processes graphical effects as fast as it can and only then calls the server to update the display. Because refreshDisplay(), the function that generates the driving waveforms for the LED matrix, is called in a loop only after each rendering pass completes, the complexity of the effect processing is coupled to the refresh rate (and therefore the brightness) of the LED matrix.

I would like to implement the server similar to how it currently is, but with the server calling refreshDisplay() as frequently as possible by itself. The display client should perform its rendering on a timer (60 fps or so) and should not need to call refreshDisplay() at all - this decouples the LED matrix driving process from the graphical generation process.
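A minimal XC sketch of that decoupled server, with illustrative names (display_if, refreshDisplay) rather than the actual repo's API: a select with a default case lets the server service client draw calls when they arrive, but otherwise spend all its time refreshing the panel.

```xc
#include <string.h>

interface display_if {
  void setPixel(unsigned x, unsigned y, unsigned rgb);
};

// Hypothetical waveform-generation routine, standing in for the
// project's real refreshDisplay().
void refreshDisplay(const unsigned framebuffer[32 * 32]);

void display_server(server interface display_if i) {
  unsigned framebuffer[32 * 32];
  memset(framebuffer, 0, sizeof(framebuffer));
  while (1) {
    select {
      case i.setPixel(unsigned x, unsigned y, unsigned rgb):
        framebuffer[y * 32 + x] = rgb;
        break;
      default:
        // No client call pending: drive the matrix. This runs as
        // often as possible, independent of the client's frame rate.
        refreshDisplay(framebuffer);
        break;
    }
  }
}
```

Note that a task whose select has a default case cannot be [[distributable]] or [[combinable]], so it occupies its own logical core - which is exactly the decoupling described above.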

Currently, the client/server model is implemented as a [[distributable]] task - does this mean it runs on a single core, with the client and server blocking one another when they run?
How can I most efficiently run one process for updating the display and another for generating graphical data? And how do I share memory efficiently between the two endpoints (the server only needs to read the framebuffer)?

Here's a video of the current state of the project: https://www.youtube.com/watch?v=_44Uusw5HvE
Here's the github, just updated: https://github.com/lberezy/rgb-matrix-xcore
Hagrid
Active Member
Posts: 44
Joined: Mon Jul 29, 2013 4:33 am

Post by Hagrid »

I have recently completed a similar project controlling 20 metres of WS2812 LEDs from a complex pattern generator controlled via a DMX feed.

I don't think you want the tasks running as distributable here - as you said, they will end up being combined on the same core.

You should look at the double buffering example (see the last section of the Programming Guide). The idea is to have two buffers, one owned by each task, and to swap ownership between them using movable pointers.
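A sketch of that pattern, adapted from the Programming Guide's double-buffering example (interface and task names here are illustrative): the display task owns the front buffer, the renderer owns the back buffer, and a swap call exchanges ownership with move(), so neither task ever reads a half-drawn frame.

```xc
interface bufswap {
  void swap(int * movable &x);
};

void display_task(server interface bufswap i, int * movable front) {
  while (1) {
    select {
      case i.swap(int * movable &x):
        // Exchange ownership of the two buffers.
        int * movable tmp = move(x);
        x = move(front);
        front = move(tmp);
        break;
    }
    // front now holds the most recently completed frame;
    // drive the panel from it here.
  }
}

void render_task(client interface bufswap i, int * movable back) {
  while (1) {
    // Draw the next frame into back[0..32*32-1] ...
    i.swap(back);  // hand it over, receive the other buffer
  }
}

int main(void) {
  interface bufswap i;
  int buf0[32 * 32], buf1[32 * 32];
  int * movable p0 = buf0;
  int * movable p1 = buf1;
  par {
    display_task(i, move(p0));
    render_task(i, move(p1));
  }
  return 0;
}
```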

You may be able to split the rendering work across several cores by using multiple smaller buffers and subdividing the display area.
lberezy
Member
Posts: 13
Joined: Sat May 17, 2014 11:51 am

Post by lberezy »

Oh neat, thanks for the pointers (pun intended). I'm not sure how that will work, though, with the display refresh task needing to run thousands of times a second and the graphics generation at only 60 FPS.

Would someone be able to look through my current code and maybe modify it, or rough out some structure showing how I could better distribute the problem? (XC is still pretty foreign to me, and it seems like it'd be a simple problem to solve for someone with a bit of experience, given that it's an example in the XC programming guide.)

Also, as for keeping my other drawing task running at 60 FPS or so, are there any "magical" or good ways to achieve this in XC? Can I just use a timer and check that 16.6 ms has elapsed before calling the draw routine again?
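The timer approach in the question above is the standard XC idiom. A minimal sketch, assuming the usual 100 MHz reference timer (XS1_TIMER_HZ from xs1.h) and a hypothetical draw_frame() routine:

```xc
#include <xs1.h>

// Placeholder for the actual rendering routine.
void draw_frame(void);

void render_loop(void) {
  timer tmr;
  unsigned t;
  tmr :> t;
  while (1) {
    select {
      case tmr when timerafter(t) :> void:
        draw_frame();
        // Advance from the previous deadline rather than re-reading
        // the timer, so the frame rate doesn't drift if a frame
        // occasionally runs long.
        t += XS1_TIMER_HZ / 60;  // ~16.67 ms per frame
        break;
    }
  }
}
```

The select blocks until the deadline passes, so the core can also service other events by adding more cases alongside the timer case.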
Guest

Post by Guest »

Have a look at the following example for details about the implementation:

https://github.com/xcore/sw_led_tile_controller

This code is based on G series devices and an older version of the tools.
lberezy
Member
Posts: 13
Joined: Sat May 17, 2014 11:51 am

Post by lberezy »

Oh wow, that's a huge help. Had no idea this existed.

I've been reading through the code and it's quite a jump in complexity. I'm not sure if anything in there is deprecated relative to "modern" XC, as I don't know what I don't know - is there anything in there that I shouldn't follow, or that's done in an old-fashioned way?
Guest

Post by Guest »

The XC programming concepts are the same, but have a look at the latest tools user guide, as there are some additions that make the XC language simpler and the tools easier to use.
damarco
Member
Posts: 10
Joined: Fri Feb 13, 2015 9:53 pm

Post by damarco »

Your code is unfortunately too slow; you need to make more use of the hardware's capabilities, in particular buffered ports. The clock can be generated with x <: 1; or, better, by outputting 0xAAAAAAAA on a buffered port. Doing processing inside the output loop slows the output down; it is better to transfer the data straight from an array to the ports. Using 8-bit ports for the output could be disadvantageous: with a 1-bit buffered port you can transfer 32 pixels in one output.
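A sketch of the buffered-port technique described above, with illustrative pin choices (the real project uses different ports): outputting 0xAAAAAAAA on a 32-bit-buffered 1-bit port toggles the pin on every port-clock tick, so each 32-bit output produces 16 clock periods while a second buffered port streams the data words.

```xc
#include <xs1.h>

// Illustrative pin assignments, not the project's actual ports.
buffered out port:32 p_clk  = XS1_PORT_1A;
buffered out port:32 p_data = XS1_PORT_1B;

void shift_out(const unsigned data[n], unsigned n) {
  for (unsigned i = 0; i < n; i++) {
    // 0xAAAAAAAA = 0b1010... toggles the clock pin each tick.
    p_clk  <: 0xAAAAAAAA;
    // 32 data bits go out back-to-back from the port's buffer,
    // leaving the core free between outputs.
    p_data <: data[i];
  }
}
```

For the edges to line up, both ports would need to run from the same clock block; the exact phase relationship between data and clock still has to be checked with a logic analyser.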

I have analyzed the code of sw_led_sreens. In principle it would still run: the Ethernet module must be swapped out and the data transferred to the correct channel.

Does anyone know a way to use a 1-bit buffered port as a clock source? That is, configure the port as an output, output data on it, and use it as the clock source for another buffered port.

The code below does not work any more. As soon as you select the port as a clock source, with configure_clock_src() or set_clock_src(), no output from that port is possible.

Code: Select all

partout(p_spi_clk, 1, 1);           // drive the clock line high (1 bit)
partout(p_spi_oe, 1, 1);            // drive output-enable high

set_clock_off(b_clk);               // reset both clock blocks
set_clock_off(b_data);
set_clock_on(b_clk);
set_clock_on(b_data);
stop_clock(b_clk);
stop_clock(b_data);
set_clock_ref(b_clk);               // run b_clk from the reference clock

set_clock_div(b_clk, SPI_CLKDIV);
set_port_clock(p_spi_clk, b_clk);   // clock pin driven from b_clk

set_clock_src(b_data, p_spi_clk);   // b_data sourced from the clock pin
set_port_clock(p_led_out, b_data);  // data port clocked by b_data
set_port_clock(p_spi_ltch, b_clk);  // latch clocked by b_clk
//set_pad_delay(p_spi_clk, 5);
start_clock(b_data);
start_clock(b_clk);
//set_thread_fast_mode_on();

That is exactly the problem I now have with the old code: no clock is generated. Is partout_timed, or the @ timestamp syntax, only valid for ports driven from the same clock block, or for all ports? I have the strange case that it also affects other outputs.